On Thu, 7 Oct 2021, Volodymyr Babchuk wrote: > > On Thu, 7 Oct 2021, Volodymyr Babchuk wrote: > >> Hi Stefano, > >> > >> Stefano Stabellini <sstabellini@xxxxxxxxxx> writes: > >> > >> > On Wed, 6 Oct 2021, Oleksandr wrote: > >> >> Hello all > >> >> > >> >> Gentle reminder. > >> > > >> > Many thanks for the ping, this patch fell off my radar. > >> > > >> > > >> > > >> >> On 23.09.21 23:57, Volodymyr Babchuk wrote: > >> >> > Hi Stefano, > >> >> > > >> >> > Stefano Stabellini <sstabellini@xxxxxxxxxx> writes: > >> >> > > >> >> > > On Mon, 6 Sep 2021, Oleksandr Tyshchenko wrote: > >> >> > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx> > >> >> > > > > >> >> > > > Allocate anonymous domheap pages as there is no strict need to > >> >> > > > account them to a particular domain. > >> >> > > > > >> >> > > > Since XSA-383 "xen/arm: Restrict the amount of memory that > >> >> > > > dom0less > >> >> > > > domU and dom0 can allocate" the dom0 cannot allocate memory > >> >> > > > outside > >> >> > > > of the pre-allocated region. This means if we try to allocate > >> >> > > > non-anonymous page to be accounted to dom0 we will get an > >> >> > > > over-allocation issue when assigning that page to the domain. > >> >> > > > The anonymous page, in turn, is not assigned to any domain. > >> >> > > > > >> >> > > > CC: Julien Grall <jgrall@xxxxxxxxxx> > >> >> > > > Signed-off-by: Oleksandr Tyshchenko > >> >> > > > <oleksandr_tyshchenko@xxxxxxxx> > >> >> > > > Acked-by: Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx> > >> >> > > Only one question, which is more architectural: given that these > >> >> > > pages > >> >> > > are "unlimited", could the guest exploit the interface somehow to > >> >> > > force > >> >> > > Xen to allocate a very high number of anonymous pages? > >> >> > > > >> >> > > E.g. could a domain call OPTEE_SMC_RPC_FUNC_ALLOC in a loop to > >> >> > > force Xen > >> >> > > to exhaust all memory pages? 
> >> >> > Generally, the OP-TEE mediator tracks all resources allocated and imposes > >> >> > limits on them. > >> >> > > >> >> > The OPTEE_SMC_RPC_FUNC_ALLOC case is a bit different, because it is issued > >> >> > not by a domain, but by OP-TEE itself. As OP-TEE is a more trusted piece > >> >> > of > >> >> > the system we allow it to request as many buffers as it wants. Also, we > >> >> > know > >> >> > that OP-TEE asks only for one such buffer per every standard call. And > >> >> > the number of simultaneous calls is limited by the number of OP-TEE threads, > >> >> > which is quite low: typically only two. > >> > > >> > So let me repeat it differently to see if I understood correctly: > >> > > >> > - OPTEE_SMC_RPC_FUNC_ALLOC is only called by OP-TEE, not by the domain > >> > - OPTEE is trusted and only calls it twice anyway > >> > >> Correct. > >> > >> > I am OK with this argument, but do we have a check to make sure a domU > >> > cannot issue OPTEE_SMC_RPC_FUNC_ALLOC? > >> > >> domU can't issue any RPC, because all RPCs are issued from the OP-TEE > >> side. This is the nature of RPC - OP-TEE requests Normal World for some > >> service. But of course, Normal World can perform certain actions that > >> will make OP-TEE issue an RPC. I discuss this in depth below. > >> > >> > > >> > Looking at the patch, there are two other places, in addition to > >> > OPTEE_SMC_RPC_FUNC_ALLOC, where the anonymous memory pages can be > >> > allocated: > >> > > >> > 1) copy_std_request > >> > 2) translate_noncontig > >> > > >> > We need to prove that neither 1) nor 2) can result in a domU exhausting > >> > Xen memory. > >> > > >> > In the case of 1), it looks like the memory is freed before returning to > >> > the DomU, right? If so, it should be no problem? > >> > >> Yes, the mediator makes a shadow copy of every request buffer to hide > >> translated addresses from the guest. The number of requests is limited by > >> the number of OP-TEE threads. 
> >> > >> > In the case of 2), it looks like the memory could outlive the call where > >> > it is allocated. Is there any kind of protection against issuing > >> > something like OPTEE_MSG_ATTR_TYPE_TMEM_INOUT in a loop? Is it OP-TEE > >> > itself that would refuse the attempt? Thus, the idea is that > >> > do_call_with_arg will return error and we'll just free the memory there? > >> > >> Well, translate_noncontig() calls allocate_optee_shm_buf() which counts > >> all allocated buffers. So you can't call it more than > >> MAX_SHM_BUFFER_COUNT times, without de-allocating previous memory. But, > >> thanks to your question, I have found a bug there: memory is not freed > >> if allocate_optee_shm_buf() fails. I'll prepare a patch later today. > >> > >> > I cannot see a check for errors returned by do_call_with_arg and memory > >> > freeing done because of that. Sorry I am not super familiar with the > >> > code, I am just trying to make sure we are not offering to DomUs an easy > >> > way to crash the system. > >> > >> I tried to eliminate all possibilities for a guest to crash the > >> system. Of course, this does not mean that there are none of them. > >> > >> And yes, the code is a bit hard to understand, because calls to OP-TEE are > >> stateful and you need to account for that state. From NW and SW this > >> looks quite fine, because state is handled naturally. But the mediator sits > >> in the middle, so its implementation is a bit messy. > >> > >> I'll try to explain what is going on, so it will be easier to > >> understand the logic in the mediator. > >> > >> There are two types of OP-TEE calls: fast calls and standard calls. A fast > >> call is simple: call SMC and get the result. It does not allocate a thread > >> context in OP-TEE and is non-preemptive. So yes, it should be fast. It > >> is used for simple things like "get OP-TEE version" or "exchange > >> capabilities". 
It is easy to handle them in the mediator: just forward > >> the call, check the result, return it back to the guest. > >> > >> Standard calls are stateful. OP-TEE allocates a thread for each call. This > >> call can be preempted either by IRQ or by RPC. For consistency IRQ > >> return is also considered a special type of RPC. So, in general one > >> standard call can consist of a series of SMCs: > >> > >> --> SMC with request > >> <-- RPC return (like IRQ) > >> --> SMC "resume call" > >> <-- RPC return (like "read disk") > >> --> SMC "resume call" > >> <-- RPC return (like "send network packet") > >> --> SMC "resume call" > >> ... > >> <-- Final return > >> > >> There are many types of RPCs: "handle IRQ", additional shared buffer > >> allocation/de-allocation, RPMB access, disk access, network access, > >> synchronization primitives (when an OP-TEE thread gets blocked on a > >> mutex), etc. > >> > >> Two more things that make all this worse: Normal World can register > >> a shared buffer with OP-TEE. Such a buffer can live indefinitely > >> long. Also, Normal World decides when to resume a call. For example, > >> the calling process can be preempted and then resumed seconds > >> later. A misbehaving guest can decide not to resume the call at all. > >> > >> As I said, I tried to take all these things into account. There are > >> basically 3 types of objects that can lead to memory allocation on the Xen > >> side: > >> > >> 1. Standard call context. Besides memory space for struct optee_std_call > >> itself it allocates a page for a shadow buffer, where argument addresses > >> are translated by Xen. The number of these objects is limited by the number of > >> OP-TEE threads: > >> > >> count = atomic_add_unless(&ctx->call_count, 1, max_optee_threads); > >> if ( count == max_optee_threads ) > >> return ERR_PTR(-ENOSPC); > >> > >> 2. Shared buffer. This is a buffer shared by the guest with OP-TEE. 
It can > >> be either a temporary buffer, which is shared for the duration of one standard > >> call, or a registered shared buffer, which remains active until it > >> is de-registered. This is where translate_noncontig() comes into play. > >> The number of these buffers is limited in allocate_optee_shm_buf(): > >> > >> count = atomic_add_unless(&ctx->optee_shm_buf_count, 1, > >> MAX_SHM_BUFFER_COUNT); > >> if ( count == MAX_SHM_BUFFER_COUNT ) > >> return ERR_PTR(-ENOMEM); > >> > >> 3. Shared RPC buffer. This is a very special kind of buffer. Basically, > >> OP-TEE needs some shared memory to provide RPC call parameters. So it > >> requests a buffer from Normal World. There is no hard limit on this from > >> the mediator side, because, as I told earlier, OP-TEE itself limits the number > >> of these buffers. There are no cases when more than one buffer will be > >> allocated per OP-TEE thread. This type of buffer is used only to process > >> RPC requests themselves. OP-TEE can request more buffers via RPC, but > >> they will fall to p.2: NW uses a separate request to register a buffer and > >> then returns its handle in the preempted call. > >> > >> > >> Apart from those two limits, there is a limit on the total number of pages > >> shared between the guest and OP-TEE: MAX_TOTAL_SMH_BUF_PG. This > >> limit is for the case when a guest tries to allocate a few really BIG buffers. > >> > >> > >> > It looks like they could be called from one of the OPTEE operations that > >> > a domU could request? Is there a limit for them? > >> > >> Yes, there are limits, as I described above. > >> > >> Also, bear in mind that resources available to OP-TEE are also quite > >> limited. So, in case of some breach in the mediator, OP-TEE will give up > >> first. This of course is not an excuse to have bugs in the mediator... > > > > OK, thanks for the explanation. The reason for my questions is that if > > the allocations are using the memory of DomU, then at worst DomU can run > > out of memory. 
But if the allocations are using anonymous memory, then > > the whole platform might run out of memory. We have issued XSAs for > > things like that in the past. > > > > This is why I am worried about this patch: if we apply it we really > > become reliant on these limits being implemented correctly. A bug can > > have much more severe consequences. > > Agree. > > > As you are the maintainer for this code, and this code is not security > > supported, I'll leave it up to you (also see the other email about > > moving optee to "supported, not security supported"). > > Yes, I've seen this email. Just didn't have time to write a followup. > > > However, maybe a different solution would be to increase max_pages for a > > domain when optee is enabled? Maybe just by a few pages (as many as > > needed by the optee mediator)? Because if we did that, we wouldn't risk > > exposing DOS attack vectors for every bug in the mediator limits checks. > > > > The below adds a 10 pages slack. > > Well, I didn't know that such an option is available. If this is a valid > approach and there are no objections from other maintainers, I'd rather > use it. I think it is a valid approach, and it is "more secure" than the other patch. I suggest that you send a patch for it so that people can voice their objections, if any. > My only comment there is about the number of pages. The maximal number of > domheap pages used per request is 6: one for the request itself, one for the RPC > buffer, 4 at most for request arguments. I checked the OP-TEE configuration, > looks like some platforms allow up to 16 threads. This yields 6 * 16 = 96 > pages in total. If this is acceptable I'd set TEE_SLACK to 96. OK. Please add a good in-code comment explaining how you got to 96. 
> > diff --git a/xen/arch/arm/tee/tee.c b/xen/arch/arm/tee/tee.c > > index 3964a8a5cd..a3105f1a9a 100644 > > --- a/xen/arch/arm/tee/tee.c > > +++ b/xen/arch/arm/tee/tee.c > > @@ -38,8 +38,11 @@ bool tee_handle_call(struct cpu_user_regs *regs) > > return cur_mediator->ops->handle_call(regs); > > } > > > > +#define TEE_SLACK (10) > > int tee_domain_init(struct domain *d, uint16_t tee_type) > > { > > + int ret; > > + > > if ( tee_type == XEN_DOMCTL_CONFIG_TEE_NONE ) > > return 0; > > > > @@ -49,7 +52,16 @@ int tee_domain_init(struct domain *d, uint16_t tee_type) > > if ( cur_mediator->tee_type != tee_type ) > > return -EINVAL; > > > > - return cur_mediator->ops->domain_init(d); > > + ret = cur_mediator->ops->domain_init(d); > > + if ( ret < 0 ) > > + return ret; > > + > > + /* > > + * Increase maxmem for domains with TEE, the extra pages are used by > > + * the mediator > > + */ > > + d->max_pages += TEE_SLACK; > > + return 0; > > } > > > > int tee_relinquish_resources(struct domain .
https://lists.xenproject.org/archives/html/xen-devel/2021-10/msg00325.html
Hi all! Today we would like to show you the brand new renaming feature for the next NetBeans release. Most of you definitely use the common standard that every class is in its own file and the file name is the same as the class name. It means that if you have a class Foo it will surely be in a file named Foo.php. But when you rename that class you surely want to rename its file as well. And NetBeans, finally, allows you to do that automatically :-) Just check one checkbox during the refactoring process and that's all. So here are some screenshots: Type the new name and check the Rename Also File With the Declaration checkbox. Confirm the refactoring and look at the new file name :-) And that's all for today and as usual, please test it and if you find something strange, don't hesitate to file a new issue (product php, component Refactoring). Thanks a lot! Good feature. Will the checkbox automatically be checked the next time a refactoring occurs? No, it has to be rechecked. Will it use a lowercase filename if the previous one is? An autoloader with strtolower($class) in it... No, the name of the file is exactly the same as for the class. But it's a good enhancement. You can write it to our bugzilla [1]. Thanks! [1] I do back the request Robert made. Names of classes and namespaces are not case-sensitive in PHP. Thus you should not rely on the class name being given in the "right" casing. If this is your convention, better be aware that you are introducing a new principle into the language that has never been a part of the language definition and never will be. In PHP there is by definition no difference in writing "new foo()" instead of "new Foo()" and no development environment will warn or force you to prefer the one over the other. What would be even better though is a feature for refactoring namespaces. My colleagues have asked me several times for that now because it would be so useful. However: this includes that you would need to assume a certain behavior of the auto-loader. 
So to implement this feature properly you would have to add some project-wide settings that describe what your auto-loader does. Usually that would be (at least) a "root-directory" and a "style of file- and folder names" (case-sensitive or not). But since you can have multiple auto-loaders within the same project you may want to bind those settings to a namespace. > be aware that you are introducing a new principle into the language Definitely not. Classes with the same name as their files (including casing) is a widely used style. But I'll try to make some improvements: to let NB remember the last used option (rename the file too or not) and add a checkbox to rename the file with a lowercase name. I just pushed those changes, so the lowercase option will be in the dev build soon.
https://blogs.oracle.com/netbeansphp/improved-type-renaming
Hi, I am making a program that will find the average of whatever you enter, but I am having trouble with a part in my program. My program asks the user how many numbers they are going to enter. Then I store that number in unsigned int choice. Now, I have another function that needs to use the value in choice. I have tried making choice global, and have tried pointers (not sure if I was doing it right though..). I am getting errors such as

c:\Documents and Settings\default\My Documents\Visual Studio Projects\Average Finder\Average.cpp(54): error C2057: expected constant expression

and other errors because it can't read the choice value. Here is some of my code to better understand my situation.. Right now I have choice as a global variable. If someone could help me it would be greatly appreciated :)

Code:
#include <iostream>
#include <stdio.h>
#include <windows.h>

using namespace std;

int average();

unsigned int choice = 1; // Is used to hold the numbers wanted to average

int main()
{
    double* nums = NULL; // This will hold the average array
    cout << endl << endl << "\t\t\tAverage Finder v1\n\n\n"; // Prints average finder
    cout << "How many numbers do you want to find the average of?: ";
    cin >> choice;
    .........
    average();
    return 0;
}

int average()
{
    int avg[choice]; // im trying to use the value in choice to set the arrays size
    .......
}

Thanks! -Optix
http://forums.devshed.com/programming-42/calculate-average-prob-44803.html
C Exercises: Check a given integer and return true if it is within 10 of 100 or 200

C-programming basic algorithm: Exercise-4 with Solution

Write a C program to check a given integer and return true if it is within 10 of 100 or 200.

C Code:

#include <stdio.h>
#include <stdlib.h>

int test(int x); /* declare before use so the call in main() type-checks */

int main(void)
{
    printf("%d", test(103));
    printf("\n%d", test(90));
    printf("\n%d", test(89));
    return 0;
}

int test(int x)
{
    if (abs(x - 100) <= 10 || abs(x - 200) <= 10)
        return 1;
    return 0;
}

Sample Output:

1
1
0
https://www.w3resource.com/c-programming-exercises/basic-algo/c-programming-basic-algorithm-exercises-4.php
Easy use of the GroupMe API with Python

Project description

PYGROUPME VER 0.0.1

A simple Python wrapper for the GroupMe API that lets users interface with Microsoft's GroupMe messaging platform via the requests module. I created this after realizing that there is no existing wrapper for the API in Python, and typing the code necessary to interface with GroupMe repeatedly is exhausting. In order to interact with GroupMe, users first need to have a GroupMe account and a designated GroupMe group, then register a bot at and provide the program with its bot id to send messages and/or the token code to receive messages.

Example usage:

import pygroupme as pgm
pgm.gm_msg(bid=BOT_ID_HERE, text="Hello, world!") # sends "Hello, world!" to the GroupMe chat in which the user has set up his GroupMe api interface
message = pgm.gm_rec(token=USER_TOKEN_HERE, gid=USER_GID_HERE, limt=n) # receive the last n messages from a GroupMe chat, default is 1
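The kind of request a wrapper like this issues can be sketched as follows. The endpoint shown is GroupMe's public v3 bot-post URL; the function names (`build_bot_payload`, `gm_msg`) are illustrative and not necessarily pygroupme's internals.

```python
GROUPME_API = "https://api.groupme.com/v3"

def build_bot_payload(bot_id, text):
    """The bot-post endpoint expects a JSON body carrying the bot id and message text."""
    return {"bot_id": bot_id, "text": text}

def gm_msg(bid, text):
    """Post a message to the group the bot is registered in (sketch)."""
    import requests  # third-party dependency, imported lazily so payload code stays testable
    return requests.post(GROUPME_API + "/bots/post",
                         json=build_bot_payload(bid, text))
```

Separating payload construction from the network call keeps the wrapper easy to test without hitting the API.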
https://pypi.org/project/pygroupme/0.0.2/
1. We are here at QCon London 2012 and I am sitting here with Rich Hickey the creator of Clojure and Datomic and we thought we could catch up with Rich on the state of everything Clojure. So Rich what is next for Clojure? The next thing for Clojure will be sort of a maintenance release, version 1.4. The primary thing there is bug fixes and probably the major surface enhancement people will see is the extensible reader support. So an extensible reader has long been asked for, what people wanted was something a lot like what Common Lisp used to do which is essentially give you reader macros which just allow you to commandeer some of the characters and say: "This means this" and I’ve resisted it throughout the history of Clojure because I wanted Clojure to be a strong library language and that capability doesn’t allow for interoperability. If I say "This character means this" and you say "That character means that" then those two files and applications that are written that need to use both those things now can’t be composed. So I thought about the problem a lot and the answer I came up with was to do an extensible reader where you use tags to identify novel things that can be put into data structures that you can read and because those tags are namespaced now they are free of clashes so there will be a few tags that don’t have namespaces which will be built-in. So for instance the Clojure reader will gain the ability to read UUIDs with this enhancement and time stamps. It’s an open system in that you can define your own tag with a namespace qualifier and then have that map to any reading function that you want. So by combining a tag and anything that can already be read, so you can tag a vector literal or tag a map literal or tag a symbol or string, you can enhance what can be read and I think this is the right way to do this. It’s an open system that is extensible, but that is also interoperable. 
So I am very excited about it, for instance one of the things that is neat about it is prior to having this there was a way to, for instance, embed in a reader call to a constructor, construct the date when it was read and the problem you have is for instance when you have something like ClojureScript, now this isn’t really a transportable data format anymore, so we’d like it to be possible to write something, write a UUID from Java and read it in ClojureScript and each have their own independent representations of that but understand the semantics of what was transferred. So I think it’s a great feature, not everyone understands the power of it but it’s going to be a critical interoperability feature for Clojure and ClojureScript. 2. So you’ve given up the total power of being the only one who writes reader macros? Well it’s not reader macros, it’s still less than reader macros. People who wanted reader macros will probably still be upset, but I think it’s a good solution that brings you a lot of the capabilities you’d be looking for there, probably not as succinctly as you can do with reader macros, but you can write extensible data records and transfer stuff around without transferring Java implementation details around and I think that is an important power that we were missing so far. 3. So these tags are like prefixes to the data, is that it? Yes, it’s just a symbol prefixed by a hash; tags for extensions. Let’s say you have a handle and you say #mycompany.handle and then if you want to use maps to represent your handles or numbers then you could just put a map or a number. So the tag qualifies the next thing read and that causes your reader function to read it and interpret it as whatever you would like. And you can plug those in both the handlers and the tags. 4. And the reader function returns a Clojure data structure. It returns whatever data structure you want, so you could read something in and get back a java.util.Date if you wanted, anything you want. 
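The tag mechanism Hickey describes can be sketched in Clojure notation. The custom tag and reader function below are illustrative (echoing the "#mycompany.handle" example from the interview, not real API names); note also that `clojure.edn/read-string` with a `:readers` map arrived slightly later, in 1.5, while in 1.4 custom tags were wired up through a data_readers.clj file mapping tag symbols to reader vars.

```clojure
;; Built-in tags (no namespace) the reader gained:
#inst "2012-03-07T12:00:00.000-00:00"           ; read as a timestamp
#uuid "f81d4fae-7dec-11d0-a765-00a0c91e6bf6"    ; read as a UUID

;; A namespaced tag of your own, applied to a map literal:
;;   #mycompany/handle {:id 42, :name "example"}

;; Plugging in a reader function for that tag (illustrative):
(require '[clojure.edn :as edn])

(edn/read-string
  {:readers {'mycompany/handle (fn [m] (select-keys m [:id :name]))}}
  "#mycompany/handle {:id 42, :name \"example\"}")
;; => {:id 42, :name "example"}
```

Because the tag travels with the data rather than a concrete class name, a Java reader and a ClojureScript reader can each map the same tag to their own representation, which is the interoperability point made above.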
And of course depending on the reader, if for instance, dates are a good example and maybe you have two applications and one is written in Clojure and one is written in Java and you have a reader on the Java side so that you can use it as a transfer medium. Maybe you are manipulating dates with Joda Time in one application and java.util.Date in another. You write date, now you are in a reader form that doesn’t dictate what type represents it and then when you read it you can have a different thing plugged in that reads it as something else and that is what is important, I think, to isolate what you are putting in the data files or in the data stream from the representations that end up coming up in memory. 5. I think people will love that and you have solved the problem of messing with the global syntax. So that is actually a really big feature. You are underselling it. So that is for Clojure and people often tell me to ask you about Pods. So what are Pods? Pods are an example of talking about something before you’ve made it, a feature you are going to be asked about forever, which I actually normally don’t do and because I did that and it’s actually shown up how long the cycle is around getting these ideas completed, I would say that the work I have done on Datomic has informed my thinking about Pods and probably will enable me to complete them, but I hate to continue to talk about them because I haven’t made them yet. 6. So it needs a few more hammock cycles. So something we can talk about is ClojureScript. You announced that last year, and have you regretted targeting JavaScript yet? Not at all. I think it’s hot. 7. So what have you been doing with ClojureScript in the last half year? Have you been improving it, adding features? 
I personally haven’t had a lot of time to work on it after I got the core compiler going, but there is a whole sub community oriented around it and a bunch of people on the Clojure core team who are really enthusiastic about it and have been moving forward on the implementation, the libraries and the tooling. So there is a lot of activity around ClojureScript and I am really impressed with what they are doing. 8. It’s still a subset of Clojure? That is right. It’s definitely designed to be Clojure and it’s a substantial subset. It doesn’t have concurrency because JavaScript engines don’t have concurrency, but it is Clojure and people have moved nontrivial amounts of Clojure code there without changes. So yes, it’s Clojure on the JavaScript engine, the parts that make sense. 9. So you mentioned concurrency which doesn’t exist in JavaScript. So what are the big things that you miss in JavaScript; I suppose you miss numbers, proper integers? Yes. We tried to fix up some of that, we can’t fix all of it. But it is important that ClojureScript implements Clojure semantics, so in many ways it’s a dramatic improvement over using JavaScript because the semantics there are difficult to ascertain. You don’t miss much. Actually it was a very exciting thing to build ClojureScript because it was the first time we were able to implement the abstractions of Clojure using Clojure’s own protocols and deftype and so ClojureScript is the best example so far of Clojure in Clojure, being written entirely in Clojure and using deftype and protocols at the bottom. So the ClojureScript implementation is actually very exciting because as you want to enhance it, you have protocols for all those fundamental things, where the Clojure implementation is still a hybrid of Java and Clojure. 10. So it’s basically a big proof of concept of protocols? 
It definitely is a proof that they were viable for implementing the abstractions, that they were performant enough for that and that they were independent of the JVM as abstractions, which they moved right over, and of course the implementation is dramatically different on the JavaScript engine than otherwise. So it’s very exciting and when I get more time to get back to it I anticipate that the work we did on ClojureScript will inform Clojure in Clojure substantially, if not be the starting point for it. 11. So you are still aiming for actual Clojure in Clojure to appear at some point. So now you are asking me to talk about something I haven’t made yet. Yes, I think it’s inevitable that that happens. It’s a matter of someone having the time to work on it. 12. But you have laid the foundations and proven it works? So another thing that was released, I think, in the Clojure community was something called Avout, which is about distributed STM, is that one way to explain it? I think so, yes, I think ClojureScript is a good proof of that. The idea behind Avout was to, it’s David Liebke's work. He wanted initially to make a library for Zookeeper for Clojure and once he had done that he would end up with locks, and I said: "Distributed locks are as hard or harder to use than locks are in process, could you take it to the next level, could you take Clojure’s abstractions around state management in the reference system, so Atoms and Refs, and put that again on top of that infrastructure?" and that is what he did. So Avout is fundamentally delivering the Clojure state abstractions in a distributed context and that is the idea. So it has Refs and Atoms running against not only Zookeeper now but other kinds of coordination engines and other kinds of storage engines, where you put the values that you are synchronizing, you can swap out. So it’s very interesting. 13. So it’s basically for distributing Clojure, is that sort of the solution? No. 
I think when people move to something like Zookeeper usually it’s a very small part of their application, but one that they desperately need to be distributed like that, like it would be process coordination or any kind of coordination or mastership of things or shared small things. You don’t want to pile a whole bunch of state in something like Avout, it’s not a database. But it can be used for those particular kinds of things and it’s one of those things like when you need it, you need it and to have something that works reliably with those nice semantics I think it is a good value proposition. Hopefully you don’t need it that often. But that is true of references in Clojure too. People are surprised actually at how infrequently they need them. So that is what you want, you want to not need it very frequently and when you need it you want it to be sound.
https://www.infoq.com/interviews/hickey-clojure-reader/
Regular Expressions in Java

Java's java.util.regex package (the Pattern and Matcher classes) can be used to find a digit string in a given alphanumeric string.

Quantifiers:

re* : Matches 0 or more occurrences of the preceding expression.
re+ : Matches 1 or more occurrences of the preceding expression.
re? : Matches 0 or 1 occurrence of the preceding expression.
re{n} : Matches exactly n occurrences of the preceding expression.
re{n,} : Matches n or more occurrences of the preceding expression.
re{n,m} : Matches at least n and at most m occurrences of the preceding expression.
a|b : Matches either a or b.
(re) : Groups regular expressions and remembers matched text.
(?:re) : Groups regular expressions without remembering matched text.
(?>re) : Matches an independent pattern without backtracking.
\n : Back-reference to capture group number "n".
\b : Matches word boundaries when outside brackets.

Index methods:

public int end(): Returns the offset after the last character matched.
public int end(int group): Returns the offset after the last character of the subsequence captured by the given group during the previous match operation.

Study methods: Study methods review the input string and return a Boolean indicating whether or not the pattern is found.

Replacement methods:

public static String quoteReplacement(String s): Returns a literal replacement String for the specified String. This method produces a String that will work as a literal replacement for s in the appendReplacement method of the Matcher class.
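As a concrete version of the digit-search example the article refers to (the input string and class name here are chosen for illustration), combining Pattern, Matcher.find(), and the index methods start() and end():

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DigitFinder {

    // Returns the first run of digits in the input, or null if there is none.
    public static String firstDigits(String input) {
        Matcher m = Pattern.compile("\\d+").matcher(input);
        return m.find() ? m.group() : null;
    }

    public static void main(String[] args) {
        Matcher m = Pattern.compile("\\d+").matcher("order AB-1234-X");
        if (m.find()) {
            System.out.println(m.group());  // the matched digit string: "1234"
            System.out.println(m.start());  // offset of the first matched char: 9
            System.out.println(m.end());    // offset after the last matched char: 13
        }
    }
}
```

Note that end() reports the offset one past the last matched character, so `input.substring(m.start(), m.end())` reproduces exactly the text returned by group().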
http://www.lessons99.com/regular-expressions-in-java.html
I'm kind of a coding newb but I managed to make this. It's supposed to do this:

Code:
import java.util.Scanner;

public class ReactionGame {

    public static void main(String[] args) {
        int Time = 0;
        String s = null;
        @SuppressWarnings("resource")
        Scanner NewScanner = new Scanner(System.in);
        System.out.println("Type K and press enter when STOP appears");
        Delay(1000);
        System.out.println("STOP");
        while (s.equalsIgnoreCase("k")) {
            String k1 = NewScanner.nextLine();
            s = k1;
            Time++;
        }
        System.out.println("Your time was:" + Time);
    }

    private static void Delay(int i) {
        try {
            Thread.sleep(100L);
        } catch (Exception e) {}
    }
}

Say: when STOP appears, enter k and Enter! Then after some time STOP appears and you must type in K + Enter. Then it will show your score... But it gives this error:

Type K and press enter when STOP appears
STOP
Exception in thread "main" java.lang.NullPointerException
at ReactionGame.main(ReactionGame.java:14)

I don't know what I did wrong and even Eclipse doesn't give me an answer.

Kind regards,
Niels
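The NullPointerException comes from `s.equalsIgnoreCase("k")` being called while `s` is still null; the loop condition is also inverted (it should loop *until* "k" is typed, not while it equals "k"). A sketch of the fix, with the loop pulled into a helper so it can be demonstrated with simulated input (class and method names here are my own, not from the post):

```java
import java.util.Scanner;

public class ReactionGameFixed {

    // Count how many lines are read until the user types "k" (case-insensitive).
    public static int countUntilK(Scanner in) {
        String s = "";      // initialize to non-null so equalsIgnoreCase is safe
        int time = 0;
        while (!s.equalsIgnoreCase("k")) {  // loop until "k", not while it equals "k"
            s = in.nextLine();
            time++;
        }
        return time;
    }

    public static void main(String[] args) {
        System.out.println("Type K and press enter when STOP appears");
        System.out.println("STOP");
        // Simulated input for demonstration; use new Scanner(System.in) for real play.
        Scanner in = new Scanner("a\nb\nK\n");
        System.out.println("Your time was: " + countUntilK(in));
    }
}
```

Two smaller issues in the original are worth noting as well: `Delay(1000)` ignores its argument and always sleeps 100 ms (it should be `Thread.sleep(i)`), and counting loop iterations measures attempts rather than elapsed time; `System.currentTimeMillis()` before and after the loop would give an actual reaction time.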
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/37072-some-problem-main-printingthethread.html
CC-MAIN-2016-36
The Real-World State of Windows Use 374 snydeq writes "Performance and metrics researcher Devil Mountain Software has released an array of real-world Windows use data as compiled by its exo.performance.network, a community-based monitoring tool that receives real-time data from about 10,000 PCs throughout the world. Tracking users' specific configurations, as well as the applications they actually use, the tool provides insights into real-world Windows use, including browser share, multicore adoption, service pack adoption, and which anti-virus, productivity, and media software are most prevalent among Windows users. Of note are the following conclusions: two years after Vista's release, not even 30 percent of PCs actually run it; OpenOffice.org is making inroads into the Microsoft Office user base; and despite the rise of Firefox, Internet Explorer remains the standard option for inside-the-firewall apps." Spyware (Score:3, Insightful) The Windows Sentinel app: When they sell your info it's spyware When they post it on slashdot it's a community-based monitoring tool Re:Spyware (Score:5, Insightful) Re:Spyware (Score:5, Insightful) Yes, that is the way the world works, correctly. Just like the difference between you giving some guy on the street money, and the same one stealing your wallet. One is called robbery, one isn't. Complicated world we live in, isn't it? Re: (Score:3, Insightful). Browser use isn't exclusive (Score:3, Informative) At least Hulu lets me use Firefox. Re:Browser use isn't exclusive (Score:5, Informative) Re: (Score:3, Insightful) Re:Browser use isn't exclusive (Score:4, Informative) Re: (Score:2) I don't use the Netflix streaming much, but when I do I just boot a Windows XP image in VirtualBox and watch it. The performance of VirtualBox is good enough that this works great. There may be a more "native" way to make this work in Linux, but to be honest, it simply isn't worth my time since the virtualization is so easy to setup. 
Re:Browser use isn't exclusive (Score:4, Funny) Surely you mean downgraded, right? Re: (Score:3, Insightful) Dang, now he has to admit he actually likes the little rascal Re: (Score:2) I use mostly Firefox but when I want to watch a movie on Netflix I have to use IE. The same with Netlibrary. The latest firmware upgrade to my Samsung Blu-Ray player added support for Blockbuster and YouTube to Netflix and Pandora. The only thing missing really is the ability to browse the Netflix catalog through the player. I have to ask why I am using a PC for media play when a set up like this is so easy to live with. Netlibrary (Score:2) Windows as a Real World State? (Score:4, Funny) Since it's not, I'll make it up: bloated, past its prime, and fueled entirely by the force of its own inertia. Re: (Score:2) Re: (Score:3, Funny) Re: (Score:2) Re: (Score:2) Re: (Score:2) I agree... oh wait, NO.. I mean I disagree... er. Re: (Score:2) Sorry, no. The USA is an older country than almost all the countries in Europe, except perhaps Britain (which kinda 'morphed' into its current state). All the European countries are quite young, and only date back to WWII in most cases. Before that, many of them were still kingdoms. Or are you going to try to argue that, for instance, Croatia is an old country, even though it didn't exist only 20 years ago? Re: (Score:2, Insightful) Re: (Score:2) So you're saying that Croatia magically became a new country when it split off from Yugoslavia, but Germany is the same country it was when it was run by Kaiser Wilhelm? So how old are Austria and Hungary in your opinion, since neither of those existed independently before WWI? Is Italy 2500 years old in your opinion? How old is the USA in your opinion, since the land certainly existed before Europeans found it, and was inhabited by people as well. It's simple: countries are "born" when they adopt a new f Re: (Score:2, Interesting) Constitutional REPUBLIC. 
The word "Democracy" appears NOWHERE in the Declaration of Independence, or the US Constitution. WE ARE NOT A DEMOCRACY!! WE ARE A CONSTITUTIONAL REPUBLIC! Learn your history, and get it straight for Christ's sake! US Constitution, Article IV, Section 4: The United States shall guarantee to every State in this Union a Republican Form of Government, and shall protect each of them against Invasion; and on Application of the Legislature, or of the Executive (when the Legislature cann Re: (Score:3, Informative) Here is a map of europe in 1900 [google.com] Re: (Score:3, Informative) Heh. Any full blooded Croatian will tell you that Croatia is 1300 years old. Or, if you wanted to split hairs, 1100-- the first king was crowned in 925. Re: (Score:3, Informative) Yes of course it's an old country, because it's not like it popped into existence out of nowhere suddenly 20 years ago. There's a ton of history that happened to the same people living there and while governments might change and even constitutions that doesn't mean that the country ceases to exist Or are you going to argue that Egypt is only 87 years old? Let's just not look at the Pyramids, because they didn't exist 87 years ago? I find it even more impressive that they knocked up those suckers in such a s Re:Windows as a Real World State? (Score:5, Funny) Re: (Score:3, Informative) The amount of forms and signatures needed to conduct even the simplest actions is totally hallucinogenic. Case in point: I recently flew in to Brussels, and we had a group of American tourists on board who needed to be reassured be the airline staff that they didn't need to sign any forms, or deposit any money or fingerprints to be allowed to enter the country... Re: (Score:2) Most of the European countries are less than 80 years old... This must be some strange definition of "most" that I (and most of the rest of the english-speaking world) am unfamiliar with. 
If you said most eastern European countries, then I'd probably have to agree with you on a technicality, simply because a lot of them radically reconfigured their governments and renamed themselves after the fall of the iron curtain. But Western Europe? Not a chance. England has been a country for nearly 1100 years. Even Finland, which is a recent one, is 90 years old, when you just Re: (Score:2, Funny) The nation of the United States of Microsoft? Fascist dictatorship and economy, views and opinions forced on civilians (users of the product). Enemies are The People's Republic of Mac OSX and FinnLinuxLand, but sometimes The NetherBSDlands. The currency is the WGA check, without it you are a dirty no good software pirate. Blue Screen of Deaths are common but unreported and the government denies knowledge of it, but keeps asking citizens to install service packs or buy the new version of the nations operating sy Re: (Score:3, Insightful) We really need a -1 I want to kick you in the throat Please refrain from reckless use of analogies. Or a -1 Stop Requesting New Negative Mods. Re: (Score:2) Since it's not, I'll make it up: bloated, past its prime, and fueled entirely by the force of its own inertia. Ah, yes much like the first Galactic Empire. Re: (Score:2) Re: (Score:2) Well... would you consider Hell as a country? Have associated enough related names to be considered there (bsod, dll hell, ping of death, bill gates, etc). To be fair, unix should belong to that country too, is full of daemons (so much that the logo of one is a small devil), and when they get angry they dump cores... but dont lose hope, seems that that general area is getting cool enough to get penguins happy. Let's not forget that bit about eating from the forbidden Apple tree. Representative? (Score:5, Insightful) This is only representative of the 10,000 PCs running this software downloaded from InfoWorld, it would appear. 
This doesn't sound like it has anything to do with the "real world" unless you think that the subset of Infoworld's readers who would download this software are somehow representative of the broader Windows population. Re:Representative? (Score:5, Interesting) Re: (Score:2) Even more ignored are the machines running in a totally isolated or "specialty" environment such as kiosks, point-of-sale, order taking, and other closed (but not imbedded) systems. I know of a chain of pizza shops still running DOS boxes (and doing a great job!). I would bet that there are no HTPCs in this survey. Re: (Score:2, Insightful) Totally agree. And even whether it's 20K PCs, as the linked article says, I'd still not represent anything... You don't understand statistics, do you? You have 170,000+ computers. Great. That is not a random sample. A random sample of 10,000 computers is enough to generate a confidence greater than 95%. It doesn't matter how many computers there are in total. Whether it is 1 or 100 billion or 100 million billions. 10,000 randomly chosen samples give you more than 95% certainty. There's a reason to doubt Need to retake to Introduction to Statistics .... (Score:5, Insightful) Of note is the fact that, two years after Vista's release, not even 30 percent of PCs actually run it No, not even 30% of the subset of PCs with this performance-monitoring software run it. In order to claim that not even 30% of PCs run Vista, you would need to establish that the sampling method is not biased, which is a pretty implausible claim. It would not surprise me if the subset of technically savvy PC users are biased towards XP and that subset of "Windows is what comes on the computer from the store" have whatever the store put on it. Re: (Score:2, Insightful) Re: (Score:2, Insightful) Re: (Score:3, Insightful) Re: (Score:3, Insightful) Irrelevant. Also irrelevant. Well.... not to offend any Linux users, or Mac users, but those operating systems are entirely irrelevant.
For business, I only use Linux. Specifically, CentOS 5 and Ubuntu. Personally, I use Linux on all my machines. Re: (Score:3, Insightful) I am saying that Linux and Mac are irrelevant to the discussion of whether or not Vista is a failure. Vista is not really competing with them. It was always competing with XP. New computer sales of course have Vista on them which skews the numbers a bit. You need to remember how pissed off HP and Dell were about the constant nagging to downgrade from Vista back to XP. Working professionals had unpleasant experiences and were constantly asking people such as myself to give them back their XP. Downgrading Re:Need to retake to Introduction to Statistics .. (Score:4, Interesting) You know, it is hard to say. I am having a hard time finding old market share data but consider this. Windows XP was released in October 2001 and XP Service Pack 2 was released in August(?) 2004. That is a 2 year, 10 month gap from Release to the Service Pack that made it a decent operating system. Most people I knew were afraid of XP before SP2 came out and were not budging from the (By that point) rock solid Windows 2000. Vista was released when? January 2007 or something like that? Here we are 1 year and 8 months into the Vista experiment (With a successor on the horizon...when XP SP2 was released I don't even think there was any information on the next windows version being bandied about). Yet, Vista still has achieved a 30% market share, apparently. I'd have to guess that pre-SP2 XP would not have been much higher than 30% despite an additional year of availability...and that is with the absolutely horrendous publicity that Windows Vista got. I would think that the numbers would suggest to Microsoft that it did pretty well. What the hell? (Score:5, Insightful) If you have to sign up to be a part of the data gathering, it is NOT real-world usage, as the other billions of us out there haven't signed up. And another thing. 
The summary quotes a number of 10,000 sampled machines, yet the number in the first link says 20,000. Which is it, boys? A +/- variation of 50% in something as simple as the number of machines sampled leads me to believe there more then likely other errors. Re: (Score:2) so for all we know, the sample size could've been 0. Re: (Score:2, Funny) A +/- variation of 50% in something as simple as the number of machines sampled leads me to believe there more then likely other errors. actually, that's a +/- 100% variation, for people who saw the 10,000 number first. so for all we know, the sample size could've been 0. So that means that the study is about as reliable as an average Slashdot poster. Great... =P Re: (Score:2) If you click on the graphs to go to the xpnet.com site, you'll see that it is reporting 10,270 active clients. That number was last updated 2 days ago, apparently. Re: (Score:2) A +/- variation of 50% in something as simple as the number of machines sampled leads me to believe there more then likely other errors. I want to know where I can by the anti-virus software from Unknown - it seems to be the most popular by a long shot, Re: (Score:2, Insightful) Re: (Score:2) Can someone provide some sort of evidence of a non-Windows OS machine being part of a botnet? If not, I think your idea might produce slightly skewed results. "Windows market share seems to have reached an all time high of 100%(+/- 3%), based on current data provided by surveyed Botnet operators. The Botnet operators cautioned that 3% of respondents failed to respond, and that any data should take this into account." Legacy Software (Score:5, Insightful) 10,000 PCs is a small sample size, try a few million. You might have a sampling error there if they are not randomly picked. One reason why Windows Vista has not caught on is that older hardware won't run it, like my Father's Pentium 4, 512M, Windows XP Home System, it is not listed as Vista compatible and fails the Vista upgrade check. 
The memory cannot be upgraded to more than 512M due to motherboard limitations, and the video is not Aero compatible and there is no video slot to upgrade it. I doubt it will run Windows 7 either. Trying to force a Windows Vista install on it will mean that it will run slowly (512M is the minimum I know, but with that size memory Vista runs slow) and some features would be disabled. My own laptop a Compaq Presario F700 series came with Windows Vista Home Premium on it, but it caused random lockups that Microsoft blamed on Compaq, and Compaq blamed on Microsoft, and after going in circles trying to get help I downgraded it to an OEM copy of Windows XP Pro that works without any problems at all. But I have a Windows 7 Pro upgrade coming in October to try it out. Hoping that if Windows 7 stinks as much as Vista did, that I can go back to XP Pro. On the other hand Fedora 11 works with the wireless card and it would make a good Linux based laptop when XP retires and there is no more updates for it. I just wish that Visual BASIC 2008/2005 works with WINE, because currently it does not, and I need to keep my VB skills up to date for possible jobs or contract work. Something about needing the BITS service installed to install the software. Otherwise the outdated and ancient Visual BASIC 6 works in WINE, but hardly anyone calls for VB 6.0 skills anymore. Re: (Score:3, Informative) Re: (Score:2) I have this pres presario at work that I need to fix for someone. Installing xp with the sata drivers rolled in works, but as soon as it boots it bluescreens somewhere along the 2nd HD based setup phase. When it reboots it boots into safe mode and installs fine, and allows a normal login, but as soon as I launch IE to get windows updates IE crashes and so does just about everything else, but it doesn't bsod. I think its like the V6000 laptop. Anyone know anything? 
(I know this is OT, so I apologize in advan Re: (Score:3, Insightful) 10,000 is a very large sample size, and adequate for almost anything. Vista's not *so* rare that it won't show up on a sample of 10,000. However your point about random sampling is valid, although it would be just as big a problem with a sample of 10 million. This is a self-selected sample, so is highly likely to suffer from this a great deal. Re:Legacy Software (Score:5, Insightful) The size of the sample is fine, the method for picking the same is the main issue. 10,000 PC's is well over enough to get a +/- 2% error if the sample is random. Re: (Score:2) I have a Thinkpad X30 (1.2GHz P3M, 512M) with an old, slow 20G disk in it (original died). It runs W7 better than it did XP, with a minor caveat that you'll suffer slightly longer load times for everything. However, it's more responsive and less prone to the "minor IO = mouse glitching" problems of XP. and as usual the StatisticAreFlawed crowd is here (Score:3, Insightful) and they aren't right. They aren't even wrong. AMD v. Intel (Score:2) That's quite interesting. The graph shows that about 25% of systems runs on AMD CPUs. Frankly (though I claim to be an AMD enthusiast) I thought that AMD now at 10-15% max. Apparently, thanks to the media hype around high-end toys, low-end gets neglected. And low-end is a place were AMD is very strong. That's only way I can explain the 25% user share of AMD.... Re: (Score:2) Market share statistics are usually based on quarterly sales. So when AMD is up or down, between 15 and 25%, that's sales for that quarter. Nobody knows how many millions or billions of AMD and Intel CPUs are out there, still functioning. I would put more merit on the Steam Survey [steampowered.com] than this. Steam says 1 in 3 CPUs are AMD. That's a subset of the general populace - people that use steam and play games - but it reflects that heavy push a few years back as the Gamer's CPU of choice. Windows 7 market share. 
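The "+/- 2% error" claim in the thread above can be sanity-checked with the standard margin-of-error formula for a proportion, z * sqrt(p(1-p)/n). At roughly 95% confidence (z ≈ 1.96) and worst case p = 0.5, a truly random sample of 10,000 gives about ±1%, so ±2% is if anything conservative — and, as the commenters note, the formula contains no term for the total population size. A quick illustration, in Java for consistency with the other examples on this page:

```java
public class MarginOfError {
    // Margin of error for an estimated proportion p from a simple random
    // sample of size n, at the confidence level given by z (1.96 ~ 95%).
    static double marginOfError(double p, int n, double z) {
        return z * Math.sqrt(p * (1 - p) / n);
    }

    public static void main(String[] args) {
        // Worst case p = 0.5 maximizes the margin; note that the sample
        // size, not the population size, drives the result.
        System.out.printf("n=10000: +/- %.2f%%%n",
                100 * marginOfError(0.5, 10_000, 1.96));
        // Prints: n=10000: +/- 0.98%
    }
}
```

None of this rescues a self-selected sample, of course — the formula assumes random sampling, which is exactly the assumption the thread disputes.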
(Score:2) Inside the firewall (Score:4, Insightful) Firewalls are pretty good to avoid things from outside getting in by themselves. But once you put an agent there that opens the door to things from outside (probably the most used vector right now) it turns the firewall meaningless. And if well you can put things that do a virus scan of what is coming, its not easy to detect 0-day attacks, or targetted trojans, or the js/activex/dynamic html/whatever attack of the day. Re:Inside the firewall (Score:4, Insightful) > Forcing the use of IE in particular for that application, as in corporate networks, is like mandating to use belt only in very slow cars. Many companies use Internet Explorer because they want the tight integration with other Microsoft products (like Sharepoint or Office). It takes very little effort to setup a Sharepoint intranet, where people can post Excel documents and generate KPIs and dashboards and whatever the business needs to move forward. Other big software companies also have that kind of stuff, like IBM Lotus Notes or Novell Groupwise. But just like Microsoft, it's a lock-in process. Setting up that kind of environment with other software, like those open-source PHP CMS, will require a lot of work, and quite possibly, more skilled staff and a training program for users. Also many companies use Internet Explorer because it's already built-in and it would cost more to support more than one browser. Internet Explorer is already paid for, and usually people can get things done with it, so it's a hard sale to bring in Firefox or another browser. > Firewalls are pretty good to avoid things from outside getting in by themselves. But once you put an agent there that opens the door to things from outside (probably the most used vector right now) it turns the firewall meaningless. Any decent firewall can have rules for both inbound and outbound traffic. 
Also, decent firewalls usually have DPI or other smart technologies that can really raise flags when something goes wrong. > [...] forcing to have an agent there that is vulnerable even to bad breath (usually the enforced version is an old one, wont be very surprised if a good percent of those inside the firewall browsers are IE6) sounds almost criminal. In my experience, companies that have a policy about the allowed version of Internet Explorer usually have a very efficient change management strategy and suffer very little downtime because of software problems. I have a Fortune 500 client that started rolling out Windows XP last year only; they have no downtime at all and the business is doing great. It's not cool or edgy to work in such an environment (even Flash is not supported), but they are not in the cool or edgy business. The Net Applications Stats For August (Score:2) The Net Applications stats for August: XP 71.8% Vista 18.8% OSX 10.5 3.5% Win 7 1.2% OSX 10.4 1% Linux 0.9% W2K 0.9% Operating System Market Share [hitslink.com] These global stats are built from about 160 million hits per month to its clients' websites: Additional estimates about the website population: 76% participate in pay per click programs to drive traffic to their sites. 43% are commerce sites 18% are corporate sites 10% are content sites 29% classify themselves as other (includes gov, org, search engine marketers etc..) About [hitslink.com] OpenOffice making inroads (Score:2) AVG the top a/v (Score:3, Insightful) AVG leading McAfee and Norton by a significant margine. Some other "Unknown" a/v has 35% but is not avast. These are not corporate computers. If you click on the Get More Charts link you can see the entire array. Another I found interesting was Home Premium lead among Vista uses. Again, not corporate. RAM has 2-3GB leading so I'd think these are mostly 32-bit systems. It would be nice if that were a metric. Inside the (Corp.) Firewall no one can ... 
(Score:5, Interesting) Re:Inside the (Corp.) Firewall no one can ... (Score:4, Interesting) Sometimes when on break, I boot a live Ubuntu distro. It runs in memory. I set the networking in Firefox to use the default proxy, load flashplayer from Adobe, and enjoy the break with tabs and no worries. Some who think they are stuck with IE simply don't know they have an option. IE 6 at work badly scrambles Slashdot pages with text running over text. I use Firefox to check my user page and see replies. The page is unusable in the corporate IE 6 default browser. Re: (Score:2, Interesting) or you could get your organisation to move into this century. ie6 is ancient even in the MS world. incidently most could not do what you suggest as it would breach their corporate security policy, hell if you tried that where I work you would be sacked, not to mention you would not actually even be able to conect to the proxies anyway as they are all authenticated (as are our network ports). Re: (Score:2) What about browsing from a VMware image running inside the corporate desktop template? I've been doing that for years. Re:Inside the (Corp.) Firewall no one can ... (Score:5, Insightful) IE6 supports many extensions and has many bugs (at least things that work different from standards but consistent in IE6), that were at the time nice for programmers and were used really a lot. Thus many internal software used in companies needs IE6 as front-end. Many of those behaviour issues have changed in later releases, also many of the hooks have been removed for being unsafe amongst other reasons. Thus your nice corporate app doesn't work anymore in later IE releases. Then you can go "oh but then just fix it!" - why fix it when it isn't broke? IE6 still works, right? It will likely require a large if not total re-write of the app, which is very very expensive and needs to go through all the testing that the current app has gone through over the years. 
It is also very well possible that the original developers do not work at the company any more (IE6 is old enough for that), so even more likely that development has to start from scratch, including getting a list of current features of your corporate app. Blame the people originally choosing to lock themselves into IE, but you can not blame the current people from insisting to continue using IE. What they should do of course is install IE7/8 or FF or Opera or any other modern browser as well, and restrict IE6 for those internal sites only. Still using IE6 on the open Internet is too dangerous, and more and more web sites will not support its quirks any more either. Re: (Score:3, Insightful) Re:Inside the (Corp.) Firewall no one can ... (Score:4, Insightful) booting a separate os during a break to read a web page is hardly a productive use of one's personal time. Your company restricts flash drives but allows you to boot from an optical drive? Re:Inside the (Corp.) Firewall no one can ... (Score:4, Insightful) You mentioned Ubuntu. Even though you're using it in a completely counterproductive manner, it will definitely get modded +5 insightful. Re:Inside the (Corp.) Firewall no one can ... (Score:5, Insightful) I'm guessing that's some sort of internal agreement with your IT department or outsourced IT provider, because speaking as someone with an Enterprise license agreement and a support contract with software support through IBM, there's nothing binding me or my users to a particular technology stack. That said, we end up sticking with IE because of obsolete external applications. ADI Time's web site is an excellent example of a site that only runs in IE using Compatibility Mode, and since we use that for leave management at the insistence of HR and a long-running contract, we have to keep IE around. Personally, I'd like to move to Firefox or Chrome; not being subject to our normal software installation rules, I actually use Chrome as my primary browser. 
We don't install it and offer it as a choice for users because our users are, sadly, not the sort of people who'd deal well with being told, "Well, if it doesn't work in one browser, try the other one." Some of them don't grok that there is such a thing as a "web browser" and need to be told to open "the Internet" (i.e., Internet Explorer) instead. I think the major limiters to F/OSS adoption in the corporation are a relative lack of mature management tools, the increased support costs due to user ineptitude and limited support resources, and general corporate inertia. Specific contracts that bind a company to a particular technology stack probably fall into the category of "more common than you'd expect, but less common than you'd think," and it's certainly not a Microsoft-specific thing -- and shame on the CIO who signs such a contract, or buys a piece of software that isn't cross-platform compatible. Not to start a language holy war or anything, but we use IIS and the .NET Framework by choice, not because we're forced to do so. :) Re: (Score:3, Interesting) Really? You might want to tell that to the military... I can't get Firefox installed much less supported on a military computer. Re: (Score:3, Informative) Which federal govt? Not the US. Re: (Score:3, Funny) Re:Inside the (Corp.) Firewall no one can ... (Score:4, Informative) I work at NASA. We got a message from the higher-ups that we are not to use IE unless absolutely necessary. It may be agency-specific for us, but all our payroll apps and stuff work in Firefox. Re: (Score:2) What percentage of total computer users is represented by 10,000 computers, each with glorified spyware installed on it? I manage many computers both in offices and homes and none of them have this monitoring software installed. That could add up to 10% of this survey, potentially completely changing the end results. Bill: "Should we tell 'em the software is called Microsoft Update?" Steve: "Hell no! Are you nuts?!?" 
Re: (Score:2, Insightful) Or in two words: sampling bias. Re: (Score:2) If you're worried about the small sample size, perhaps you can contact the exo performance network and see what data their software transmits and whether or not it would be useful to install on your 1000 client machines. If there's no personally-identifiable information sent back, and if the software is not responsible for crashing clients or disrupting work, it would definitely help the accuracy of the statistics. Re: (Score:2) Pay attention, they aren't tracking sales data. They're tracking actual usage data as reported by their actual reporting tool, installed on actual computers by actual users. The sample size is only just over 10k machines, but even so, it's actual usage data, not sales data. Biasm small sample... (Score:2) Also sounds like spyware to me. MS gets similar data through its monitoring in Office & other applications. Re: (Score:2) Re: (Score:3, Insightful) This is a fav argument as always - the problem is that when you look at OS share collected by online data aggregators like NetApps it seems someone is actually connecting these mythical warehoused copies of Vista to the internet. Personally, I think it's amazing Microsoft found a way to make unsold boxed DVDs of Vista to the internet. They might struggle to make Aero run on older hardware, but they're brilliant at wireless networking and power management. By the way, 30% of roughly a billion PCs in the world. Re: (Score:2, Insightful) You clearly don't understand statistics. 10,000 samples is a very large study. If there's a problem with the data (and there probably is), it is because of selection bias. Re: (Score:2) Re: (Score:2) AFAICT, on low-end PCs highly likely RAM is shared with video RAM. Depending on how much is given to video, RAM size might vary. Thus the silly size ranges. Re: (Score:2) Re: (Score:2) Sure. I use all three, hence the overlap. 
I use Firefox most of the time, MSIE when I want to load faster, do pics, or use the built in RSS feed. Firefox is slower to load (by far. I've benchmarked it with a stop watch.) and much slower to load pics, but it renders more pages better more often and rarely hangs up. Try patching your system (Score:2) [technet.com] Re:Try patching your system (Score:5, Funny) I believe this is the relevant patch: #!/usr/bin/python #When SMB2.0 recieve a "&" char in the "Process Id High" SMB header field #it dies with a PAGE_FAULT_IN_NONPAGED_AREA error from socket import socket from time import sleep host = "127.0.0.73", 445 :) normal value should be "\x00\x00"() Re: (Score:2) Re: (Score:3) In fairness to the GP, World of Warcraft doesn't require "installation" to run. You can copy it to a USB stick of sufficient size and run it from there on any computer without having to install it; I often do this when I'm on the road and want to get a bit of virtual adventure in. WoW's config is stored in its directory, not the registry. The network copying issue, though, is indicative of a problem that isn't OS-related, I'd argue. :) Re: (Score:3) Windows's built-in filesharing is fine; depends how you use it to make it a lousy or good tool. Seriously, something as simple as copying files from computer A to computer B (or from folder A to folder B) is slow as fuck because "you're doing it wrong?" I've experienced ridiculously slow local file copies and ZIP file extractions on my up-to-date, always-install-the-latest patch Windows Vista Home installation. The same machine running a current Ubuntu doesn't exhibit any such problems, but I'm not some Linux guru that knows how to tweak out my system. Maybe I'm just unlucky, but during the time I've Re: (Score:2) Re: (Score:2) The whole point of statistics is make estimations that take less effort than counting out what you are trying to examine. Most studies could only dream of having a sample size of 10K. 
If there is a problem with the sample, it is the fact it was not randomly selected. Re:Emperical evidence of bundling (Score:5, Insightful) And here we have emperical evidence that Microsoft's bundling of IE does hurt the competition. OpenOffice can get a foothold on Windows becuase its competitor costs money, but Firefox can't because its competitor is free, and is built into every copy of Windows. Firefox has far more penetration into the Windows market than OpenOffice so what you said makes absolutely no sense.
https://it.slashdot.org/story/09/09/09/2223230/The-Real-World-State-of-Windows-Use
This article describes how to write multithreaded programs, also known as concurrent programs, using the CSP.NET library. CSP.NET is a .NET 2.0 library that provides a more intuitive and secure alternative to the standard thread programming that is often used in concurrent programs. The library supports both distributed and concurrent programming, but the focus of this article is solely on concurrent programming. CSP.NET and its documentation are available for free.

Background

Many programmers know about concurrent programming but do not use it, relying primarily on sequential programming—only one thread of execution. So far, this has been an acceptable choice because of the constant increase in processor performance, but unfortunately those happy days are over. All major manufacturers of microprocessors are aiming towards multiple cores on a chip and multiple chips in a machine, but the performance of a single core will probably not increase significantly. Microprocessors containing four cores will most likely be on the market this year, and Intel has announced that 32 cores will be available in 2010. These multi-chip, multi-core machines will have a major impact on existing and future software. Concurrent programming is not an option anymore; it's a way of life, because concurrent programs are necessary in order to exploit the full power of these machines.

Traditional multithreaded programming

The vast majority of programmers use threads and various locks and synchronization mechanisms to achieve concurrency. This is a low-level approach that is not capable of expressing complex concurrent programs in an intuitive and simple manner. Threads are nondeterministic in nature, and the lack of simplicity makes deadlocks and livelocks hard to detect.
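To make that nondeterminism concrete, here is a minimal sketch. It is written in Go rather than the article's C# simply because it is easy to run; the point is language-independent. Two threads each append one letter to a shared string, and the scheduler, not the input, decides the final order.

```go
package main

import (
	"fmt"
	"sync"
)

// interleave starts two goroutines that each append one letter to a
// shared string. The scheduler decides which runs first, so the same
// program can return "AB" on one run and "BA" on the next: the input
// alone does not determine the behaviour.
func interleave() string {
	var wg sync.WaitGroup
	var mu sync.Mutex
	out := ""
	for _, s := range []string{"A", "B"} {
		wg.Add(1)
		go func(letter string) {
			defer wg.Done()
			mu.Lock()
			out += letter
			mu.Unlock()
		}(s)
	}
	wg.Wait()
	return out
}

func main() {
	fmt.Println(interleave())
}
```

Note that even with the mutex protecting the shared string (so no data is lost), the ordering is still nondeterministic; this is exactly what makes such programs harder to debug than sequential ones.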
These are major drawbacks, because concurrent programming is much harder than sequential programming, and the main reasons for this added complexity are nondeterminism, inconsistent data, deadlocks, and livelocks, which are absent in sequential programs. For those new to concurrent programming: a program is nondeterministic if it doesn't exhibit unique behavior, that is, if the same input does not always result in the same behavior. A nondeterministic program is, of course, a lot harder to debug than a deterministic one. A program is said to deadlock if no part of the program is capable of making any progress; for example, if part A is waiting for a response from part B, which is waiting for a response from part C, which again is waiting for a response from part A. And finally, a livelock occurs if there is infinite internal communication between various parts of the program; hence a livelock can be seen as an infinite loop. Due to these drawbacks, the thread paradigm isn't suited for large-scale concurrent programming. The next section introduces CSP.NET, built upon the Communicating Sequential Processes paradigm, which tries to take some of the earlier mentioned difficulties into account.

Introducing CSP.NET

This section describes how to use CSP.NET as an alternative to the thread programming described in the previous sections. I'll start out by introducing the basic concepts in CSP.NET and then move on to show an implementation of the Producer/Consumer problem using CSP.NET. A lot of CSP.NET features are left out in this section, but they are described in the CSP.NET documentation.

What is CSP.NET?

CSP.NET is a Microsoft .NET 2.0 library designed to ease distributed and concurrent programming in any language supported by the .NET platform—C#, VB, and C++. CSP.NET is a new library, but it's built upon an old paradigm named Communicating Sequential Processes (CSP), first introduced in 1978.
CSP is an algebra for describing and reasoning about distributed and concurrent programs, but you don't need to know CSP to use CSP.NET. A CSP.NET program essentially consists of multiple processes communicating through channels.

Processes

A process in CSP.NET is simply a class implementing the ICSProcess interface, shown below.

    public interface ICSProcess
    {
        void Run();
    }

A very simple process that writes all even numbers less than 100 is shown below.

    public class Even : ICSProcess
    {
        public void Run()
        {
            for (int i = 2; i < 100; i += 2)
                Console.WriteLine(i);
        }
    }

Once the processes have been created, they may be executed by using the Parallel class. The Parallel class runs each process as an operating system thread, meaning that they may be executed in parallel on a multiprocessor machine and concurrently on a single-processor machine. The code below creates an instance of the Parallel class with two processes, Even and Odd, and executes the processes by calling the Run method.

    Parallel par = new Parallel(new ICSProcess[] { new Even(), new Odd() });
    par.Run();

The Run method of the Parallel class returns when all the processes have finished running.

Channels

The last section illustrated how to create and run multiple processes concurrently, but processes often need to communicate, and the way to do that in CSP.NET is through channels. Two types of channels are provided, anonymous channels and named channels, but this article only focuses on anonymous channels. Four kinds of anonymous channels are provided by CSP.NET—One2OneChannel, One2AnyChannel, Any2OneChannel, and Any2AnyChannel—and all of them include a Write method and a Read method for writing to and reading from the channel. Another important feature common to all channels is that they are used by at most two processes at any given time, although multiple processes may be connected to the channel.
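As an aside, the process-and-Parallel pattern just described is not unique to CSP.NET: Go's goroutines descend from the same CSP paradigm, so the Even/Odd pair run under Parallel can be sketched as follows. This is an analogue, not CSP.NET code; `sync.WaitGroup` plays the role of Parallel.Run returning only when every process has finished.

```go
package main

import (
	"fmt"
	"sync"
)

// evens collects all even numbers below n -- the work done by the
// article's Even process (collected into a slice instead of printed
// one by one, so the result is easy to inspect).
func evens(n int) []int {
	var out []int
	for i := 2; i < n; i += 2 {
		out = append(out, i)
	}
	return out
}

// odds is the counterpart Odd process.
func odds(n int) []int {
	var out []int
	for i := 1; i < n; i += 2 {
		out = append(out, i)
	}
	return out
}

func main() {
	// Analogue of the Parallel class: start every "process" in its
	// own goroutine and block until all of them have finished.
	procs := []func(){
		func() { fmt.Println(evens(100)) },
		func() { fmt.Println(odds(100)) },
	}
	var wg sync.WaitGroup
	for _, p := range procs {
		wg.Add(1)
		go func(run func()) {
			defer wg.Done()
			run()
		}(p)
	}
	wg.Wait() // like Parallel.Run returning
}
```

The design point is the same in both libraries: the caller hands a set of independent processes to a combinator and gets back a single blocking "run everything" operation.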
Finally, all CSP.NET channels are so-called rendezvous channels; in other words, the Write method blocks until another process reads the data from the channel, and the Read method blocks until data is available from the channel. If preferred, channels can be buffered, making the Write method non-blocking. The One2OneChannel is shared by two processes, a writer and a reader. The One2AnyChannel has one writer and multiple readers: the writer writes data to the channel and one of the readers reads the data. Notice that all the readers may try to read data from the channel, but one and only one of them will get the data, and we don't know which one; all we know is that one of them has read it when the Write method returns. The Any2OneChannel is shared by multiple writers and one reader, and again only one of the writers can use the channel at any given time. The Any2AnyChannel has multiple writers and readers, and only one writer and one reader use the channel at any given time. Anonymous channels are created just like any other object.

    One2OneChannel<string> chan1 = new One2OneChannel<string>();
    One2AnyChannel<Anytype> chan2 = new One2AnyChannel<Anytype>();

Channels are generic, thus any data type may be sent through a channel.

A Simple Example

This section shows an implementation of a very simple problem—the Producer/Consumer problem. The producer produces some kind of items, in our case Integers, and the consumer consumes these items. Before you rush into the code, you should get CSP.NET up and running. CSP.NET can be downloaded here, and for now you only have to download the CSP.NET Library; the Name Server Service and the WorkerPool Service are used in distributed applications. Once CSP.NET is downloaded, install it and add a reference from your project to the CSP.dll. That's it; you are ready to code. All you need is a Consumer process and a Producer process.
The Consumer process is shown below.

    using System;
    using System.Collections.Generic;
    using System.Text;
    using Csp; // use the Csp namespace

    namespace MultiprogrammingCSP
    {
        // Implement the ICSProcess interface.
        class Consumer : ICSProcess
        {
            // Channel to receive Integers.
            private IChannelIn<int> inChannel;

            public Consumer(IChannelIn<int> c)
            {
                inChannel = c;
            }

            public void Run()
            {
                // Read 10 Integers from the channel and
                // write them to the Console.
                for (int i = 0; i < 10; i++)
                    Console.WriteLine("\t\t\t Consumed: " + inChannel.Read());
            }
        }

The implementation of the Consumer process is very simple. Notice that inChannel is declared as an IChannelIn channel. All channels implement the IChannelIn interface; hence, the Consumer process will accept any kind of channel. Furthermore, IChannelIn specifies that the process reads from the channel. The Run method simply reads Integers from the channel by calling the Read method. Remember that the Read method will block until data is available. Next, you need a producer to feed the consumer with Integers.

        class Producer : ICSProcess
        {
            private IChannelOut<int> outChannel;

            public Producer(IChannelOut<int> c)
            {
                outChannel = c;
            }

            public void Run()
            {
                for (int i = 0; i < 10; i++)
                {
                    Console.WriteLine("Produced: " + i);
                    outChannel.Write(i);
                }
            }
        }

The Producer process is similar to the Consumer, but you use an IChannelOut channel instead of an IChannelIn channel because you need to write the produced items to the channel. The processes are implemented, and all you need is to get them running.

        public class ProducerConsumer
        {
            public static void Main()
            {
                CspManager.InitStandAlone();
                One2OneChannel<int> chan = new One2OneChannel<int>();
                new Parallel(new ICSProcess[] {
                    new Producer(chan),
                    new Consumer(chan) }).Run();
            }
        }
    }

The Main program calls the InitStandAlone method, which is standard procedure in every CSP.NET program; always call InitStandAlone as the first thing in your CSP.NET programs.
Actually, InitStandAlone isn't the only alternative; but, as long as your program is running in one application domain, InitStandAlone should be used. A One2OneChannel channel is created and passed to the constructors of the producer and consumer. Finally, you create an instance of the Parallel class and run the producer and consumer processes concurrently. Parallel.Run returns when both processes have finished. That is all there is to it—very easy. No explicit use of threads, no need to lock shared data. It's very simple and intuitive. One may argue that the producer shouldn't wait for the consumer to get an item before moving on to produce a new one. Fortunately, CSP.NET channels can be defined with buffers, thus a solution to the problem is very easy: just use a buffered channel. Replace

    One2OneChannel<int> chan = new One2OneChannel<int>();

with

    One2OneChannel<int> chan = new One2OneChannel<int>(new FifoBuffer<int>(10));

in the Main method. You can define your own buffer or use one of the predefined buffers—for example, the FifoBuffer—which is a standard first-in, first-out buffer. There may be more than one consumer, but again this is very easy to solve. Use a One2AnyChannel and add the consumer processes to the Parallel instance, as in this example:

    One2AnyChannel<int> chan = new One2AnyChannel<int>();
    new Parallel(new ICSProcess[] {
        new Producer(chan),
        new Consumer(chan),
        new Consumer(chan) }).Run();

I'll leave it up to you to figure out how to handle multiple producers and consumers, but again it's very easy.

Conclusion

This article has given a very brief introduction to CSP.NET, and a lot of features haven't even been mentioned. Those include barriers and buckets, used to synchronize processes, and named channels, primarily used for communication between processes residing on different machines. CSP.NET also provides a DistParallel class, similar to the Parallel class, capable of running processes on remote machines.
All CSP.NET features are briefly described in the documentation. Although the program presented was very simple, it should illustrate that the CSP paradigm is simpler and more intuitive than standard thread programming.
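To close with a runnable sketch: the producer/consumer program above translates almost line for line into Go, whose channels follow the same CSP rendezvous and buffering rules the article describes. This is an analogue, not CSP.NET itself; the buffer size 10 mirrors the FifoBuffer example, and the two consumers sharing one channel mirror the One2AnyChannel variant (each item is delivered to exactly one of them). One design difference worth noting: instead of the consumer reading a fixed count of 10 items, the Go idiom is for the producer to close the channel and for consumers to read until it is closed.

```go
package main

import (
	"fmt"
	"sync"
)

// produce writes n items into the channel and then closes it to
// signal that no more items are coming.
func produce(ch chan<- int, n int) {
	for i := 0; i < n; i++ {
		ch <- i // blocks only once the buffer is full (rendezvous if unbuffered)
	}
	close(ch)
}

// consume reads items until the channel is closed. With several
// consumers on one channel, each item goes to exactly one of them,
// and we don't know which -- the One2AnyChannel behaviour.
func consume(id int, ch <-chan int, wg *sync.WaitGroup) {
	defer wg.Done()
	for item := range ch {
		fmt.Printf("consumer %d consumed: %d\n", id, item)
	}
}

func main() {
	// Buffered channel of size 10, like the FifoBuffer example:
	// the producer can run ahead of the consumers.
	ch := make(chan int, 10)

	var wg sync.WaitGroup
	for id := 1; id <= 2; id++ {
		wg.Add(1)
		go consume(id, ch, &wg)
	}

	produce(ch, 10)
	wg.Wait() // like Parallel.Run: return when every process is done
}
```

As in the CSP.NET version, there are no explicit locks around the data being exchanged; the channel itself is the synchronization point.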
http://mobile.codeguru.com/csharp/.net/net_general/threads/article.php/c12533/Multithreaded-Programming-Using-CSPNET.htm
FINANCIAL FREEDOM THROUGH FOREX
How to Harness the World’s Money Mountain and Set Yourself Financially Free
Greg Secker & Chris Weaver
Knowledge to Action Publishing

Preface

There are more individual, private Foreign Exchange traders now than there have ever been, and the number is growing daily. There are three main reasons for this. The first is the advancement in technology and, in particular, the Internet. The expansion and popularity of the Internet has made it possible for everyday people to have access to the world’s largest and fastest growing marketplace (the Foreign Exchange market) through online brokerages and charting software companies. In the past, individuals were unlikely to know anything about the Forex market because it was really an exclusive club, only for large banks trading with each other. Nowadays, anyone can trade. The second reason is awareness: people are talking about it and understanding the profound effects that successful money management and Forex trading can have on their lives, and they have now decided to manage their own money and keep 100% of the return (without paying management or performance fees).

At the end of the day, it’s your money and you’ve worked hard for it. After you’ve lived a little and made some money, the smart move is to find somewhere sensible to grow it. There will always be people out there who will happily take your money and do nothing more with it than generate fees, commissions, and income for themselves, leaving you with just a husk of the potential wealth that money could have generated. That’s why you are reading this book.

As a young man fresh out of the university, I found the “city” to be a fascinating place—but, after a time, you just have to wonder who is paying for all the flashy cars, expensive lunches, and exorbitant offsite junkets among investment bankers and asset managers. The answer? You! (Who else?) We have all watched city bankers (clad in fitted suits and Hermes ties) promise us much—casting their investment outlooks while talking in tongues—but actually deliver little, if anything of value. I remember a certain money manager telling me in a pub in the city one late Friday night, “It’s all about this year’s bonus. I have worked for years; after that I’m going to sack this off and do something I really enjoy.” Recently, we witnessed Goldman Sachs, arguably one of the largest and most successful investment banks, testifying on Capitol Hill for betting against the very products they induced their clients to buy in the latest unravelling of the credit crunch scandal. How many people did well in the dotcom boom and bust, the oil crisis, and the credit crunch? Did you? No! It amazes me how quickly we forget the lessons from the past and then, like sheep, we are herded by the next Hermes tie-clad shepherd into his pen, where his well-educated but poorly skilled traders erode our savings with irresponsible money management, poor trading strategy, and just a basic lack of care and responsibility.

It was an American psychologist, Robert Cialdini, in the late 1980s who first identified this phenomenon in his groundbreaking book Influence: The Psychology of Persuasion. In his book, Cialdini talks about Captainitis, and tells us how people need “captains” because they have such little confidence in their own ability. He cites a specific example: On the bitterly cold afternoon of January 13, 1982, Air Florida Flight 90 sat on the tarmac of National Airport in Washington, D.C. Following a series of delays, the plane was finally cleared to take off. As the captain and the first officer were completing their last round of pre-flight checks, the following exchange took place regarding one of the systems:

First officer: God, look at that thing. That don’t seem right, does it? Uh, that’s not right.
Captain: Yes it is, there’s eighty.
First officer: Naw, I don’t think that’s right. Ah, maybe it is.

Shortly after this conversation transpired, the plane took off. Less than one minute later, Flight 90 crashed into the icy waters of the Potomac River. This tragedy is an example of a troubling and all-too-pervasive problem in aviation that officials in the airline industry have referred to as “Captainitis.” This occurs when crew members fail to correct an obvious error made by the plane’s captain, resulting in a crash. In this case—and many others like it—the copilot made the calamitous decision to defer to the captain’s authority. This is a clear example of the power of the principle of authority: we tend to defer to the counsel of authority figures and experts to help us decide how to behave, especially when we are feeling ambivalent about a decision or when we are in an ambiguous situation. Experts also have a hand in helping us decide what we should think. For example, one study found that when an acknowledged expert’s opinion on an issue was aired just once on national television, public opinion shifted in the direction of the expert’s view by as much as 4 percent.

Do you think the investment management world runs on this basic principle of authority? You bet it does. If you are in the Investment Management business, there is an unending supply of people who are willing to hand over their money to you on the promise that you will do something more useful with it.

Making money in the financial markets requires a personal epiphany. Simply learning some trading tools and skills is not enough. You have to realise that the cultural hypnosis that has been keeping people poor for hundreds of thousands of years works as strongly today as it ever did. The old adage, “Scared Money never makes money,” speaks to the importance of mastering your mind and creating confidence within yourself. This is important. The good news is that you can harness tools and strategies to rapidly affect your psychological and internal beliefs—and this will change your entire world.

Our mission at Knowledge To Action is to equip this next generation of up-and-coming, private Forex traders with the skills and confidence required to trade successfully. We have mentored a lot of traders at Knowledge To Action, and we know exactly what is required for trading success. We understand that 95% of what you read about finance and economics is either useless or incorrect, so we are going to scrap all of the irrelevant information and focus on the 5% that is potent and effective. To achieve this concentrated approach, we have divided this book into three separate sections, including the sequence in which the information should be delivered. The book should, therefore, be read progressively—especially if you are a complete beginner. Jumping straight to the back of the book and reading only the strategies would minimise the impact that the strategies have on your personal trading account. We must find a trading strategy that suits our personality, thereby bypassing the automatic invoking of our hard-wired mechanisms. Once the psychology exercises are in full flow, we are ready to start producing. To get the most out of this book, it is vital that you get your hands dirty. Read through it thoroughly and methodically and make sure to complete all the exercises. And, most of all, have fun and enjoy the journey.

Contents

Preface
Part 1: Environment—The Right Condition for Trading
Chapter 1: THE ESSENTIALS
  Who You Really Are and How You Really Feel about Making Money
  I Want to Learn How to “Trade” the FX Market!
Chapter 2: THE BASICS & THE BOUNCE
  Two Schools of Thought…
  Basic Terminology
  Cyclicity
  The “Bounce”
  Trading the Small Bars (and Why Amateurs Don’t)
Chapter 3: RISK MANAGEMENT
  The Most Important Facts about Trading
  Leverage
  The Money You Lose Is Harder to Get Back—FACT!
  1% Maximum Account Risk per Trade
  Trading Sizing
  Reward to Risk Ratio
  A Quick Recap of the Two Significant Ratios
Chapter 4: THE PROFITABLE PROCESS
  Trading within the Triangle – Creating Real Income and Sustainable Growth
  Flexible or Chaotic?
Part 2: Strategy—The Ultimate Implementation
Chapter 5: THE ANGRY BEAR
  Exploring the Strategy
  Who Are You Trading?
  When Are You Trading?
  What Are You Looking For?
  Where Do You Place Your Entry and Exit Orders?
  Why Does This Strategy Work?
  Strategy Summary
Chapter 6: FOREX ECLIPSE
  Exploring the Strategy
  Who Are You Trading?
  When Are You Trading?
  What Are You Looking For?
  Where Do You Enter and Exit?
  Why Does This Strategy Work?
  Strategy Summary
Part 3: Purpose—Moving Forward Effectively
Chapter 7: Let’s Talk about Money…
  A Life of Cost, A Life of BENEFIT
  The Trader with the Most Pips…
  The Powerful Force of Compounding
  Bring this Book to Life

Part 1: Environment—The Right Condition for Trading

The first part of this book is about making sure that your personal environment is right to receive and interpret the strategies correctly. Most people believe that successful trading entails using the “best” strategies only—this is not the case. Successful strategies are certainly a big piece of the trading puzzle, but they are not enough on their own. One way to think about a strategy is as if it were a seed. A strong seed will not germinate without the correct growing environment. Thus, Part 1 deals with the environment.

Trading books generally do a very good job of overcomplicating very simple ideas—not this one! Instead, this book focuses on practical application, with a goal of equipping you with the tools (both mechanical and psychological) to generate life-changing amounts of money through profitable trading. You have certainly made the right choice!
We have privately mentored thousands of promising new traders all over the world—from the Americas, Europe, the Middle East, Australia, New Zealand, and Asia. At Knowledge To Action, we are committed to the success of our traders-in-training, despite the fact we haven’t met you in person yet. Our family is huge and we are growing, so, welcome! Please go to our Web site, www.knowledgetoaction.com, to watch some amazing testimonials from our real-life students. These will help to build belief in you in terms of what is possible. Belief is important. Without belief comes little action and, ultimately, little result. Though you can’t replace “real-life” proof as the ultimate reference point upon which to build your beliefs, taking a look at our graduates’ success stories will certainly get you on the right path.

“…”—Albert Einstein

Einstein’s approach perfectly describes how this book will be presented. Most people don’t read much past the first chapter of any book. This requires your commitment to take immediate and massive action and lock all the doors behind you. And given the fact that we can’t be with you in person, I am going to gently nudge you throughout.

How did all this happen? Before Knowledge To Action came to be what it is today, I was trading from home and inviting friends over to watch me trade and join in. Before long, I had amassed a cult following of some nineteen traders, all camped out at my mews house in Kensington, London, where we would all actively trade the FX markets day in, day out. It wasn’t until I was invited to a late-night gathering of keen, recent trading seminar attendees that I learned something very interesting about human psychology and belief. As I got to the party, everyone was excited to finally meet a trader: money being made. After a series of questions, I quickly discovered that they… What was going on here? Why had thirty-eight people, who all appeared to be intelligent and well educated, spent over $5…

So I asked the following question:
Financial Freedom through FOREX
Greg Secker & Chris Weaver

Who You Really Are and How You Really Feel about Making Money

It is critical to establish and discover what trading means to you personally. You must have a strong "why" or "reason" to trade. This reason why is your psychological whip to drive you to pursue trading with the enthusiasm and commitment that is required. A lot of people like to dabble in this market, but if you are willing to be committed, determined, and invested, you can begin a new life of financial freedom and security. This concept has everything to do with your personal moneymaking ability and the understanding that trading is simply a vehicle: the most efficient vehicle to get you from where you are to where you want to be, fast. I will be demonstrating the power and significance of a motivated and dedicated driver in great detail later in the chapter.

So, what is your reason for trading the FX market? Please write it in the box below and refer to it each time you pick up this book. Be as specific as you can.

______________________________
______________________________
______________________________

You must also have an initial target in terms of monthly income. In the box below, please write the amount of money that you would like to draw from your trading account each month. You can think of this as your trading salary.

£

Now, on a scale of 1–11 (11 being the most serious, 1 being the least serious), rate your personal level of commitment to your trading experience by circling one of the numbers below. How committed are you?

1 2 3 4 5 6 7 8 9 10 11

We all know that saying we are serious about something is one thing, but being serious is completely different. Most people are desperate to believe that they are serious about increasing their wealth or dramatically changing their financial circumstances. The reality is that most people are NOT. How do I know this? Because most people's financial circumstances never experience a dramatic change! The majority of the earning population are happy with their average salaries. That is why it is average. In the next few minutes you will know exactly how serious you are – no more pretending.

Imagine yourself being financially independent. No longer do you have to live your life according to cost, but rather according to benefit. You don't have to ask anyone if, and when, you can take a holiday. Imagine the freedom that you have. Not only that, you are able to contribute to good causes, events, charities and programmes that you believe in. Now you must recognise that all of this starts with you hitting the initial monthly target above.

For now, let's get back to the "seriousness test." In the blank box below, write down how much money you are willing to invest into yourself to hit the target that you set yourself. Think of this as a "one off" investment.

£

At this point you should have:

1. A monthly income target
2. An indication of how serious you are about hitting your monthly target (1–11)
3. The amount you are willing to invest in order to hit your target and begin the journey to a greater level of financial freedom

Great, let's keep moving… Most people generally want large returns with little investment. Everyone wants more money, but no one wants to invest the time and money into making it happen. Most people are trying to find a quick fix. You can't have the product without the process.

Let me show you the results of a student I have done some work with:

Monthly target: £6,500
Seriousness: 9 out of 11
Amount of investment: £500

Before we go any further, I have a couple of questions for you:

1) Would you consider this guy realistic? ________________
2) Would you consider this guy to be serious about hitting his target? ________________

Now look at your answers to the exercise above. There are no "correct" answers. This is all about discovery. With that in mind, let's break down this example:

Monthly target: £6,500. This is a very reasonable initial monthly target to achieve via successful FX trading. The £6,500 is the joint salary of this man and his partner. Therefore, if he were able to achieve this amount, they could both quit their jobs without having to make any financial sacrifices.

Seriousness: On a scale of 1–11 he was a 9. I consider any response that is an 8 or above to be very serious.

Investment: £500. This is where it gets interesting. Look at the difference between what he wants to receive and what he wants to give. He would like to receive £78,000 in the first year alone (£6,500 x 12 months), with an investment of just £500 into himself. That equates to a 15,600% return on capital! That is really asking his £500 to perform! This is not at all uncommon.

How serious do you really believe you are? Ask yourself these questions and be honest:

Question: Am I approaching this venture looking to gain a lot with only a little effort? If so, have I approached other opportunities in the past that way, and what were the results?

______________________________
______________________________
______________________________
Question: Am I looking for a quick fix for my finances or a complete transformation? If I am looking for a complete transformation, what action must I take?

______________________________
______________________________
______________________________

The Good, the Bad, and the Ugly

Let's start with the Bad and the Ugly. The vast majority of people live their entire lives hoping that things will change financially for the better. They are looking for a quick fix, or a "band aid," for their finances when, in fact, they need a complete transformation. That is a devastating mistake. As a result, a lot of people know what to do but never actually get around to doing it! These people mistake knowing what to do with taking action. They will even take small amounts of action to bring about the desired change. Unfortunately, the action they take is normally just information gathering. This is done not to bring about transformation and address core issues, but simply to make them feel better and to convince themselves that they are actually doing something.

Here is a quick example: most people know that if they want to lose weight, they need to do more exercise and eat better food. But the result is not in the information. Their knowledge must become action if they want to lose weight: Knowledge To Action! You can see how strongly we feel about this! This is why the first paragraph of the introduction makes it clear that you must lock all of the doors behind you. You mustn't just read this.

The Good. So now you know "the Bad and the Ugly," let's talk about the Good! The Good is that, even if the exercise above revealed that you are looking for a lot and only wanting to give a little, you can still make it as a successful FX trader. If you believe you are going to make money from this, then go for it! I promise you that it is possible. And you are not on your own; we are here to help you.

To be a successful trader, it is necessary to shift from the traditional and somewhat dull time equals money mindset to the explosive and transformative money creates money mindset. Let me explain. There are basically two ways to generate money. The first is to work for it. An employee will work for a certain number of hours, or until a certain result is achieved, and will then be paid in exchange for what he or she has done or how much time he or she spent doing it. With this approach, the more money you make, the more money that you have. This is the time equals money approach.

The second way to generate money is through the efficient use of money itself. With this approach, the more money you have, the more money you can make: money creates money. This is true on a broader scale (because money creates opportunity), and it is also true specifically within the FX market (because money creates trading opportunities). This is what we call the money creates money approach. This transition of thinking takes you from a place of lots of work for relatively little money to efficient input for a relatively large sum of money. It is the difference between having a salaried approach to your finances and having a money management approach. Therefore, I am not saying you don't have to make an effort; I am saying that trading the FX market is an efficient and effective way to manage and make money.
Let me summarise this small section with a quote from Edgar Bronfman, former CEO of Seagram: "To turn $100 into $110 is work. To turn $100 million into $110 million is inevitable."

Let's Keep Digging…

At Knowledge to Action, we are constantly getting our students to self-analyse and address core issues that are hindering financial progress or success. The reason we do this is because we have seen too many people try to change the fruit of their lives while neglecting the roots of their lives. Let me explain. Most people constantly look at the fruit, or results, of their financial circumstances and try to change them. This is pointless because the fruit is simply a product of the type of tree that it is growing on. This is true because the tree is the source of the fruit. If the inflow or source doesn't change, then the outflow or result will not change. FACT. No matter how hard you try, you can never make an apple grow on a pear tree! If you want different fruit you must have a different tree.

So, what type of financial tree do you have in your life? Is it a tree that allows for constructive self-criticism and transparency, or is it a tree full of excuses and reasons why money never comes to you? Self-transparency is important. Putting the blame for lack of prosperity anywhere other than yourself is a disempowering activity and never results in positive change or growth; it puts you in a box, and you never begin the pursuit of increasing wealth. The point of this section of the book is self-discovery and the exploration of core financial beliefs and issues. You must be able to assess yourself honestly so that you can move on from where you are to where you want to be. If you are unable to assess yourself accurately and honestly, you will not be able to improve the things that need improving. And that is the key: moving on.

Below are important exercises we do with our traders-in-training at Knowledge To Action. Please take the time to go through them thoroughly.

The exercise above shows the difference between what you think about money and what you believe about money.

The exercise above demonstrates where you have been. Maybe you found that you have done or achieved a lot in the last year, or maybe you realised that you have done very little. Either way, the focus must now be on… This stuff is so powerful! It is up to you to allow these simple and effective exercises to serve their purpose by taking them seriously and referring to them frequently. I also suggest that you show the results to someone else so that you have some accountability. This is important because it puts some healthy pressure on you to pursue and achieve. I once had a student who, after one of my heartfelt trading psychology talks, decided to take action and print new business cards, declaring herself, as a Greek Australian, a Financial Trader. She then proceeded to hand these out to her family and friends. Needless to say, private promises we make to ourselves are much easier to break than public declarations of goals.

The difference between knowing how to trade and knowing how to make money from trading is simple. What is the difference between knowing how to trade and knowing how to make money from trading?

______________________________
______________________________
______________________________

Before we move on to the next section, I would like to share with you a little saying that we have at Knowledge To Action:
"Your return will always reflect your investment, so invest heavily into yourself!"

I Want to Learn How to "Trade" the FX Market!

Ask yourself this question out loud, and write your answer below, by speaking your answer out loud and also by writing it down. Hopefully, you've learned that there is indeed a massive difference between just understanding how to trade and understanding how to actually make money from trading.

Think of it like driving: there are normally two different exams you have to pass to obtain your driver's license. The first test is written and assesses whether or not you know the rules of the road. The second test is actually on the road and assesses whether or not you can implement the rules of the written exam.

Question: Would you rather make a mistake on the written test or on the road test? The answer is obviously the written test. That is because the potential consequences are so different. If you mess up the written test, you probably just have to take the test again. If you mess up the road test, you may put yourself or others' lives in danger! This says a lot about which type of assessment is more critical, and this analogy clearly exhibits the significance of implementation over and above information.

So… forget about learning how to trade the FX market! Yes, forget about it! People who know how to trade normally understand a lot of interesting trading topics and can handle themselves in a truly involved trading conversation. This is all fine, but it doesn't produce the desired results: physical, hard cash mounting up in your personal bank account and opening doors you once thought were impossible to open! We believe that the real need is to develop skills for a richer life, not to simply have an academic experience. Get this next point into your head now and don't forget it: I don't need to learn how to trade; I need to know how to make money from trading. This is the foundation of every trading course and personal wealth mentoring session that we offer at Knowledge To Action.

It is a remarkable, yet very true, statement, and to be able to fully appreciate it you must understand something very important: the FX market is a market that is driven by buyers and sellers. Sounds very simple, doesn't it? Well, in reality, this point alone is not unique to the FX market. Every market in the world, essentially, is driven by buyers and sellers. The difference in the FX market is the number of buyers and sellers and the volume they create. This is called liquidity.

Check out this fact: the likelihood of success when trading the FX market with the dominant trend is not only higher than in any other major financial market in the world; it is more probable. The Foreign Exchange market dwarfs all of the other financial markets in the world in terms of participation and volume.
It is by far the fastest growing market in the entire world, and it is currently over three times larger than all of the major stock markets in the world combined. A lot of potential traders fear the FX market and use its size as a reason to stay away from it. Some even argue that this is a massive boom, or bubble, which is bound to burst. This would be true if the growth was merely an inflation of price, but it isn't. It is an increase in participation, making the market even more liquid, which in turn increases the probability of success on a long-term basis. Bring it on!

Think of it like this: let's say that there are one thousand people who are behind a specific cause. Those people are limited to the strength or influence that they hold as a group. Now let's say that the group grows from one thousand to one-hundred thousand. Their influence and strength has increased, as well as the probability of achieving or realizing what they stand for.

If you relate this concept to trading the FX market it looks like this: if we consider the GBP/USD (also known as "cable") chart, we normally see it trending either up or down (it goes sideways for less than 15% of the time). For the moment, we will just assume it is trending up, which means that the buyers are in control of this currency pair and are driving the value of Sterling up against the U.S. Dollar. This means that the Sterling is increasing in value relative to, or against, the U.S. Dollar (the next chapter has more on this concept). Simply put, the buyers have a "cause" or "agenda" they are fighting or trading for, which is to say that Sterling will continue to strengthen against the U.S. Dollar, so that, eventually, they can sell off the Sterling for a profit. The buyers want the chart to continue moving up. This upward trend is incredibly strong due to the amount of participation and money that is being invested into this trend consistently. If you were to trade in the direction of this trend, it would be far more likely that success would occur than if you traded another trend in another market where there is less participation, liquidity, and money behind the move.
Fundamental: Fundamental analysis is based on economic data. Traders who trade fundamentally wait for specific press releases and attempt to jump into big moves. This is very dangerous because the market is normally very volatile during a news release. If the trader takes the bait and jumps in on a false buy or sell signal and the market moves quickly in the opposite direction, it is very difficult for the trader to get out of the position without taking a hit. The reason pure fundamental trading doesn't work for most people, especially beginners, is that it is impossible to define specific entry, stop loss, and take profit prices. This means that trade sizing and reward to risk ratios (I will be explaining each of these ratios in the Risk Management chapter) become very difficult to determine. When this happens, the trade is left to chance. This, combined with the "fast-finger" nature of this style of trading, ultimately results in losses for the vast majority of traders. Equally, there are also huge benefits to trading the news. Large pip scores can be made very quickly, and this type of trading can prove to be very profitable to the seasoned trader.

Technical: Technical analysis is based on chart interpretation. Technical traders are looking for the charts to show very clear setups. Once a setup appears, the trader is then able to precisely determine the entry, stop loss, and take profit prices. This is the biggest practical difference between the two styles: fundamental trading is more free flowing; technical trading is more accurate and specific. The problem with trading only technically is that the market sometimes ignores technical setups on the release of fundamental data. A trader can see a perfect technical setup and place the entry and exit orders around it, only to see the market react to the data released and completely disregard the technical picture. If the news release pushes the market in the desired direction, the result will be profitable. Equally, if the news pushes the market in the opposite direction, the result will be negative. In other words, the trade is left to chance. You cannot leave your account open to chance. You need a solution.

We understand that both styles have their merits, and we understand that both have their limitations. In response to this, we have created a simple combination of the two that provides the best possible environment for success.

Question: At Knowledge To Action, do we trade fundamentally or technically?

Answer: 100% of both! Huh? But that's impossible, isn't it? Let me explain. It works like this:

100% Fundamental: If, fundamentally, there is a lot of news being released on a particular day, we stay out of the market. If, on the other hand, nothing of any significance is happening, then we are fine with sitting down and looking for trading opportunities.

100% Technical: Once we have the "OK" from the fundamental perspective, we trade only technical setups.

In summary, fundamental information tells us 100% whether or not to sit in front of our computers, and technical information shows us 100% where and when to place our trades.

Basic Terminology

In this chapter we are going to explore the big picture of trading and make concrete distinctions between unsuccessful and successful traders. Before we do that, it is important that you have an understanding of the basic terminology.

OHLC Bar

OHLC stands for: Open, High, Low, Close. Like any other financial market, the FX market is measured through price movements that are plotted on a chart. A unique aspect of the FX market is that the movement on the chart is measuring one currency relative to another. This is why the "price" on the chart is referred to as the exchange rate. The OHLC bar shows you where price opens, where it closes, and what the extreme highs and lows are over a specific period of time. Look at the example of an "hourly" bar below:
Notice that the bar is white. This is because the close of the bar was lower than the open. White bars are also referred to as seller bars. This name comes from the fact that the sellers in the market are the ones who are pushing the price down. That is because they want to sell high and buy back low. In this scenario, the sellers have won this particular hour of trading.

A buyer's bar has exactly the same structure to it but has a higher close than its open. Look at the buyers bar below. Notice that the close is higher than the open. It is referred to as a buyers bar because the buyers were able to push the price up higher than the open during the given period of time. Buyers want to buy low and sell high.
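The buyer's-bar versus seller's-bar distinction above reduces to comparing the open with the close. A minimal sketch in Python (the function name and sample prices are illustrative, not from the book):

```python
def classify_bar(open_, high, low, close):
    """Classify an OHLC bar by which side won the period.

    A buyer's bar closes above its open; a seller's bar closes
    below its open (drawn white in the book's hourly example).
    """
    if close > open_:
        return "buyers"
    if close < open_:
        return "sellers"
    return "neutral"  # open == close: neither side won the period

# An hourly bar where the sellers pushed the close below the open:
print(classify_bar(1.6520, 1.6530, 1.6490, 1.6505))  # sellers
```

The high and low are not needed to classify the bar, but they are kept in the signature because they are part of every OHLC bar and matter for the support and resistance ideas later in the chapter.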
Although these bars can represent almost any period of time, the most commonly used timeframes to establish long-, medium- and short-term trends, as well as trade entry prices, are the Daily, 4-hour, 1-hour, and 5-minute charts. The longer the time duration of the bar, the greater the value of information it gives. For example, if you had a black OHLC bar on a 1-minute chart, that means that the buyers won the last one minute of trading. One minute, however, is very insignificant in the overall scheme of things. Compare that to a weekly OHLC bar. If the buyers were to win that period of time, it would tell us a much clearer story as to where the market is pushing the overall exchange rate. Time is king, always.

Stop Loss and Limit (take profit) Orders

Stop Loss: You are going to have a few losing trades. That's just life on the "trading floor." For this reason, you need to have a safety net which cuts your losing trades short, just in case the market shifts very quickly against you. The stop loss does exactly that. It is a market order that instructs your broker to close a losing position at a specific price. It is important that you have a stop loss on every single trade.

A major benefit of trading the FX market opposed to traditional stocks and shares is that your stop loss is almost always "guaranteed." By "guaranteed," I mean that if a trade moves against you there is very little chance that the price will "gap" or jump over your stop loss order. This makes risk management much easier and far more accurate. This is not true in the stock market, where "gapping" occurs quite often. There are two notable reasons for this:

1. The FX market only closes once a week (Friday night). Stock markets close five times per week (each night, Monday through Friday). When a market closes, it must also open. The price the market opens at is not only influenced by where it closed, but also by events that occurred between the close and the open. This creates opportunity for gaps in the market. But the issue is not when it opens; the issue is where it opens. Therefore, the more times the market opens and closes, the more opportunity there is for your stop loss to be gapped.

2. Stock markets are less liquid than the FX market (high liquidity makes it easier to get in and out of a position; low liquidity makes it more difficult), which means the price jumps around much more, far more frequently.

The first example below is a picture of the Yell Group Plc chart. This stock experiences a very quick drop in value and you can see how the bars on the chart gap. Imagine if you were in a Long (buy) position (this means you make money when the chart moves up and lose when the chart moves down) and you had your stop loss placed in the middle of the gap. The effect of the gap in price would be that you lose more money than you had originally intended to risk. This is definitely an issue and makes trade sizing and risk management very challenging.

The second example is a picture of the 5-min EUR/USD chart. In a very short space of time the Euro strengthens over 150 pips in value against the US Dollar. The point here is that there are no gaps even though the price moved very quickly. Each bar passes off to the following bar with no break or gap in the price action.
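The danger just described, where price gaps over a stop loss, can be illustrated numerically. This sketch (the function name and prices are hypothetical, not from the book) compares the risk you intended to take on a long position with the loss actually realized when the stop is filled beyond the intended price:

```python
def loss_in_pips(entry, exit_price, pip_size=0.0001):
    """Pips lost on a long position closed at `exit_price`."""
    return round((entry - exit_price) / pip_size)

# Long at 1.4300 with the stop loss at 1.4250: 50 pips of intended risk.
intended = loss_in_pips(1.4300, 1.4250)
print(intended)  # 50

# The market gaps over the stop and the order fills at 1.4210 instead,
# so the realized loss is larger than the risk that was planned for:
realized = loss_in_pips(1.4300, 1.4210)
print(realized)  # 90
```

In the FX market the two numbers almost always match because the stop is rarely gapped; the second call shows the stock-style gapping scenario the chapter warns about.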
Limit (take profit) Order: Your limit order is the exit order that you want to trigger; it is the opposite of your stop loss. Limit orders and stop losses are both exit orders: the stop loss is an exit with a loss, and the limit order is an exit with a profit. The stop loss stops your losses and the limit order collects your profit at your predetermined target. The great thing about the limit order is that it will collect your money for you without you having to be in front of the computer. Once the order is in place, it collects the money automatically as soon as the price hits the target. It is always nice to leave your trading desk with an open trade and come back to banked money. This is only possible because of the limit order.

The limit order is also very useful for controlling the greed in you. "I know that was my target, but let's just see if I can get some more out of this trade." Rubbish! Later in the book, we are going to talk more about the importance of targeting and taking the money once your target is achieved. No more.

It is true that the market rarely opens at exactly the same price on Sunday as it closed on the previous Friday, but the gap is normally so small and insignificant that it has no real impact on your trading account.

Pip: Pip stands for "Percentage in point". In the FX scene, the words "Pip" and "Point" are interchangeable. A pip is measured as the fourth decimal place of the exchange rate. For instance, a currency quote may look like this:

GBP/USD 1.6505

The 5 is the fourth decimal place and the price increment that we are trading. All pairs that exclude the JPY trade with the fourth decimal as the point. The only exception to this is when there are trading pairs that include the Japanese Yen (JPY). When the JPY is traded the exchange rate will only have two decimal places. It will look like this:

USD/JPY 91.57

Notice that in this exchange rate the second (not the fourth) decimal is the increment that we are trading. This is only because the JPY is involved.

What does the exchange rate mean? Here is an illustration: in the GBP/USD quote above, if you wanted to buy just 1 GBP you would have to sell 1.6505 USD.

Principle: When the chart is up, the base currency is up and the terms currency is down. When the chart is down, the base currency is down and the terms currency is up.

Take a look at the example below. The currency pair in the example is the Australian Dollar with the Swiss Franc (AUD/CHF).

Question: According to the trend, is the AUD strengthening or weakening against the CHF?

Answer: Strengthening. Remember, when the chart is up the base is up. Or you could say the CHF is weakening against the AUD.

In the next example, you can see the daily chart of the Euro and New Zealand Dollar (EUR/NZD).

Question: According to the trend, is the NZD strengthening or weakening against the EUR?

Answer: Strengthening. If the chart is down the base is down, which means the terms currency is up!

Trading the FX market is much different than trading equities. When trading equities you are speculating on the value of an individual share, deciding whether there will be an increase or decrease in value. When trading the FX market, you are trading economies, one economy against another.
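The pip conventions just described (fourth decimal for most pairs, second decimal for pairs involving the JPY) can be expressed as a pair of small helpers; the function names are my own, not from the book:

```python
def pip_size(pair):
    """A pip is the 4th decimal for most pairs, the 2nd for JPY pairs."""
    return 0.01 if "JPY" in pair else 0.0001

def pips_between(pair, price_a, price_b):
    """Number of pips separating two exchange rates for a given pair."""
    return round(abs(price_a - price_b) / pip_size(pair))

# A 50-pip move means something different on each kind of pair:
print(pips_between("GBP/USD", 1.6505, 1.6455))  # 50
print(pips_between("USD/JPY", 91.57, 91.07))    # 50
```

The `round` call matters: exchange rates stored as floating-point numbers pick up tiny rounding errors, so dividing by the pip size rarely lands on an exact integer.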
Pips are important because they hold monetary value. For each pip you collect, you receive money; for each pip you give away, you lose money. The process of calculating the value of your pips is called trade sizing. You will learn this in the Risk Management chapter. Trading the FX market is simply trading economies, one against another, and each pair is its own entity. For instance, the EUR/NZD chart above indicates that the economy in New Zealand is getting stronger and stronger in relation to the European economy. We know this because investors are selling the Euro in order to buy the New Zealand dollar. When the chart moves up, it indicates that investors are buying the base and using the terms as the "funding" currency. When the chart moves down, it indicates that investors are selling the base to buy the terms.

Buy and Sell (Long and Short)

Collecting or losing pips is determined by the relationship between the type of trade you place and the subsequent movement of the market. There are two different positions you can take:

Buying, or "going long," is an order that you place when you believe the bars on the chart are going to move up (the base will increase, you hope). When you go long, you collect every pip that the exchange rate moves up. Let's look at the EUR/USD daily chart below. Your buy price is 1.4255, assuming that you are hoping to achieve a 150 point move.

Question: To collect 150 points, what price would the exchange rate need to rise to?

Answer: 1.4405 (1.4255 + .0150)

In summary, if you have a technical reason to believe that the exchange rate and, therefore, the chart is going to move up, then you enter a buy position. From there you collect every pip that the exchange rate increases by and you lose every pip that the exchange rate decreases by.

Selling, or "going short," is an order that you place when you believe that the chart is going to fall down. Note: you do not need to have bought to sell. If that concept confuses you, then just use long and short; all you are doing is speculating on a direction, up or down. When you go short you collect every pip that the exchange rate falls. Let's look at the USD/SGD daily chart below. Your sell price is 1.4634, assuming that you are hoping to achieve a 135 point move.

Question: To collect your 135 points, what price would the exchange rate need to fall to?

Answer: 1.4499 (1.4634 - .0135)

In summary, if you have a technical reason to believe that the exchange rate and, therefore, the chart is going to move down, then you enter a sell position. From there you collect every pip that the exchange rate decreases by and you lose every pip that the exchange rate increases by.

Support and Resistance

Profitable FX traders use support and resistance levels more than any other FX price level. This is because support and resistance levels are very useful for determining heavy price movements and also provide a road map for the traders to trade within.
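The two worked examples above (a 150-point long from 1.4255 and a 135-point short from 1.4634) follow one simple rule: add the move to the entry for a long, subtract it for a short. A sketch, assuming non-JPY pairs where a pip is 0.0001; the function name is illustrative:

```python
def target_price(entry, pips, direction, pip_size=0.0001):
    """Exchange rate the market must reach to bank `pips` points.

    A long collects pips as the rate rises; a short collects
    pips as the rate falls.
    """
    move = pips * pip_size
    if direction == "long":
        return round(entry + move, 4)
    return round(entry - move, 4)

# The two worked examples above:
print(target_price(1.4255, 150, "long"))   # 1.4405 (EUR/USD buy)
print(target_price(1.4634, 135, "short"))  # 1.4499 (USD/SGD sell)
```

Rounding to four decimal places keeps the result on a whole-pip boundary, matching the quoted answers in the text.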
Support: A support level is a price that acts as a floor for the price action (black and white bars). That means that when the price falls into a support level, it is held up by it and does not fall below it; it is supported up. Support levels occur naturally in the market when the price of the currency pair becomes too cheap. Remember that a falling exchange rate indicates a sell-off of the base currency, as described earlier in this chapter (when the chart is down the base is down). When the base currency becomes too cheap, the market responds by buying back the base currency: traders begin buying it back, which pushes the price up. Buying increases the value of the currency because everyone wants it, whereas selling decreases the value of the currency because no one wants it. This is what causes the price action to bounce off of the support level. A support level is confirmed when this price that is too cheap is recognised by the market at least three times. Refer to 2.11 below:

In this example, the price action has been supported three separate times at a specific price level. This means that every time the exchange rate hits this price, it is supported up. Now that this support level has been established, traders will trade it in one of two ways. They will either go long the very moment that the price action hits the support level in an attempt to get the most efficient entry, or they will wait for the support level to be broken and go short to catch the next move down. Either way, the support level is recognised as being significant.
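The three-touch confirmation rule just described can be sketched as a simple count over the bar extremes. The tolerance used to decide whether a bar "touches" the level is an illustrative assumption of mine, not a figure from the book:

```python
def is_confirmed_level(bar_extremes, level, tolerance=0.0005):
    """A support/resistance level is confirmed after three touches.

    `bar_extremes` holds the bar lows (for support) or highs (for
    resistance); a 'touch' is any extreme within `tolerance` of
    the level. The tolerance value is an illustrative choice.
    """
    touches = sum(1 for p in bar_extremes if abs(p - level) <= tolerance)
    return touches >= 3

# Price bounced off 1.4200 three separate times, so the support
# level is treated as confirmed:
lows = [1.4201, 1.4350, 1.4199, 1.4280, 1.4203]
print(is_confirmed_level(lows, 1.4200))  # True
```

Passing bar highs instead of lows applies the same rule to a resistance level, since confirmation works identically in both directions.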
Resistance: A resistance level is a price that acts as a ceiling for the price action. That means that when the price rises into a resistance level, it is pushed down by it and is not allowed to go any higher. The price is resisted from going up any further. Resistance levels also occur naturally in the market, but unlike support levels, which form when the price of the currency pair becomes too cheap, resistance levels occur when the price of the pair becomes too expensive. Remember that a rising exchange rate indicates buying of the base currency, as described earlier in this chapter (when the chart is up the base is up). When the base currency gets too expensive, traders begin selling it back, which, in turn, pushes the price down. A resistance level is confirmed when this price that is "too expensive" is recognised by the market at least three times. Refer to 2.12 below:

In this example, it is easy to see that the market is treating this specific price level as a resistance. If the price were to break up through this ceiling, it would be a strong indication of strength for the base currency, and traders would begin buying (normally after the resistance level has been re-tested; more on that later!), which would contribute to the upward push.

Moving Averages

Moving averages are used to smooth out short-term market fluctuations to assist in highlighting longer-term trends, and to help determine entry and even exit orders for seasoned traders.
you can then determine if you have a trend line that is in agreement with the moving averages. you must first know where the price is trading in relation to the 50 and 200 EMAs—either above or below.Financial Freedom through FOREX To trade using the strategies that I am going to show you. Notice the agreement between the trend line and the moving averages. Trend Lines A trend line allows you to establish the general direction of the price action and provides you with a guideline to trade from. A trend line must have three touches on the price action to be in agreement. The trend line is up and the price is above the 50 and 200 EMAs. you must understand the relationship between the price and the 50 and 200 EMAs. Here is an example of a downtrend with agreement between the trend line and the moving averages. Look at the example below. When a currency is bought in large quantities. Bearish moves are normally very strong as they are fuelled by fear and anxiety in the market. Bearish: A bearish currency is the opposite of a bullish currency. This means that we trade short on this pair. Dollar is going up. Another function of the trend line is to allow you to determine what phase of the current price cycle you are in. Bullish: A bullish currency is one that is being bought by traders and investors. 37 .” These words are normally used to describe the general direction of a particular market. The Angry Bear. Dollar. The trend line is down and the price is below the 50 and 200 EMAs.S. Dollar (which means they are selling other currencies) and the value of the U. You must always know which trend is dominant on the daily chart because that is the trend that informs you whether to trade long or short. Bullish and Bearish Most people who have had some exposure to trading or investing have heard the terms “bullish” and “bearish.S. I am going to describe both words in the context of foreign exchange. if the market is bullish toward the U.S. 
This is one of the most important aspects of trading and will be covered in the section entitled Cyclicity. presents a strategy built entirely around the fast-paced movement of a sudden bearish sentiment towards the U.Greg Secker & Chris Weaver Notice the agreement between the trend line and the moving averages. stock. Chapter 5. or currency. Therefore. that means traders are buying the U. This means that the particular currency is being sold off by traders and investors (which means currencies are being bought) and the value is therefore decreasing as the supply of the bearish currency increases. Dollar.S. the value of that particular currency increases as the supply decreases. 38 . I am going to take you through the topic of cyclicity and give you some incredible methods to understand the natural flow of the FX market. Reversal bars are important because they are the first sign that there is a change of sentiment toward the currencies being traded.” and a specific FX strategy! It is going to blow your mind! Price is predictable—FACT. Just wait until you combine what you learn about cyclicity with “the bounce. Cyclicity It is impossible to trade successfully without an understanding of price cyclicity. or an undersized black bar proceeded by at least three white bars. Anyone who disagrees with that statement does not understand the smooth rhythms of the FX market. I’ll give you the big picture of what is actually going on—imagine that. It is critical to know what a reversal bar is because you need this knowledge to execute trading strategies and to understand cyclicity. In this section. Here.Financial Freedom through FOREX Reversal Bars A reversal bar is an undersized white bar proceeded by at least three black bars. or directionless. and vice versa. DO YOU HAVE THEM? If you are holding this book then you must have at least one lung. they inhale sell orders and exhale buy orders (or vice versa. 
You can do this because of the natural rhythm of their breathing pattern when they are in a perfectly stable state. It is. and these humans have lungs! And it is because of this that the FX market also has lungs—rather large lungs. In the same way that our bodies could not survive if we could take in oxygen but were unable to release carbon dioxide.Greg Secker & Chris Weaver The moment that the full revelation of the power of cyclicity makes its way onto your trading desk is the moment when things really begin to kick off. however. then you know that it is possible to predict the end of their inhalation and the beginning of their exhalation. 39 . You can also predict the end of their exhalation and the beginning of their next inhalation. very difficult to do this when people are running around out of control in a volatile. Why? Because we are human and humans have lungs – we need them to survive. The main difference being that the lungs of the FX market do not inhale oxygen and exhale carbon dioxide. depending on the trend). the FX market could not survive if it could only take in buy orders without the ability to release the sell orders. Question: Who is trading the FX market. So how does this help you to make money? Have you ever heard people breathing while they were in a deep sleep? If you have. state. rather. Trending pairs are stable to trade—this is the deep sleep breathing that makes price predictable. and if I am writing this book then I must also have at least one lung. LUNGS. machines or humans? Answer: Humans Yes. in fact. This is just one of the phenomenal parallels that the FX market has with the individual traders who trade it. the trend is only your friend half of the time.Financial Freedom through FOREX Markets that are not trending are volatile. Why? Because the market inhales and exhales as it trends. you have probably heard the phrase. Actually. or directionless. 
Let me show you an example: Notice how the currency exhales out in the direction of the dominant trend. Trend = Predictable inhalation/exhalation No trend = Unpredictable inhalation/exhalation Therefore you should only trade during a trending market wherein the price cycles are predictable. If you have any exposure to FX trading.” This is true but incomplete. in the same example. that the currency also inhales against the dominant trend? 40 . Can you see. “the trend is your friend. markets—this is the running type of breathing. which means that price is not predictable. The other half of the time it is your enemy. we have taken a simple truth and refined it to create yet another scenario that provides you with the best environment for trading success. Phase 1 represents the exhalation and phase 2 represents the inhalation. Phase 2 is always countertrend and certainly not your friend. This concept demonstrates very clearly that profitable trading is much more about the quality of your trades and much less about the quantity of your trades. With this understanding. You now know that trading with the trend is profitable half of the time and the other half of the time it is not. Friend) and Phase 2 (Inhale. 1-2 count. it is clear that if you are going to trade with the dominant daily trend (which you are). When you combine both phases you have a complete cycle. Enemy) Take a look at this example: 41 . then it only makes sense to trade during phase 1. There are Two Phases in a Cycle: Phase 1 (Exhale. phase 2 begins. Once phase 1 is finished. This concept can be simplified even further with our classic 1-2. and the inhalation goes against the dominant trend (…only half of the time).Greg Secker & Chris Weaver In the example above you can see that the exhalation is in line with the dominant trend (the trend is your friend…). From the examples above. ” which is the beginning of phase 1? This is certainly the correct phase to trade in. 
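The 50/200 EMA trend rule described earlier in this chapter can be expressed in a few lines. This is an illustrative sketch only, not code from the book; the EMA seeding and the sample price series are my own assumptions.

```python
# Sketch of the chapter's trend rule: price above both the 50 EMA and the
# 200 EMA means the trend is up; below both means it is down.

def ema(prices, period):
    """Exponential moving average: recent prices weighted more heavily."""
    k = 2 / (period + 1)      # standard EMA smoothing factor
    value = prices[0]         # seeded with the first price (an assumption)
    for price in prices[1:]:
        value = price * k + value * (1 - k)
    return value

def trend(prices):
    """Apply the 50/200 EMA rule to the latest price."""
    last = prices[-1]
    ema50, ema200 = ema(prices, 50), ema(prices, 200)
    if last > ema50 and last > ema200:
        return "up"      # trade long with the dominant trend
    if last < ema50 and last < ema200:
        return "down"    # trade short with the dominant trend
    return "no clear trend"  # price between the EMAs: stand aside

# Hypothetical daily closes drifting steadily upward:
closes = [1.5000 + 0.0005 * i for i in range(250)]
print(trend(closes))  # prints "up" for this rising series
```

Because the EMAs lag the price, a steadily rising series always finishes above both averages, which is exactly why the rule reads it as an uptrend.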
Can you see how all of the charts agree on the "push up," which is the beginning of phase 1? This is certainly the correct phase to trade in. If all of the time frames are in phase 1, that is a powerful signal to trade. If all of the time frames are in phase 2, that is a powerful signal not to trade. Phase 1 is tradable. Phase 2 is not.

What next? By now it should be clear that you trade with the direction of the dominant daily trend. The daily chart gives you the clearest understanding of the big picture. To determine the trend you simply look at the price action in relation to the 50 and 200 EMAs and then establish whether or not you have a trend line. The next step is to understand which phase of the cycle you are currently in. From there you can scale down through the 4-hour and 1-hour charts to look for agreement. Check out this next example of the British Pound against the U.S. Dollar (GBP/USD). Once you have established all of this information, you can begin to look for specific setups to place your orders around.

Question: What is the best time to enter a trade?
Answer: At the very beginning of phase 1, on "the bounce."

Question: What is the best time to exit a trade?
Answer: At the very beginning of phase 2, on "the pull back."

Let's break this down. If phase 1 is the exhalation and the exhalation is in agreement with the trend, then the most suitable time to enter a position is the moment that phase 1 begins. And if phase 2 is the inhalation and the inhalation is in disagreement with the trend, then the most suitable time to exit a position is the moment phase 2 begins. That's why I described the best time to enter a trade as the "moment phase 1 begins," rather than the "moment phase 2 ends." It's also why I described the best time to exit a trade as the "moment phase 2 begins," rather than the "moment phase 1 ends." In other words, it is much easier to spot the beginning of a phase than the end of one!

This all sounds great, but how do you do it? Surely this is what everyone is trying to do—calling the high and the low. That is exactly correct, which is why we are not interested in calling the high and the low—it's impossible. What does all of this mean? The market does not provide us with the ability to spot the exact moment that a phase or trend ends; instead it provides us with an indication of the beginning of a new phase. It does this by showing us a corner, or a "bounce." It comes down to this: if you can spot the changing of direction in the context of the 1-2 cycle, then you will experience massive trading success.

The "Bounce"
The price action will always come back to either a trend line or a moving average (sometimes the trend line is the moving average). The "bounce" is what is likely to happen once it gets there. The first indication that a bounce is likely is when an undersized reversal bar, which is in agreement with the dominant daily trend, forms on the trend line. By agreement I mean a black undersized bar in an uptrend, or a white undersized bar in a downtrend. As you can see, this is very easy to spot in a stable, trending market.

Take a look at the example below:

Let's see what happens next…

You can view the small bar as a spring. The spring is pushed down very far, and the trend line, as well as the moving average, provides the lower support. Once the spring is opened, you get a nice even bounce! This is the beginning of phase 1, or the exhalation—time to trade!

You may have also noticed that the small reversal bars in the above examples followed large white bars. Those white bars represent the market inhalation, which is the profit-taking from the previous long position. Recalling the analogy of the lungs, wherein the market inhales to sell orders and exhales to buy orders (in an uptrend), these little reversal bars indicate that the buy orders are beginning and phase 1 is underway.

If you look at the same chart, you will notice another potential bounce. There is one thing missing though. Do you know what it is?

Question: Why is this potential bounce not as likely as the previous example?
Answer: Because it has not yet pulled back to the trend line or moving average!

A reversal bar sitting on, or even just below, the support for the spring will be far more successful than one that has not yet reached it! Remember: quality over quantity. Now, you may look at this chart and point out that there is a nice 350-point move that could have been easily traded at a nice 3:1 reward-to-risk ratio. Maybe so, but when you are trading you must have some rules! The purpose of this book is to show you the trading setups with the highest probability of success.
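The reversal-bar condition used above, an undersized bar preceded by at least three bars in the opposite direction, can be checked mechanically. The sketch below is hypothetical: the bar encoding and the "undersized" threshold are my own assumptions, not rules from the book.

```python
# Illustrative reversal-bar check. Bars are (direction, body_size) tuples,
# direction +1 for an up bar and -1 for a down bar.

def is_reversal_bar(bars, i, undersized=0.5):
    """True if bar i is undersized and the three bars before it all point
    the opposite way. `undersized` (a made-up threshold) means the body is
    less than half the average body of those three preceding bars."""
    if i < 3:
        return False                       # not enough history
    direction, size = bars[i]
    prev = bars[i - 3:i]
    if any(d == direction for d, _ in prev):
        return False                       # preceding bars must all oppose it
    avg_prev = sum(s for _, s in prev) / 3
    return size < undersized * avg_prev

# Three large down bars followed by one small up bar:
bars = [(-1, 30), (-1, 25), (-1, 28), (+1, 10)]
print(is_reversal_bar(bars, 3))  # True
```

In an uptrend this pattern would fire at the end of the pullback, which is the "spring on the support" moment the text describes.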
The private trader has the opportunity to see these cycles forming and cash in on the predictability of the movement. Think of it like this: In an uptrend, the buyers are pushing the market up. At some point, the buyers need to collect profit and exit the buy trades. To exit a buy trade you must sell. So, when the investors who control the market (big money—really big money) begin selling, the market corrects and retraces back to the trend line. At this point, people are greedy and investors want more! They will try to duplicate the exact scenario that just generated huge profit, and the buying kicks in again. This is why a stable, trending market creates similar-sized cycles—it is an attempt by the market to recreate a profit-making setup.

Question: Are you getting really excited yet?
Answer: _____________________________

A downtrend works exactly the same way—just in the opposite direction, much like we see in the USD/CAD chart below. In this example, the sellers are in control of the pair, so the chart is pushing down. The large money is selling, but only until the profit-taking begins and then the buying kicks in. This causes the exchange rate to retrace back to the trend line. At some point, the selling begins again in an attempt to generate more profit and recreate the last successful trading cycle.

Now we learn other very useful information. Notice that the phases, and therefore the cycles, are beginning to shrink in depth and length. Note: when the cycles begin to tighten up, the message is that the profit-taking is occurring much faster than normal because of uncertainty about the future direction of the currency pair. Investors are no longer willing to wait as long before they cash in and take their profits.

Cyclicity is what makes the price predictable, and predictable price is what provides reliable trading setups. If you can become a master of the cycle, you can make big money in the FX market.

Trading the Small Bars (and Why Amateurs Don't)
One of the most common mistakes an amateur trader makes is trading off an oversized bar. An oversized bar must be seen as the move itself and not the entry for the move. Let me show you an example from the daily chart of the USD/CHF. Most of the time, amateurs will look at very large up, or black, bars to give them the certainty, or "proof," they need before entering long positions. The problem with this is that, once the bar pushes up a significant amount, the next move is likely to be down—large buying will be ended by quick selling! Think about this in terms of cyclicity. When buyers push the chart up very quickly, they are planning to take profit at some point, which means at some point they must sell. The selling then pushes the chart back down and triggers all the stop loss orders of the amateur traders, who are forced to exit with a loss. Professionals get richer and amateurs get poorer. So why does it happen time and time again? The answer is simple. It's called "social proof."

Social Proof
The definition of social proof explains that "one means we use to determine what is correct is to find out what other people think is correct" (Cialdini, The Psychology of Influence and Persuasion, HarperCollins Publishers). As humans, social proof influences the majority of our decisions. This is not necessarily a bad thing. Most of the time, social proof assists us in making decisions which might otherwise be very difficult and time consuming. This goes far beyond trading and chart interpretation; it invites us to look deep into our very makeup as humans. Let me share an example with you.

At Knowledge To Action we teach trading, money management, and wealth building programmes all over the world. Sometimes we travel as far as New Zealand; other times we stay closer to home. Personally, I get very confused when I visit a new airport. I never know where to go to get my luggage. I normally stand still for a few moments wondering which direction to walk before I revert to social proof. It is then that I look around me, find out where everyone else is walking, and begin to follow. I don't even know these people, but I believe that if they are all walking in a particular direction, then it must be the right way to go—and it is! By doing this, I follow the crowd and I end up being led directly to my luggage. If I were certain about where to go, then I wouldn't look to others for direction; because I am uncertain, I follow the crowd.

As I said earlier, social proof is normally a good thing. Think about it like this: if you know the correct thing to do, you do not need to follow what other people are doing. Chances are, you may not even notice what others are doing. If, however, you feel uncertain, you naturally look to the crowds to show you what action to take. The environment where the need for social proof thrives is one of uncertainty.

There are, however, times when it is not beneficial to look for social proof, like when it comes to your personal finances. Then relying on social proof can actually be devastating! Let me show you what I mean: When it comes to investing, trading, or even the handling of personal financial matters, the average person lacks confidence and certainty. Because of this, they naturally look for social proof, like in this type of situation; instead of acting, they wait to see what "everyone else" is doing. This is why the amateur traders wait to see the price move a long way before they enter a position—whether they want to admit it or not, they have no confidence in their ability to accurately predict price movement. It is just like me at the airport. In other words, they exchange efficient and objective entry positions for emotional security and social proof.

Studies by psychologists have shown that only 5% of the population are initiators whereas the remaining 95% of the population are imitators. In other words, only a small percentage of people make things happen; the rest simply wait to see what happens. This is why only a small percentage of the population are wealthy and the rest are not. Initiators are action takers! This is a financial principle that can be seen very clearly in the FX market. There is no money in imitation; it is all in initiation. Money is made when you have the confidence to trade the move before it happens, not after it has come and gone. It takes a confident trader, who is not concerned with the noise of the crowd, to buy into a sell off or to sell into a buy off—this is professional trading! And remember, scared money never makes money.

This is how we trade at Knowledge To Action—with certainty and confidence. This is possible because our strategies and principles model what professional money is doing, because that is where all the money is. Allow us, through this book or through one of our programmes (www.knowledgetoaction.co.uk), to equip you with the confidence and certainty to be an initiator.

So, given the fact that you must have money to trade, it's logical to conclude that the more money you have, the more money you can make. It is essential that this concept is understood in relation to funding your trading account.

Question: What happens to your ability to generate a profit if you lose some of your money?
Answer: Your ability to generate profit decreases in line with what you lose.

Your ability to earn decreases when you lose money and increases when you generate money. From a risk management perspective, you could say it like this: the less money you lose, the more money you can make. Here is a simple way to remember it:

Capital Preservation → Income Generation

Too many people believe that the way to profitable trading is through monster-size winning trades. Don't get me wrong, huge returns are great and you can expect to see them in your trading account if you stick to the rules in this book. The problem is that huge winnings generally involve huge risks. You have to keep yourself in the game long enough to seize these extraordinary opportunities when they arise, and that can only be achieved through proper risk management.

Principle: Large amounts of money are only made when small amounts of money are risked.

Let me give you a real life example from one of our students at Knowledge To Action: I'll never forget seeing this man come running over to me, erupting with energy and excitement. He was completely bursting. "I have more than doubled my account!" he shouted. "And I have done it in just one week!" You can just imagine his excitement and enthusiasm. He then went on to explain that not only had he doubled his account in one week, but that this was his first week of trading. At the beginning of the week he had £5,000 in his trading account and he now had over £12,000. That is a 240% return in one week! That's awesome, right?

My response shocked him. "That is far too much," I said very carefully. Now, before we go any further, you must understand that at Knowledge To Action we embrace enthusiasm and we especially love to hear about huge returns for our graduates. But this level of growth in one week is not real, sustainable growth; if your account grows like this, you have done something wrong. The truth is this: within that first week, his account was either going to grow at an outrageous rate or disappear at an alarming speed. This type of trading is nothing more than gambling on the market, and it is not a sustainable form of profit generation or the way a professional trading account should be managed. This type of trading puts you on a rollercoaster that grips your emotions and can leave you feeling deflated with little or no money. It was only a matter of time until his luck, along with his money, would run out. I went on to explain to him that if he continues to trade like this, he will not only lose the £7,000 he just made, but he will also lose the £5,000 that he started with. He did not want this to happen and he graciously took my advice.

The interesting point is that a 1% increase on either trading account is likely to come from the exact same trade setup. For instance, a 1% increase on a $100,000 trading account is $1,000, whereas a 1% increase on a $1,000 trading account is only $10. The difference, obviously, is the size of the account. This is a massive difference and will psychologically impact the individual who is trading. If winning trades equate to only $10, the likelihood of persevering and sticking to the money management rules decreases. This is because the winning trades do not have the immediate power to enhance your lifestyle, and the losses are all so small that there is a natural tendency to increase risk. This amount of uncertainty is unnecessary and unproductive.

Side note: Too many FX trading accounts are blown out, or never get a fair chance to get started, due to underfunding and lack of commitment. These things are common. This is not to say that it is impossible to grow a small account into a larger account. You just have to start small in order to finish BIG!

Leverage
Leverage. What is it and how do you use it to increase rather than decrease your trading account? Only certain aspects of leverage are really necessary to understand; what is important is that you understand how to utilise it for your benefit.

IMPORTANT POINT: The money in your trading account is what allows you to control or leverage positions in the FX market. Due to the liquidity of this market, you can create large positions through the available leverage. This is good, and that is what we want, but it can be deadly.

There are plenty of complex definitions of leverage, but the simplest (best) definition is this: Leverage is the ability to trade as if you have more capital than you actually have. Leverage gives you the power to control positions in the market that are much greater than your current capital position. You can think of leverage like a large ship. The ship, in its entirety, is very large, yet the direction that it moves is controlled by a relatively tiny wheel. The small wheel controls the direction and destination of the larger ship. Your capital controls the direction and destination of the larger market position.

Possibly the most familiar type of leveraged financial contract is a mortgage. When you buy a house you are creating or opening a position in the housing market. Let's assume you pay $500,000 for a house. Now, unless you bought the house with $500,000 cash, you probably used a mortgage to leverage your deposit (capital) and to enhance the size of your position in the housing market. You have opened a $500,000 position in the housing market. The scenario looks like this:

$500,000 housing market position/cost of house
$50,000 Deposit (10% of purchase price)
$450,000 mortgage required
10:1 Leverage

The mortgage finance allows you to complete the transaction by only depositing 10% of the true capital required to buy the home. In other words, because of the mortgage, you are now controlling a $500,000 position with a deposit of only $50,000. You are controlling 100% of the position with only 10% of the capital—a 10:1 leverage position.

All FX brokers offer plenty of leverage to their clients. The usual amount of leverage available is 100:1. (Some brokers will even offer up to 500:1 leverage—this is crazy!) 100:1 leverage means that you can trade as if you have 100 times the amount of capital that you actually hold.

Question: If leverage allows you to trade as if you have 100 times the amount of capital that you actually have, how much faster can you make money?
Answer: 100 times faster! (Is this good or great?)

Question: If leverage allows you to trade as if you have 100 times the amount of capital that you actually have, how much faster can you lose money?
Answer: 100 times faster! (Is this bad or terrible?)

Think about the effect that leverage has on your profit or loss within the home purchase example from before. Because of the leverage, every 10% movement in the value of your home is multiplied by 10—10:1 leverage. You should be able to recognise very quickly that it only takes small percentage movements in the housing market to dramatically impact the profit or loss on your initial investment of $50,000. Let's consider a 10% increase in the value of your property over 12 months. The property is now worth $550,000. What is your return on the $50,000 invested? 100%! Yes! You invested $50,000 of your own capital and, because of the mortgage, you were able to open a position ten times the size of that. This is great when the market moves in the desired direction. Unfortunately, the market sometimes moves against you. Had the property dropped in value by 10%, it would only be worth $450,000, resulting in a complete loss of the original investment.

Keep in mind that this example is only leveraged at 10:1. In the FX market we are afforded much greater leverage than that—even 100:1! Imagine being able to control a $1,000,000 position with just $10,000. Now imagine all of those traders who have not invested the time or money into learning how to responsibly handle and manage their trading accounts. What results do they achieve? The answer is obvious. The truth is, as professional traders and trading coaches, we meet these people all of the time. Some people refuse to get serious and really learn how to trade. They then proceed to blindly jump into the most dynamic financial market the world has ever known and then wonder why they end up bust. The leverage is too great. There is also a massive lack of confidence and belief in their pursuit of financial independence through trading. To do this just doesn't make sense. This is serious business for serious traders because serious money is at stake.

New traders will generally accept the implications of leverage in one of two ways: with love and faith or with hate and fear. Unfortunately, neither of the two works; we need to be far less extreme than that and have a mixture, or balance.

Love and Faith
Amateur traders who love leverage strongly believe that leverage loves them back; they believe that leverage will always work for them. This is a very romantic way to consider the power of leverage and will result in an empty trading account. With this approach there is no respect or reverence for leverage or the market, only a wish and a hope of getting rich quick. It doesn't work that way. Understand that trading in this market with strategies that work can generate large amounts of cash, but this kind of trading also comes with risks.

Hate and Fear
Amateur traders who completely fear leverage are missing out on its benefits. They believe that leverage is there to punish them. Instead, they live out a completely timid and unproductive trading experience. It is because of this that they never grow their account.

BALANCE
The fact is, you need to trade with a healthy balance of faith and fear. Getting this type of balance is only possible by adhering to the money management strategies.

GREAT, AND…
This is all very exciting, but with no practical way to implement these principles it will be of no use to you. We need a system of getting the information from this book straight into your trading account. Can you imagine having a tool that automatically gives you the exact balance required to trade and, not only that, totally transfers the focus from leverage and risk to consistency and sustainability? Would you consider this worth knowing? Yes, of course! The solution is trade sizing! This is a practical aspect of trading that will help make your dreams come true!
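Before moving on to trade sizing, the leverage arithmetic from the mortgage example above reduces to one line: the market's percentage move multiplied by the leverage gives the move in your own capital. A minimal sketch, using the chapter's 10:1 and 100:1 figures:

```python
# Sketch of the leverage arithmetic discussed above (illustrative only).

def leveraged_return(market_move_pct, leverage):
    """Return on YOUR capital when the market moves by market_move_pct."""
    return market_move_pct * leverage

# The 10:1 house example: a 10% rise doubles the $50,000 deposit...
print(leveraged_return(0.10, 10))    # 1.0, i.e. a 100% gain on capital
# ...and a 10% fall wipes the deposit out completely.
print(leveraged_return(-0.10, 10))   # -1.0, a 100% loss
# At 100:1, a mere 1% adverse move erases the entire margin:
print(leveraged_return(-0.01, 100))  # -1.0
```

The symmetry is the whole point of the section: leverage multiplies losses exactly as fast as it multiplies gains, which is why the balance of "faith and fear" matters.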
as well as how a successful trading account will grow exponentially).000 – 10% = $90. This is a truth that is completely unknown to amateur traders and completely understood by professionals. right? WRONG. It minimises losses and assists in creating real. the more money you lose.000 account. So you are now 10% down on your original $100. you place several losing trades that bring your account down to $90. The fact is. Unfortunately. This is the negative power of an account compounding in the wrong direction. the more difficult it becomes to get back to breaking even.000).000 + 10%).000 ($99.000 account.000). This is a 10% loss on your account ($100. (We will look at the positive power of compounding our trading accounts in a later chapter. It is important to remember that your trading success or failure is always measured in percentage earned or lost—just like any other fund or investment. 10% of $90. all you need to do is get the 10% back. It is as simple as that.000 + 11.1% on your account balance of $90. before we look at the exact method of trade sizing.Financial Freedom through FOREX Trade sizing is the most effective way to make it difficult to lose money. If you don’t trade size accurately you will not be a successful trader. No problem.000 (your new account balance) will not restore you to your initial account balance. You must earn 11.000 to bring the account back up to $100. what percentage increase would you need to achieve to bring you back to even? Answer: 400% Check it out below! Initial trading investment $100. To do this. but let’s look at the true result: 59 . your maximum risk on the trade is $1.000 account and you lose on ten consecutive trades (if you finish this book. To increase your new account balance back to your original balance you must double it (a 100% increase!). you must know the Golden Rule: 1% Max Risk per Trade.000. secure approach to risk managing your account. this is very unlikely to happen to you).000 x 1%). 
Question: If you were to lose 80% of your account, what percentage increase would you need to achieve to bring you back to even?

Answer: 400%

Check it out: an initial trading investment of $100,000 less 50% in losses leaves a new balance of $50,000. To increase your new account balance back to your original balance you must double it (a 100% increase!). Likewise, an 80% loss leaves $20,000, which must grow fivefold (a 400% increase) just to break even.

1% Maximum Account Risk per Trade
The information contained in this chapter so far has shown you why risk management is essential: real money is at stake and can be lost quickly. This is not meant to scare you away from trading; it is meant to direct you into professional account management. As stated above, the purpose of this chapter is to create a sound, secure approach to risk managing your account, and the key to exercising proper risk management consistently is to use an objective method of determining risk. You must know the Golden Rule: 1% maximum risk per trade. You should never risk more than 1% of your trading account on any single trade! If you are trading with a $100,000 account, your maximum risk on the trade is $1,000 ($100,000 x 1%). This is the basis of our risk management setup.

I often see amateur traders managing their risk based on nothing more than a particular amount of money, such as $2,500 per trade. The problem with this style of risk management comes when the account is trading at around $25,000: it is utter madness to risk 10% of the total account balance on a single FX trade. It is mandatory that we create a system in which, every time there is a loss, the amount of money at risk on the following trade diminishes in line with the account balance. As the account decreases, the risk decreases (remember, large amounts of money are only made when small amounts of money are risked). Conversely, as the account increases, the opportunity and ability to generate profit increase with every single trade. It is literally the best of both worlds: Capital Preservation → Income Generation. This is wonderful because we understand that we must have money to trade, and here we are making sure that we can never lose all of our money.

Why 1%?

Question: If you are risking 1% of your account balance per trade, how many consecutive trades must you lose to completely wipe out your account?

Before you answer, have a look at this example. Let's say that you are trading with a $100,000 account and you lose on ten consecutive trades (if you finish this book, this is very unlikely to happen to you). With a 1% account risk you may expect your account to be down 10%, but let's look at the true result:

Trade 1:  $100,000.00 – 1% = $99,000.00
Trade 2:  $99,000.00 – 1% = $98,010.00
Trade 3:  $98,010.00 – 1% = $97,029.90
Trade 4:  $97,029.90 – 1% = $96,059.60
Trade 5:  $96,059.60 – 1% = $95,099.00
Trade 6:  $95,099.00 – 1% = $94,148.01
Trade 7:  $94,148.01 – 1% = $93,206.53
Trade 8:  $93,206.53 – 1% = $92,274.47
Trade 9:  $92,274.47 – 1% = $91,351.72
Trade 10: $91,351.72 – 1% = $90,438.21

Now, before you give your answer to the question above, consider the trend over the ten losing trades with the 1% risk: every loss risks slightly less money than the one before, and after ten straight losses the account is down less than 10%.

Answer: It never happens! Exactly right. With a 1% risk, it is impossible to lose all of the money in your trading account. This is one of the reasons why it is so important to manage your account in terms of percentages instead of just figures, and it puts you in the best possible position for trading success. As you begin to manage your account this way, you will come to the natural conclusion that much is possible and that this trading stuff really works!

There is one rather striking similarity between professional FX traders and professional blackjack players: risk management and trade sizing. Most people don't realise this, but a successful blackjack player will only ever risk a maximum of 1% of his current balance per hand. These players understand the sensational effects of compounding numbers and the consequences of doubling and tripling up. Risking more after a losing hand, or trade, is the first sign that a player, or trader, is no longer losing only money but is also losing the most important battle of all: the battle of the mind.
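The ten-trade table above is just repeated multiplication by 0.99. A short script (the helper name is mine) reproduces it for any starting balance and risk percentage:

```python
def balances_after_losses(start: float, risk_pct: float, n_trades: int) -> list[float]:
    """Account balance after each of n consecutive losing trades,
    risking risk_pct of the *current* balance on every trade."""
    balances = []
    balance = start
    for _ in range(n_trades):
        balance -= balance * risk_pct / 100  # the amount risked shrinks with the account
        balances.append(round(balance, 2))
    return balances

print(balances_after_losses(100_000, 1, 10))
# [99000.0, 98010.0, 97029.9, 96059.6, 95099.0,
#  94148.01, 93206.53, 92274.47, 91351.72, 90438.21]
```

Because each loss is taken on the reduced balance, the sequence approaches zero but never reaches it, which is exactly why the wipe-out "never happens."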
Amateur traders get this risk management thing wrong all the time. They ask themselves, "How much money can I make on this trade?" and focus on potential profits, rather than asking the important question, "How much money can I afford to lose on this trade?" Successful traders use the power of negative thinking: we go into every trade expecting to lose, and we manage it from there! That is why the majority of amateur traders lose money, and quickly. They turn their trading desk into a gambling den, and if you become a gambler, you will come to the conclusion rather quickly that the "house always wins."

Players who are doubling and tripling up are not in control of their situation. They are emotionally driven, and they create an environment that quashes their ability to think and play objectively. This is seriously dangerous ground, because there is now a shift from a proactive approach (in control) to a reactive approach (out of control), and a move from order to chaos. The player, or trader, begins to feel as though the table, or market, owes them something. The reality is that it does not: winning or losing is never personal, it is just feedback based on your actions. If you allow yourself to trade from a mindset or position of revenge, you will never mature into a professional, patient, and successful trader. Far from it.

The professional trader understands logically and objectively that the best way to recoup lost money, or just simply make money, is to decrease risk during losing periods and increase earning potential as the account balance grows. This is done by remaining in control and continuing to do the things that have proven to be successful in the past. Does this make sense? The problem is that, when money is on the table, it becomes very difficult to do things that make sense in your head, and much easier to do things that "make sense" in your gut.
Question: Which would you rather do your thinking with? Please circle your answer below. I am assuming you have chosen to think with your head. Good choice.

Trade Sizing
Now let's look at the practical application of risk management and how we implement our 1% risk rule through trade sizing. Trade sizing is the process of determining the value of the pip, or basis points, you are trading. The ratio itself looks like this:

Trade Size (point value) = Account Risk ÷ Trade Risk

Now that you know how the ratio is arranged, the next step is to understand where to find the account risk and the trade risk so that you can plug them into the ratio. We will do the first example together.

Example 1: Account balance of $100,000

Account Risk
The account risk is simply the amount of money from your account that you are prepared to risk on the trade. You already know that your maximum risk on a single trade is 1%, so the easiest way to find the account risk is to take your account and multiply it by 1%: $100,000 x 1% = $1,000. This is your account risk (AR).

Trade Risk
Once our account risk is determined, we move on to find our trade risk. The trade risk is the number of points that a particular trade has at risk, the distance between the entry and the stop loss:

Trade Risk = Entry – Stop Loss (in points)

Long position on GBP/USD (Cable)
Entry: 1.6500
Stop Loss: 1.6450

The long position suggests that you want the price to move up from 1.6500 to 1.6501, 1.6502, 1.6503, and so on. The stop loss is placed lower than 1.6500 because if the price begins to fall, you begin losing money.

Question: What is the trade risk?
Answer: 50 points (1.6500 entry – 1.6450 stop loss).

Based on the example account size of $100,000 and the information above, you can now implement the 1% rule and work out the trade sizing ratio. All you need to do is take your account risk (the amount of money you are risking on the trade) and divide it by the trade risk (the number of points you are risking on the trade): $1,000 ÷ 50 points = $20 per point. This means that for every point the exchange rate moves in your favour you receive $20, and for every point the exchange rate moves against you, you lose $20.

Question: If the exchange rate moved in your favour 100 points at $20 per point, how much would you make? Answer: $2,000.

Can you see how this works? What you have actually done is taken the amount of your account that you are comfortable risking (1%) and broken it up into the number of points that you are prepared to risk. By doing this, you have ensured that if you lose on a trade, you have lost only 1% of your account.

Let's do another example to make sure you can implement the trade sizing ratio.

Example 2: Account balance of $155,000
Long (Buy) Entry Price: 1.6600
Stop Loss: 1.6575

With this information, you should now be able to trade size your position!
Account Risk =
Trade Risk =
What is your trade size (point value)?

Reward: Risk Must Be a Minimum of 1:1—Set in Stone
The minimum reward-to-risk ratio required to enter a position is 1:1. Never risk 1% of your account unless your target or take-profit limit will provide you with at least a 1% reward. Ask yourself this question: "Does it make sense to risk 1% for the possibility of earning less than 1%?" The answer is no, wouldn't you agree? You would be surprised to discover how easy it is to do something so senseless in the heat of the moment. This is why you must plan ahead and have rules before you enter a trade, when you have not yet emotionally and financially invested in it.
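Both calculations above can be sketched as small helpers. The function names and the fourth-decimal point size are my own assumptions; the book's "points" here are standard four-decimal pips:

```python
def point_value(balance: float, entry: float, stop: float,
                risk_pct: float = 1.0, point: float = 0.0001) -> float:
    """Trade sizing: account risk divided by trade risk (in points)."""
    account_risk = balance * risk_pct / 100
    trade_risk_points = abs(entry - stop) / point
    return account_risk / trade_risk_points

def reward_to_risk(entry: float, stop: float, target: float) -> float:
    """Reward and risk measured in points; must be at least 1.0 (1:1) to enter."""
    return abs(target - entry) / abs(entry - stop)

# Example 1: $100,000 account, long GBP/USD, entry 1.6500, stop 1.6450
print(round(point_value(100_000, 1.6500, 1.6450), 2))  # 20.0 -> $20 per point

# Example 2 (the exercise): $155,000 account, entry 1.6600, stop 1.6575
print(round(point_value(155_000, 1.6600, 1.6575), 2))  # 62.0 -> $62 per point

# Adding a hypothetical target of 1.6700 to Example 1 gives a 4:1 ratio
print(round(reward_to_risk(1.6500, 1.6450, 1.6700), 2))  # 4.0
```

You can use the second helper to check your Example 2 answer: a $1,550 account risk over 25 points of trade risk gives $62 per point.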
Reward-to-Risk Ratio
The purpose of the reward-to-risk ratio is to determine whether or not a trade is worthy of you risking your 1%. The ratio is also there to protect your account. If you consistently risk more on trades than you expect to achieve, the losing trades will not only wipe out the winning trades, but they will also eat into the original capital invested; this is obviously not the goal! You need to flip that around so that the winning trades are always at least as great as the losing trades. In accordance with the reward-to-risk ratio of 1:1, your losing trades decrease your account by a maximum of 1% whereas your winning trades increase your account by a minimum of 1%. By doing this, your winning trades will not only wipe out the losing trades, but they will also grow your original capital invested; this obviously is the goal. This sounds like a recipe for success!

So, how do you work out this reward-to-risk ratio? The easiest way to work it out is to forget about the reward and risk in terms of money and think of them in points instead. You simply need to determine whether a winning trade will add at least as many points to your account as a losing trade would deduct:

Reward (points) = Target – Entry
Risk (points) = Entry – Stop Loss

You should also recognise that the risk element of this ratio is exactly the same as the trade risk of the ratio used to determine your point value. So, once you have your trade risk for the first ratio, you are already halfway through completing your second ratio. Let's look at the first example we used above, only this time we are going to assume a target price as well. It is important to remember that, at this stage, we are just using random prices on the chart to illustrate the process of determining the ratio; in the second portion of the book we will be explaining why and where we are placing our orders.

Example: Long position on GBP/USD (Cable)
Entry: 1.6500
Stop Loss: 1.6450
Target Limit: 1.6700

In this example, the reward is 200 points (1.6700 – 1.6500) and the risk is 50 points (1.6500 – 1.6450), so the reward-to-risk ratio is 4:1. All things being equal, a winning trade has the potential to increase your account by a huge 4%, whereas a losing trade will only decrease your account by 1%. Winning on this trade will pay for your last four losses!

Question: Is 4:1 better than 1:1?

Answer: Yes! Remember, although the 1:1 must be achieved before you enter the trade, anything above this is a bonus. The point here is that the more setups you have with a greater than 1:1 reward-to-risk ratio, the better.

Check this out! Assume that you have a $100,000 trading account and you place ten trades into the market at exactly the same time. Let's also assume that your reward-to-risk ratio is 3:1 on each of these trades. This means that you lose $1,000 (1%) on each of the losing trades and profit $3,000 (3%) on each of the winning trades.

Results:
Trade 1: –1,000
Trade 2: –1,000
Trade 3: –1,000
Trade 4: +3,000
Trade 5: –1,000
Trade 6: –1,000
Trade 7: +3,000
Trade 8: –1,000
Trade 9: +3,000
Trade 10: –1,000

Total losses: –7,000
Total wins: +9,000
Bottom line: +2,000

You have only won on 30% of your trades, but you have increased your account size by $2,000, which is 2%. You may be slightly confused as to why I have suddenly jumped from 1:1 to an example of 3:1. I have done this to dramatically emphasise the point that a reward-to-risk ratio of 1:1 is only the minimum requirement to trade and that, by spotting trading opportunities that provide you with a reward-to-risk ratio of greater than 1:1, your account will grow even faster. Remember, before your profits can pay you an income, they must pay for your losses.

But there's more. If you can manage to place ten trades per month at a 3:1 reward-to-risk ratio and achieve even a poor success rate of 30%, the annual simple interest return on your capital would be 24% ($100,000 + 24% = $124,000). If you allowed the account to compound for a year instead, a 2% gain each month would provide an even larger return of about 27% ($126,824.18). This is the effect of interest on interest, the compounding we are going to talk about in greater detail later in the book. This is tremendous! It should give you just a tiny glimpse of how the implementation of something so simple, yet so effective, can totally revolutionise the results in your trading account. Just wait until we get to the actual strategies….
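The ten-trade scenario and the gap between simple and monthly-compounded growth are easy to verify with a few lines (note that 2% compounded over twelve months works out to roughly $126,824, just under 27%):

```python
# Ten simultaneous trades at 3:1 reward-to-risk, risking 1% ($1,000) each,
# with a 30% success rate -- the scenario from the text.
losses = 7 * [-1_000]
wins = 3 * [+3_000]
print(sum(losses + wins))  # 2000 -> a 2% gain despite winning only 30% of trades

# Repeating that +2% month after month: simple vs compound annual return
start = 100_000
simple = start * (1 + 0.02 * 12)   # 24% simple interest
compound = start * (1.02 ** 12)    # interest on interest
print(round(simple, 2))    # 124000.0
print(round(compound, 2))  # 126824.18
```

The per-trade dollar amounts assume the account is not re-sized between the ten simultaneous trades; with per-trade compounding the figures shift slightly, but the lesson is the same.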
A Quick Recap of the Two Significant Ratios
Let's summarise how each of the above ratios enhances your trading experience.

Trade sizing: this simply determines the value of your "point." This ratio is based on the two most important things: the trade setup and the size of your account. It is massively important to trade size every time because it eliminates temptations that are common to all traders, and it stops you from doing things like this: "Man, I feel good today and I have won on my last five trades! I am going in at $100 a point on this one, baby, yeah!" You do this only to watch the trade go directly against you for a large percentage loss that claims the profits of your last five successful trades. This is gambling. The other side looks like this: "Aww, I have not only had a bad day at work, but the traffic was terrible coming home, and I have lost on my last three trades. Things have not been going my way lately, and I really want this trading thing to work. Maybe I should just go in at $1 a point because I am not feeling very confident right now." You do this only to watch the trade move straight into your take-profit order, and a trade that should have recovered a large portion of your last three losers hardly impacts your trading account. This, too, is gambling. Gambling on a trading account can be defined as making subjective decisions based only on emotions created by circumstances, instead of objective decisions based purely on trade setup and account size.

Strategic-based trading = objective and consistent
Gambling-based trading = subjective and sporadic

Reward-to-risk ratio: the purpose of this ratio is to make it very clear whether or not the trade is worth entering. It makes it possible for your winning trades to cover your losses and pay you an income.

A common misconception is that you must be a mathematical genius to be a successful trader. This is definitely not true, as you can see from these two simple ratios. It is true that there are other aspects of trading that require some basic mathematics, but these can all be computed on a simple spreadsheet. Once you understand how to work out the two ratios above, all of the math is done!

Trading within the Triangle
Defining the long-term structure of your approach to trading puts you in a secure position and enlarges your probability of success. It minimises frustration and encourages a natural maturity within your long-term trading experience. It also removes the burden of trying to "get rich quick" and takes you out of crisis-style trading, which suggests that you need to make a fortune immediately. Unfortunately, most traders who begin the journey of trading the FX market never reach this goal. The crazy thing is that most people do not fail because their strategies are bad; they fail because they lack structure in their approach to trading. This is why, at Knowledge To Action, we teach our students how to "trade within the triangle." Trading within the triangle is the most suitable way to grow your trading seed capital.

The important point is that before there is discretion, there must first be a complete reliance on the rules. The maturity comes first from a respect for the rules of a strategy, and then through an understanding of the purpose and the principles that the strategy is based upon. By trading within the restrictions of the rules consistently, you will begin to develop a very strong understanding of the core principles of the rules. Once you understand why the rules are what they are, you will then, gradually, be able to take certain freedoms and liberties outside of the rules for larger, more professional profits. This type of structure allows your trading to evolve methodically and naturally.

Let me show you an illustration to help demonstrate this point. As you look at the illustration of the triangle, notice that you are moving from "restrictions" to "freedoms" and not the other way around. Consistency is the key. A successful journey up through the triangle requires strong consistency in two major areas.

Consistency of Strategy
One of the biggest mistakes that traders make is trying to learn how to trade too many different strategies at the same time. They do this with the belief that more strategies mean more trading opportunities, and that more trading opportunities mean more money.
In theory this sounds somewhat logical, but in practice this approach fails miserably; it simply creates a chaotic trading experience and leaves you with little success to build upon. Instead, take the strategies that are given in this book and commit all of your trading time to mastering them. They are sufficient for big-money trading. You must get consistent with the strategies in this book and find the value that each brings. As you trade the strategies consistently, you will develop an instinct to execute them more efficiently and effectively. From the foundation of successful trading within the rules of these strategies, you can begin to move up through the triangle and exercise more freedom and discretion to generate larger profits. Remember, though: the key is consistency.

Consistency of Time
Consistency of time is not only the amount of time you spend in front of the trading screen, it is also the allocated timeslot you spend trading. This is especially true for intraday trading (entering and exiting a position in the same day). Time in front of the trading screen builds a log of visual experiences that you can call upon in the future to help you execute similar trades for a profit, or to stop you from entering trades with a lower probability of success. By committing not only a certain amount of time, but also a certain slot of time, you develop the subconscious skills of reading and feeling the market and its shifts in momentum.

Spending too many hours in front of the trading screen normally causes not only anxiety, stress, and frustration, but also poor trading results. Successful trading requires consistency, not a devotion of 14–18 hours a day! At Knowledge To Action, we recommend our students spend a maximum of one to two hours per day in front of the trading screen, between 10:00 A.M. and 12:00 P.M.

Trading during the same timeslot each day builds your understanding of "currency characteristics." Imagine that you have just met someone for the first time and you decide that you are going to spend two hours, during the same time slot, every day with this person for the next year. On the first day, you know nothing at all about the person. If you had to guess what they were going to have to eat for lunch on that first day, you would have no idea; you don't even know what time they eat their lunch! You wouldn't know if they were very active during this timeslot or whether they used this time to rest and relax. You are completely in the dark because you have never spent any time with the person. As time goes on, however, you begin to develop an understanding of the person's characteristics. You could predict with a high level of accuracy what this person would have for lunch on any given day. You would also know the things that cause excitement, laughter, anger, pain, stress, sympathy, and compassion, the things that "move" this person during this period of time. Spending the same amount of time, during the same timeslot, with the same person each day gives you the ability to predict what that person is going to do and how that person will react to certain events.

That is exactly what you want to achieve in your FX trading: the ability to recognise what moves the currencies and to what extent. If you do this consistently, the price action will become predictable and your confidence level will increase; you will know exactly when the currency is moving and when it is not. This puts you in the best possible position to begin working your way up the triangle, and it is with this type of assurance that you can continue working your way up through it.

Amateurs do not understand this concept of time consistency within the triangle and, therefore, normally make the mistake of not designating a timeslot to trade. They may spend a few hours in the early morning, then half an hour in the late afternoon, then maybe a whole day tracking the currency pair, followed by a late evening session on a different night. They end up with nothing that is really tangible or productive; the result of all of that time is just a vague idea of the currency pair they are trading. Another issue most amateurs have is trying to trade and learn too many different currency pairs at once. This causes problems even if they are trading during a specific timeslot, and it is the result of having too many freedoms. You must spend the same amount of time, during the same timeslot, in the triangle, and you must trade the same currency pair each day. This means that you are going to be restricting yourself from learning the movements of every single currency pair in order to focus on just a few, so that you can master them: restrictions to freedoms.

Trading within the triangle can be summed up as moving from restrictions to freedoms. Regrettably, a lot of traders work their way through the triangle in the opposite direction, from freedoms to restrictions. When too many liberties are taken too early, there will suddenly be a tightening of policy and an uncomfortable journey from the freedoms to restrictions. That is unnatural and discouraging, and it causes tremendous amounts of frustration as it results in low confidence, lack of belief, and, ultimately, the loss of money. Every profitable trader has strategies that clearly define the placement of the entry, stop loss, and profit target for each trade.
Without a strategy there is no foundation to justify any potential trading opportunity. In Part 2 of this book, you are going to be learning two proven and specific FX trading strategies. You can make a lot of money from these strategies if you are willing to trade responsibly. Initially, the rules of the strategies will be restrictive and will allow no room for discretion. You must trade the strategies like this until you build a solid foundation of profit and knowledge. This is not only a powerful trading approach, it is also very safe and secure, because you are following the rules of the strategy perfectly. From this point, you can then begin to exercise a certain level of discretion, or freedom, as you trade, and you can find quality trading opportunities above and beyond the basic rules to generate larger amounts of professional profit. This establishes a very strong foundational knowledge of the currency market on which to build long-term trading success, and it is the way to build a reliable income stream from trading. So, whether you are just beginning your FX trading journey or simply continuing it, make sure you remember that it is an ongoing process: know where you are in the triangle.

The bottom of the triangle is very restrictive, but it is safe. The very top of the triangle provides endless amounts of freedom, judgment, and discretion to do what you want, but it is not very safe for the beginner. Here is the principle: the only way to exercise judgment above and beyond the rules at the top of the triangle is by first learning to submit to and obey the rules on your journey up through the triangle. This is the methodical process of becoming a mature, responsible, strategic trader.

Take the example of a musician. A good musician understands exactly how to "break the rules" of music theory, yet still produce a good sound. The only reason he or she is able to do this is because he or she first learned the rules and why they exist. From there, he or she can go on and create musical opportunities above and beyond what the basic rules of music theory permit. The same is true for a trader. A lot of traders are terrified of becoming robotic within their trading and therefore very quickly forsake the rules of well-established strategies; this is a mistake. Know the rules and follow them until you have an instinct for them. As you learn the strategies in this book, you must commit to comply with them.

Flexible or Chaotic?
Profitable traders understand the importance of establishing normality within their trading, as well as the difference between chaos and flexibility. Without normality there can be no flexibility, only chaos. Let me demonstrate: consider the horizontal line below to be The Norm. Because we first have The Norm, we can have The Flex. Refer to the example below and notice how The Flex is always away from the norm and how it always returns back to it. Something can only flex when it first has an established position. This is how you use strategies to trade: those "flexes" are your freedoms as you work up through the triangle. This concept goes on to emphasise the journey of trading within the triangle.

Unfortunately, most people trade chaotically in the name of flexibility. They believe that sticking only to the rules hinders profit. This is a very short-term view; successful trading is for the long term. Have a look at the chaotic chart below: notice how there is no stability or consistency with this approach. The trader's approach is dynamic in that it is constantly changing. Chaos will never magically result in order; order must be established. Too many freedoms create chaotic trading, and chaotic trading will wipe your account out fast. If you are willing to take too many freedoms with the rules of the strategies, you are more likely to take too many freedoms with the risk management and trade sizing rules, and this will have a devastating effect on your account. I am telling you this now because there is a strong possibility that you will learn the rules of the strategies from the following chapters and, in no time, apply discretion far beyond your familiarity with the strategies. The results are always the same: early loss of money, followed by lack of belief and frustration. If the trader has the drive to continue pursuing the trading venture, there will then be that uncomfortable journey from freedoms back to restrictions.

A consistent and proven FX trading strategy is precious to the trader because, if used properly, it will become a long-term source of income that will produce profit time and time again. If you are wise with it, it is going to produce profit. Approach each strategy as if it were a new person you are meeting: take some time to get to know it thoroughly.

Part 2
Strategy—The Ultimate Implementation

A good strategy is a trader's best friend. If you stick close to your strategies and do what they say, they will make you money; if you ignore them, they will give you nothing. In Part 2 of this book, I am going to introduce you to two of my best friends.

Chapter 5
THE ANGRY BEAR
Chapter Objectives:
* Understand the rules of the strategy
* Understand how to implement the strategy for consistent profits
* Make the strategy an integral part of your daily trading plan

There are all kinds of reasons for market crashes, but at the root it's all about herd psychology. The point here is that markets are mirrors of the human psyche: like people, they are prone to mood swings, they suffer from depression, and sometimes they can experience complete breakdowns. A market's mood can change from euphoria to gloom, from greed to fear, sometimes due to nothing more than a change in the wind. In a bull market (that's when the base currency is being purchased), even the smart traders can succumb to what the former chairman of the Fed, Alan Greenspan, famously termed "irrational exuberance." But when the herd changes direction, the bulls turn to bears and the buyers turn to sellers. A few traders get scared, that fear translates to the whole herd, and they all take off. The rest of them don't know why they are scared; they just feel that fear and start dumping currency. Fear overwhelms all rational thinking—until it's over. The herd stampedes, and when that happens there is an opportunity to make huge profits.

The Angry Bear strategy is a fantastic intraday strategy that captures the early, negative swing from buyers to fear-driven sellers. This is a short-term trading strategy that opens and closes within hours, not days. It is, therefore, intraday trading at its best, and refers to trades that are opened and closed within the same day. Welcome to the Angry Bear.
Exploring the Strategy
To profit from this FX strategy consistently, you must fully understand each aspect of the strategy. To do this, we are going to ask the questions who? when? what? where? and why? in relation to the strategy. Understanding each aspect not only provides you with the standard rules for the strategy, but also paints the big picture of what the strategy is all about and why it works. From here, you can build an incredible amount of skill in executing the strategy profitably and consistently as you work your way up through the triangle. Let's start off with the who? part of the strategy and, from there, work our way through each of the other questions.

Who Are You Trading?
Trading FX is simply trading one currency's economy against another (refer to Chapter 2, in the "Basic Terminology" section, for a reminder of this concept). Therefore, the question "who are you trading?" refers to the currency pairs that we use to trade the strategy. There are two particular currency pairs that we trade the Angry Bear strategy with. The first is the EUR/USD (Euro against the U.S. Dollar), and the second is the GBP/USD (Great Britain Pound against the U.S. Dollar). These currency pairs have several similarities, but it is really only important that you understand the main two, which are:

1. Each pair has a major European currency as its base: the Euro and Sterling are the currencies of the two largest economies in Europe.
2. Each pair has the U.S. Dollar as its terms currency. The U.S. Dollar is the most commonly traded individual currency in the world; over 80% of all foreign exchange transactions include the U.S. Dollar.

These currency pairs are, therefore, among the most liquid and most commonly traded currency pairs in the world. In fact, the EUR/USD pair accounts for nearly one third of the daily FX volume, whereas the GBP/USD pair accounts for nearly one quarter of it.
You trade this strategy with the EUR/USD and the GBP/USD only—no other pairs permitted. Similarities in the nature of the pairs mean similarities in the character of the pairs—this is important because it tells us that their behaviour is likely to be the same, and is therefore the reason why this strategy can be traded with either pair.

When Are You Trading?

The reason that this strategy is specific as far as when it is traded is due to the behavioural characteristics of the currency pairs during this period of time. The strategy essentially captures the early rejection of a bullish trading session of either the Euro or Sterling against the U.S. Dollar—there will be more on this in the following chapter.

Trade between 6 a.m. and 10 a.m. GMT

What Are You Looking For?

A high test bar is what you are looking for, because it demonstrates that there was a bullish trading session immediately followed by a selloff of the Euro and/or Sterling against the U.S. Dollar. Remember that this high test bar must be located between 6 A.M. and 10 A.M.

Two specific features are present in a high test bar:

1. The open and the close must be in the lower half of the bar. This shows you that price has overextended to the high—this indicates the bullish move—and demonstrates that the price rose and fell very quickly. This shows the rejection of the bullish move and the setup of the Angry Bear.
2. The bar must be large relative to the bars that precede it.

One thing that is not important for the high test bar when trading this strategy is the colour of the bar. This means that the high test bar can be a seller (close is lower than the open) or buyer (close is higher than the open) bar—it makes no difference. What you are looking for is a solid test and rejection of the high, not whether or not the close is higher or lower than the open.

Plus, the high test bar is also what you use to define your entry and stop loss on the trade.
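The two high test rules can be sketched as a small check over bar data. This is a hypothetical sketch, not the authors' code: the bar fields and the "large relative to the preceding bars" threshold are our assumptions, since the book defines "large" only by eye.

```python
def is_high_test_bar(bar, preceding_bars, size_factor=1.5):
    """Check the two high test rules described above:
    1. open and close both in the lower half of the bar's range;
    2. bar range large relative to the preceding bars.
    `size_factor` is an illustrative threshold, not from the book."""
    bar_range = bar["high"] - bar["low"]
    if bar_range <= 0:
        return False
    midpoint = bar["low"] + bar_range / 2
    # Rule 1: open and close in the lower half (bar colour is irrelevant).
    if bar["open"] > midpoint or bar["close"] > midpoint:
        return False
    # Rule 2: range large relative to the preceding bars.
    avg_range = sum(b["high"] - b["low"] for b in preceding_bars) / len(preceding_bars)
    return bar_range >= size_factor * avg_range

# Illustrative hourly bars (not taken from the book's charts):
candidate = {"open": 1.5320, "high": 1.5390, "low": 1.5300, "close": 1.5310}
recent = [{"high": 1.5330, "low": 1.5310}, {"high": 1.5335, "low": 1.5305}]
print(is_high_test_bar(candidate, recent))  # True
```

Note that the check deliberately ignores whether the close is above or below the open, matching the rule that colour makes no difference.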
The following will demonstrate what a high test bar actually looks like:

High Test Bar (Seller)

High Test Bar (Buyer)

Never underestimate the power of these bars—especially on the hourly chart. They give you a tremendous insight into what the next few hours of trading hold. Although we are trading the high test bars on the hourly chart, they are not restricted only to this time frame. You can have a high test on any time chart imaginable. This strategy, however, exploits the power of the high test from the hourly perspective and has proven, time and time again, to generate large profits. This is why the strategy is so effective.

Below is a midway recap of what you have learned so far of this strategy.

The chart above should bring together the rules of the strategy so far. We have a high test bar, which started at 6 A.M., on the GBP/USD hourly chart! Remember that the high test bar is showing you that there was heavy buying activity on the Sterling against the U.S. Dollar. We know this because the 6 A.M. bar is larger in relation to the bars that precede it. We also know that this buying of Sterling was rejected dramatically within the same hour because the close is in the lower half of the bar. In this example, the close was also the low of the bar—very powerful. You can see the result that this had on the rest of the European morning session: a continued selloff of Sterling against the U.S. Dollar, which was triggered by the high test on the 6 A.M. bar.

This strategy is unique in that you will only ever trade it short (sell). There is never a long (buy) scenario—hence the name Angry Bear (there is no happy bull equivalent!).

Find the high test bar

Let's dig deeper into the strategy by asking the next question: where?

Where Do You Place Your Entry and Exit Orders?

The Entry Price

The entry point on this strategy is done through a market order, as opposed to an instant execution. The difference between a market order and an instant execution trade is that a market order instructs your broker to trigger you into a trade at a certain price, whereas an instant execution entry is placed live at the market touch price. The location of the sell market order is 1 pip below the low of the high test bar. Refer to the figure that follows:

We are treating the low of this high test bar as a support level and, therefore, the very moment it breaks its low (support) is the most efficient time to enter the trade. This break is especially strong given that the price action is pushing down very quickly after a rejection of the upward push. Trading this type of entry with a market order is the best way to hit the exact entry point every time.

The Stop Loss

The stop loss is also done through a market order. This creates an automatic exit and removes the temptation of moving your stop loss further and further away if the position begins to turn against you. That is a common mistake amateurs make. We call it the "dynamic stop loss management system," and we tell all of our students to stay away from it!

As far as the stop loss is concerned, we are treating the high of the high test bar as a resistance level, and we are allowing a 5 pip buffer. The actual location of the stop loss is 5 pips above the high of the high test bar. If the resistance of the high is broken by 5 pips, we are assuming that we are wrong on the direction of the move and need to exit the position. Refer to the figure that follows:

It is important to remember that the difference between the entry price and the stop loss price is your trade risk. This is the number that is plugged into the two ratios that you learned about earlier: trade sizing and reward-to-risk. The following figure shows the entry, stop loss, and, as a result, the trade risk—all based around the high test bar, which is the key to this strategy.
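The entry, stop loss, and trade risk are simple arithmetic on the high test bar. A minimal sketch of that arithmetic, assuming a 0.0001 pip size for EUR/USD and GBP/USD (the function name and example prices are ours, not the book's):

```python
PIP = 0.0001  # pip size for EUR/USD and GBP/USD

def angry_bear_orders(high_test_high, high_test_low):
    """Sell entry 1 pip below the low; stop 5 pips above the high (short only)."""
    entry = round(high_test_low - 1 * PIP, 4)
    stop_loss = round(high_test_high + 5 * PIP, 4)
    # Trade risk = distance between entry and stop, expressed in pips.
    trade_risk_pips = round((stop_loss - entry) / PIP)
    return entry, stop_loss, trade_risk_pips

# Illustrative prices, not from the book's chart:
entry, stop, risk = angry_bear_orders(high_test_high=1.5390, high_test_low=1.5340)
print(entry, stop, risk)  # 1.5339 1.5395 56
```

The trade risk figure is what gets plugged into the trade sizing and reward-to-risk ratios from the earlier chapters.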
Taking Profit

To take profit is to exit the trade with more money in your account than there was before you entered the trade—this is, obviously, what you want! With the Angry Bear strategy, you do not place a take-profit order to collect your money at a certain level. Instead, you move your stop loss order down to lock in the profit until the position eventually comes back against you and triggers your stop loss—for a profit! This is known as "trailing your stop," and the major benefit to doing it is that you are locking in profit as you trail without removing yourself from the trade and removing the possibility of generating more profit from the trade. The longer it runs the better!

For this strategy, you begin trailing your stop loss order to the high of each bar from the completion of the fourth bar (the bar you enter on is bar one). Remember, bar one can be anywhere between 6 A.M. and 10 A.M. So, once the fourth bar has formed, move your stop loss from the high of bar 1 plus 5 pips to the high of bar 4 plus 5 pips. Continue to do this on each bar from this point on until the price triggers your stop loss. The reason we hold the position open for four hours before trailing the stop loss is to allow the Angry Bear to run without interruption before the U.S. session gets involved.

Look at the following figure for an example of how to trail your stop loss effectively. This example is a zoom in on the same chart that was used in a figure from earlier in this chapter. Notice that you wait until the completion of the fourth bar before you start trailing your stop loss and, from then on, it is after each bar until your stop loss is triggered. Here is a practical example for you: if bar six would not have hit your stop loss, you would have moved the stop loss to the high of that bar, and so on.
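The trailing rule—hold for four bars, then move the stop to each completed bar's high plus 5 pips until the stop is hit—can be sketched as a loop over hourly bars. This is a hypothetical simulation under our assumptions (bar data shape, and checking the stop against each bar's high before trailing), not the book's code:

```python
PIP = 0.0001  # pip size for GBP/USD and EUR/USD

def trail_angry_bear(bars, initial_stop):
    """Simulate the trailing stop on the short position.
    `bars` are hourly bars from entry onward (index 0 = bar one, the
    entry bar). Returns the stop price at exit, or the resting stop
    level if the trade is still open after the last bar."""
    stop = initial_stop
    for i, bar in enumerate(bars):
        if bar["high"] >= stop:           # price touched the stop: exit here
            return round(stop, 4)
        if i >= 3:                        # from completion of the fourth bar...
            stop = bar["high"] + 5 * PIP  # ...trail to this bar's high + 5 pips
    return round(stop, 4)

# Illustrative bar highs (not the book's chart):
bars = [
    {"high": 1.5385}, {"high": 1.5360}, {"high": 1.5340},
    {"high": 1.5320}, {"high": 1.5300}, {"high": 1.5335},
]
print(trail_angry_bear(bars, 1.5395))  # 1.5305
```

In this example the stop is left untouched for bars one to three, trails down behind bars four and five, and is finally triggered by bar six—locking in the move rather than exiting at a fixed target.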
The next figure shows the profit on the trade as the difference between the trigger of the entry and the trigger of the trailed stop loss order.

Here is an exercise for you:

1. Circle the key high test bar that triggers the trade. Remember that our key high test bar can be anywhere between 6 A.M. and 10 A.M.
2. What is the number of the hourly bar that eventually stops you out? _________________

Here are the answers: The trade is not ended until the 15th bar, because that is the first time that an hourly high exceeds the high of a previous bar. On this particular setup, the high test that triggers doesn't occur until the last of the four possible bars, which is the 9 A.M. to 10 A.M. bar. Notice also that the 7 A.M. bar was a high test bar that didn't actually trigger in the trade—that's fine. This is also a good example of why you need to wait for the low of the high test bar to be broken before you enter.

Strategy Summary

Who to trade? EUR/USD (Euro) & GBP/USD (Cable)
When to trade? 6 A.M. to 10 A.M. GMT, hourly chart
What to look for? High Test Bar (black or white)
Where to enter and exit? Enter on low minus 1; Stop Loss on high plus 5; Trail Stop from 4th bar plus 5
Why does this strategy work? Time and Pair-Specific Currency Behaviour

Remember that the key to success with this strategy is consistency. Look for the setup every trading day and trade it 100% according to the rules. It will make you money if you trade it correctly.

Chapter 6 FOREX ECLIPSE

To a professional FX trader, a Forex Eclipse—an alignment of charts, all of the same currency pair—is just as beautiful as an astronomical eclipse is to a seasoned astronomer. Let's check it out!

Exploring the Strategy

We are going to explore this strategy in the same format as we did for the Angry Bear, which is to ask the questions who?, when?, what?, where? and why? in relation to the strategy. This will make the learning process much easier for you. The first question is: who are you trading?
Who Are You Trading?

Remember that trading FX is simply trading one currency's economy against another. Therefore, the question of "who are you trading?" is referring to the currency pairs that we use to trade the strategy. You will use the following four currency pairs to trade this strategy:

GBP/USD—Great Britain Pound with the U.S. Dollar, which is also known as "Cable"
EUR/USD—Euro with the U.S. Dollar, which is also known as the "Euro"
USD/CHF—U.S. Dollar with the Swiss Franc, which is also known as the "Swissie"
USD/JPY—U.S. Dollar with the Japanese Yen, which is also known as the "Yen"

These four pairs are often referred to as the "4 Majors" because they are the four most commonly traded pairs on the planet—large participation creates predictable price action. Liquidity is important because it demonstrates that traders are trading large volumes of the currency. Because of this, specific price patterns form. That is what we want.

The U.S. Dollar is referred to as the global central currency. This is because more transactions and major global resources are denominated or valued in U.S. Dollars than any other currency. Take for instance the price of oil: oil is valued at a certain number of U.S. Dollars per barrel. The global influence of the U.S. Dollar has made it the most liquid individual currency in the world.

Trade only the 4 Major pairs: GBP/USD, EUR/USD, USD/CHF, USD/JPY
When Are You Trading?

The Forex Eclipse is similar to an astronomical eclipse in that it can occur at any time. You can trade this strategy whenever you like within the week. The FX market is open from Sunday night until Friday night, and there is no specific time of the day to trade this strategy. This is because the strategy is not based around time but, instead, around a specific setup. This is different from the Angry Bear strategy, which is a setup and time-specific strategy.

Trade anytime you see a setup

What Are You Looking For?

The foundation of this strategy is the agreement or alignment of the daily, 4-hour and 1-hour charts. The "agreement" among these timeframes refers to the location of the price action in relation to the 50 EMA. When trading this strategy, we do not consider the 200 EMA; we are only concerned with where the price is in relation to the 50 EMA. Is it above it, or is it below it? That is the key. Once we have the agreement of these three timeframes, we look to the 5-min chart to determine our entry.

Let's consider a possible Forex Eclipse setup on the GBP/USD. The first chart we need to look at is the daily chart. Remember that the daily trend is the strongest indication of direction for the currency pair. This chart ultimately tells us whether or not we will be going long or short. If the price is below the 50 EMA on the daily chart, we know that we will be going short when we get the setup, and vice versa.

We notice that the daily chart is in a nice downward trend and that price is below the 50 EMA. This means we are looking for a potential short position. Next, we look at the 4-hour chart to see if the price action is also trending down with price below the 50 EMA. The price action is below the 50 EMA on the 4-hour chart and there is a clear downward trend. We now have agreement between the daily and 4-hour charts. Now it is time to look at the 1-hour chart.
The price action is also below the 50 EMA on the 1-hour chart. We now have agreement on the three major timeframes.

But what if the price on the 5-minute chart is above the 50 EMA and disagrees with the other charts? What would that be showing you? I would like to take this opportunity to introduce you to a little secret. The majority of successful trading strategies are based on identifying the price when it is "out of place." When you notice that the price is where it should not be, and you know where it should be, you can trade the move or the journey from "out of place" to "in place." To be able to do this, you need to understand where the price should be.

Let me ask you a question: If the daily chart is trading below the 50 EMA, the 4-hour chart is trading below the 50 EMA, and the 1-hour chart is also trading below the 50 EMA, where do you expect the 5-minute chart to be trading—above or below the 50 EMA? The obvious answer is that you expect the price to be below the 50 EMA. Remember that time is king. The daily chart gives a stronger signal of the overall market view than the 4-hour chart, and the 4-hour chart gives a stronger signal than the 1-hour chart. Coming back to the example, if the price action is below the 50 EMA on all of its major timeframes, but it is above the 50 EMA on the 5-minute chart, then the price on the 5-minute chart is considered out of place.

Look at the 5-minute chart that follows: This picture is a perfect example of exactly what you should be looking for when you are trading this strategy. Although the general push of the price action is downward, there are small areas of temporary disagreement. These are our entry signals. This strategy captures the 5-minute chart moving back into place. The strategy is traded this way because we know that the strength of the downward pull from the larger and stronger timeframes is likely to pull the 5-minute chart below its 50 EMA.

The figure that follows is the first area of temporary disagreement from the previous figure. Notice the close of the 5-minute bar on top of the 50 EMA. It is not good enough for the price to just trade above the 50 EMA; it needs to close above it. This is exactly what you need to see. The next section deals specifically with where to enter and exit. For now you just need to know what the setup looks like.

In summary, you are looking for the daily, 4-hour, and 1-hour chart to all agree on where they are trading in relation to the 50 EMA—above or below, depending on the setup. You then need to see the 5-minute chart demonstrating areas of temporary disagreement with the other charts by crossing the 50 EMA to the opposite side and closing. This is the trade setup.

Agreement among the 3 major timeframes with a temporary disagreement on the 5-minute chart

Where Do You Enter and Exit?

Entry

This strategy is traded with an instant execution style entry. This means that you hit sell or buy the moment you want to enter the trade—no entry orders. Next, we are going to define the actual entry.

As we are going short in this example, the entry is described as the moment that the price action crosses back down into the 50 EMA from above. Remember that the larger, stronger charts are all trading below the 50 EMA. Notice that you are not waiting for the bar that crosses through the 50 EMA to close—that would be too late. You are entering the moment it crosses down through the 50 EMA.

Here is the second entry signal zoomed in from above. This shows how the price retraces back up to the 50 EMA, closes on the other side of it (which confirms the disagreement), then breaks back through the 50 EMA to come into line with the stronger time charts. This is the beginning of the 5-minute chart coming back into agreement with the other charts. This signals your immediate entry into the trade.

Here is the same picture that we saw earlier, except that it also includes where to enter the trade. This figure demonstrates the three-step process that takes place on the 5-minute chart. Step one shows the price action retracing back up to the 50 EMA. Step two demonstrates the temporary disagreement through the close of the 5-minute bar on top of the 50 EMA—the 5-minute chart out of place. Step three confirms the entry as the price action crosses back down into the 50 EMA. This is where the price comes back into agreement with the larger time period charts.

Fill in the boxes to describe what is happening and draw a line to show where the entry is. Your chart should look like this:

This is the simple, three-step process of entering the trade. Next, we must define our stop loss and target prices.
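The timeframe-agreement check can be expressed numerically. A hypothetical sketch under our assumptions—the EMA formula is the standard exponential moving average, but the data structures, timeframe keys, and synthetic prices are illustrative, not from the book:

```python
def ema(prices, period=50):
    """Standard EMA with smoothing factor 2 / (period + 1)."""
    k = 2 / (period + 1)
    value = prices[0]
    for price in prices[1:]:
        value = price * k + value * (1 - k)
    return value

def eclipse_setup(closes_by_tf):
    """Return 'short', 'long', or None for a Forex Eclipse setup.
    Short setup: daily (D1), 4-hour (H4) and 1-hour (H1) closes below
    their 50 EMA while the 5-minute (M5) close is temporarily above
    its 50 EMA — the 5-minute chart is "out of place" (and vice versa)."""
    above = {tf: closes[-1] > ema(closes) for tf, closes in closes_by_tf.items()}
    if not above["D1"] and not above["H4"] and not above["H1"] and above["M5"]:
        return "short"
    if above["D1"] and above["H4"] and above["H1"] and not above["M5"]:
        return "long"
    return None

# Synthetic downtrending closes with a 5-minute pop above its EMA:
closes = {
    "D1": [1.60 - 0.001 * i for i in range(60)],
    "H4": [1.57 - 0.0005 * i for i in range(60)],
    "H1": [1.555 - 0.0002 * i for i in range(60)],
    "M5": [1.55 - 0.0001 * i for i in range(59)] + [1.56],
}
print(eclipse_setup(closes))  # short
```

This only flags the setup; the actual entry, per the rules above, is taken live the moment the 5-minute price crosses back through the 50 EMA, not on a bar close.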
Stop Loss

The stop loss placement for this strategy is very simple. When you are going short, you place the stop loss 20 pips above your entry, and when you are going long, your stop loss is 20 pips below your entry. Remember: when you are going short, your stop loss is above your entry, and when you are going long, your stop loss is below your entry. It is as easy as that!

This shows an example of a short entry price of 1.5305. The stop loss price is 1.5325. This is exactly 20 pips above your entry (1.5305 + .0020 = 1.5325).

Let's assume that you were going to trade this strategy long and your entry price was 1.5305. What would the exact price of your stop loss be? __________________________________________________

The answer is 1.5285 (1.5305 – .0020 = 1.5285).

Target

Let's take a look at how to target this trade for a profit. The target on this strategy is 30 pips from your entry price. Remember: when you are going short, your target is below your entry, and when you are going long, your target is above your entry. You know the entry and the stop loss prices; now see if you can fill in the exact target price. __________________________________________________

The target price is 1.5275. This is exactly 30 pips below your entry (1.5305 – .0030 = 1.5275). This gives you an automatic 1.5:1 reward-to-risk ratio every time you trade this strategy (30 pips of reward / 20 pips of risk). It is as easy as that!

This strategy is meant to be a quick "in and out," and if your entry is executed correctly you are normally out of the trade with profit in 10–30 minutes. Refer back to the figure above that shows the four entry signals. You will see that each entry signal results in a successful trade, and none of it takes more than 30 minutes to bank. Had you been spending the afternoon trading, you would have collected 120 pips from GBP/USD alone! The next figure is an expansion of the idea from earlier.

Enter when the price action breaks back into the 50 EMA. 20 pip risk and a 30 pip target

Why Does This Strategy Work?

When a currency pair has a dominant trend that is confirmed by its daily, 4-hour and 1-hour time charts, the probability of the trend continuing is very high. During periods of short-term retracement, the 50 EMA on the 5-minute chart naturally acts as a guideline for the price action to retrace back to before continuing on with the dominant trend. This retracement is the result of short-term profit taking in the market, not a signal of the changing of the trend.

Exploits the power of timeframe and trend agreement
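The 20-pip stop and 30-pip target arithmetic above can be checked with a few lines. A minimal sketch, assuming a four-decimal quoted pair; the helper name is ours, not the book's:

```python
PIP = 0.0001  # pip size for a four-decimal quoted pair

def eclipse_exits(entry, side):
    """Stop loss 20 pips and target 30 pips from entry ('long' or 'short')."""
    sign = 1 if side == "long" else -1
    stop_loss = round(entry - sign * 20 * PIP, 4)  # stop on the losing side
    target = round(entry + sign * 30 * PIP, 4)     # target on the winning side
    return stop_loss, target

print(eclipse_exits(1.5305, "short"))  # (1.5325, 1.5275)
print(eclipse_exits(1.5305, "long"))   # (1.5285, 1.5335)
```

The short case reproduces the worked numbers in the text (stop 1.5325, target 1.5275 from a 1.5305 entry), and the fixed 30/20 distances are what give the automatic 1.5:1 reward-to-risk ratio.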
Strategy Summary

The chart below is a quick summary of the strategy.

Who to trade? EUR/USD (Euro) & GBP/USD (Cable), USD/CHF (Swissie) & USD/JPY (Yen)
When to trade? Anytime—this strategy is not time specific
What to look for? Agreement of daily, 4-hour and 1-hour charts with disagreement on the 5-minute chart
Where to enter and exit? Enter as price breaks back through the 50 EMA; Stop Loss 20 pips; Target 30 pips
Why does this strategy work? Catches the natural flow of a trending pair

Although the strategy in this chapter was demonstrated through short examples, it can be traded just as successfully going long. For a long setup, you would have to have the daily, 4-hour, and 1-hour charts all trading above the 50 EMA, with the 5-minute chart dipping down below the 50 EMA to indicate temporary disagreement with the other charts. You enter the trade as the price action crosses back up through the 50 EMA. Your stop loss is 20 pips below your entry and your target is 30 pips above your entry.

Remember that the key to success with this strategy is consistency. Look for the setup every trading day and trade it 100% according to the rules. It will make you money if you trade it correctly.

Benefit destruction

I had a conversation once with a man who told me that, in the early 90s, he had a chance to buy a piece of land that would have allowed him to build 10–15 houses on it. He went on to tell me that, had he bought the land, he would now be a multimillionaire—but he didn't and he isn't. Instead, twenty years later, he is still working and paying off the mortgage on the single property he decided to buy in place of the piece of land.

The U.K. housing market experienced a massive boom that began in the early 1990s and lasted nearly twenty years. This tells us two things: First, anyone who invested heavily in the early 90s will, in fact, have made a considerable amount of money from (or witnessed growth in) their property portfolio (assuming they had managed it relatively well). Second, it tells us that anyone who declined the investment opportunities of the early 90s will be financially worse off than their counterparts (unless they generated large amounts of money elsewhere).

He made it very clear that he recognised that he had a great opportunity to make money. He continued by saying, "I knew that I had a great opportunity before me, but I just didn't have the money to get in."

Question: Was his focus on the cost or on the benefit of the transaction?
Answer: The cost.

His lack of action was not determined by his belief that the benefit was unreasonable, unobtainable, or even unrealistic. Unfortunately, it was his focus on the cost that ultimately destroyed any energy being released into the benefit.
Because of this, he missed out on a shot at becoming a millionaire. What a tragic story… but it happens all of the time. People believe they are thinking rationally or even objectively when, in fact, they are being swayed by emotions of fear and anxiety. How do we know this? We know this because his decision about whether or not to part with his money was based on the cost of the transaction. His focus looked like this:

Now, let's take the same example and look at the alternative mentality, which I call benefit construction.

Benefit construction

It is important to remember that we are considering the same person with the same scenario, i.e., the costs and benefits are exactly the same. The only difference is the focus. If the man felt like he didn't have the money in the first example, he will feel the same way in this example. Feelings have to do with emotions, but the feeling or fear should have nothing to do with the action! The deciding factor is whether or not he will allow that feeling of not having the money to stop him from pursuing the investment opportunity. When you invest, or even trade, you must remove emotions and function objectively. Once you are thinking objectively, you are then able to properly weigh the positives and negatives of the potential transaction. You are also able to focus on the benefits of the potential transaction and construct a positive result. Had the man taken this approach, he would have come to the conclusion that the benefit outweighed the cost, and he would now be a millionaire. His focus should have looked like this:

Now we must be realistic. I am not saying that you should neglect the cost and only consider the benefit. I am saying that once the focus is on the benefit, you are able to put the cost in the correct context in relation to the benefit and make objective financial decisions.

So why is this important? There are two reasons.

1. Once you have more money, you will naturally have more investment opportunities—great! With these opportunities comes the process of weighing the costs and the benefits to increase your financial position. To do this effectively, you must be an investor who is able to focus clearly on the potential benefits so that you can place the costs in their appropriate places. We tell this to our students all of the time. The reason I am telling you this now is because I believe you are actually going to make money once you have read this book—be prepared!

2. Privilege and fulfilment. One thing I have noticed about many of our new students is that they have a belief that once they make money from trading they will just sit at home and trade for the rest of their lives, not doing a great deal of anything else. This rarely ever happens because, once they begin making money, they realise that trading is not the end—it is the beginning. Isn't this the reason you are looking for financial increase?

Up until this part of this section, I have focused mainly on the cost and benefit concept in terms of investment or money-generating opportunities, but there is another context to discuss this concept: privilege and fulfilment.

Here is a very small and practical example of the point I am making: At some point in the past you have probably walked into a shop and wanted a particular article of clothing. But after looking at the price tag, you decided that you would have to settle for something that costs less. You were forced to focus on the cost, to the detriment of the benefit, because of a lack of money. And it makes sense to do that here, because you shouldn't spend money you don't have on clothes. This is an example of lack of privilege and fulfilment because of insufficient funds. If you had more money, you would be able to focus more on the benefit and less on the cost, and you would be able to buy the article of clothing. That is an example of privilege and fulfilment—doing and having what you want.

By increasing your wealth you are able to experience life from a more privileged and comfortable perspective because you can do more things that you want to do! This is because an increase in available money allows you to focus more on the benefit, or what you want, and less on the cost, or what you are afraid of and don't have.

Some people might take this type of talk as materialistic and might argue that doing what you want and having more money doesn't guarantee happiness. And that is true: money doesn't guarantee happiness. But it does remove restrictions and create opportunities for privilege and fulfilment and, if managed properly, also removes financial stress. Remember, a problem that can be solved by writing a cheque isn't a problem anymore—and in my experience, 90% of problems can be solved with a cheque. It is time to stop living according to cost and start living according to benefit—experience life the way you want to.
The Trader with the Most Pips…

True or False: The trader who collects the most pips collects the most money.

Most beginner traders answer this question incorrectly. This is due to an incomplete understanding of pips. They understand that a pip is worth money and, therefore, assume that the more pips you have, the more money that you have, but you mustn't forget that you determine the value of the pip before entering the trade. You will notice that some of our strategies are designed to generate lots of pips, and that the value of the pip is relatively small. This isn't a problem, because large quantities of small pips create large amounts of money. You will also notice that we have designed other strategies for the purpose of only collecting smaller quantities of pips that are valued much higher. Because of this, it is possible to have fewer pips and more money, or more pips and less money.

Look at the example below and try to answer the following questions. Remember that the trade sizing ratio is found in Chapter 3, Risk Management.

TRADER A: £100,000 Account; Long trade on USD/CAD; Entry price 1.0510; Stop loss 1.0410; Target 1.0610
TRADER B: £100,000 Account; Long trade on AUD/USD; Entry price .8970; Stop loss .8960; Target .8990

What is the value of each trader's pip? TRADER A £_____ TRADER B £_____
Assuming both traders are successful, how much do they each profit? TRADER A £_____ TRADER B £_____
What was the significant difference between the two traders? _________________________________________________

Let's go through the answers together. To answer the first question, we must work out the trade sizing ratio. Trader A's account risk is £1000 (1% of his account balance), and his trade risk is 100 pips (the difference between the entry price and the stop loss price—the number of pips at risk). Trader A's pip value is £10 (£1000/100). Trader B's account risk is also £1000 (1% of his account balance), and his trade risk is 10 pips (the difference between the entry price and the stop loss price—the number of pips at risk). Trader B's pip value is £100 (£1000/10).
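The trade sizing arithmetic above can be written out directly. A minimal sketch, assuming a 0.0001 pip size (the function name is ours; the profit lines at the end are our arithmetic from the stated targets, not the book's printed answers):

```python
def pip_value(account_balance, risk_pct, entry, stop, pip_size=0.0001):
    """Trade sizing ratio: account risk divided by trade risk in pips."""
    account_risk = account_balance * risk_pct              # e.g. 1% of £100,000 = £1,000
    trade_risk_pips = round(abs(entry - stop) / pip_size)  # pips between entry and stop
    return account_risk / trade_risk_pips

# Trader A: 100 pips at risk -> each pip is worth £10
print(pip_value(100_000, 0.01, 1.0510, 1.0410))  # 10.0
# Trader B: 10 pips at risk -> each pip is worth £100
print(pip_value(100_000, 0.01, 0.8970, 0.8960))  # 100.0
# Profit if each target is hit (100 and 20 pips to target respectively):
print(100 * 10.0, 20 * 100.0)  # 1000.0 2000.0
```

This is the whole point of the quiz: Trader B collects far fewer pips than Trader A, yet each of his pips is worth ten times as much.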
TRADER A £10 TRADER B £100

Successful FX traders understand completely the power of compounding, and they rely heavily on it to assist in growing their accounts. They understand, unlike a lot of so-called traders, that to make money from trading you do not need gigantic wins every single day. Instead, what you need are small percentage gains several times a week that compound into heroic amounts of money on a monthly, quarterly, and annual basis.

Let me give you an example. Let's take a look at a very simple and achievable target of only 1.5% per week growth in your trading account. Now, if you remember from the chapter on risk management, you learned that you never enter a trade without having a minimum reward-to-risk ratio of at least 1:1. This means the minimum you will earn per winning trade will normally be at least 1%. That means, in a normal four-week month, your target is just 6%. So hitting a 6% target per month equates to six successful trades per month (net), or just one and a half successful trades a week—logically and mathematically, given the size of the FX market, this is very achievable if you are willing to stick to some rules and be disciplined.

The question is, what does 6% a month actually do for you? A 6% return on capital per month doubles your capital every single year. That is a 100% gross return on your money. This type of return is completely unheard of as far as traditional financial products (such as savings and pension funds) are concerned, but to the successful FX trader it is a physical reality!

Here is an example of trading accounts that are initially funded with $25,000, $50,000, and $100,000. The example is based on each trader achieving only 1.5% per week (6% a month) over a three year period. Remember that a 6% return on capital per month compounds to double the capital annually.

6% monthly return doubles capital annually—100% gross return
Seed Capital: $25,000 / $50,000 / $100,000
End of year 1: $50,000 / $100,000 / $200,000
you get extremely large returns on your capital. This time. This also demonstrates the concept of quality over quantity. if you never take any money out. You could hit your first weekly target on Monday morning and be done for the week. Income and Growth The goal is to be able to draw a suitable amount of money to enjoy life while leaving enough money in the account to allow it to grow at a compounding rate. Basically. We will keep the same target of 6% growth per month.000 $800.000 The most incredible thing about these numbers is that they are realistic! Remember. Instead.000. we are only halfway there as far as our overall income and growth plan is concerned. you will feel released from chasing down every potential trading opportunity that comes your way.Financial Freedom through FOREX End of year 2 End of year 3 $100. you have just allowed the account to grow through consistent profitable trading. you do not want your trading account to shrink as you withdraw money from it—that is not sustainable account management.000 $400. That is proper fund management. at this point. at the moment. Once you fully appreciate the fact that making a lot of money doesn’t necessarily consist of placing a lot of trades. we are going to draw half of the profit out each month— 3% comes out as income and 3% stays in as growth over the same three year period.000 $200. This is just one of the reasons why successful trading can be done without spending hours and hours per day in front of the screen.000 $400.5% per week—nothing too extraordinary on its own. you have not drawn any money out of your trading account yet. Although this is a good thing. Here is an example of how to create the income and growth plan. but when you combine it with the power of compounding. 128 . Let’s consider the following figure from earlier with the starting capital of $50. this is just 1.000 $200. however. 750 Annualized Income— Year 3 $56. the more money you can make. 
Just imagine what that would do for the numbers above.000 Annualized Income $25. We have students achieving up to and beyond 20% growth on their accounts consistently.500 Annualized Income— Year 2 $37.000 6% Monthly return doubles capital annually—100% gross return Capital at Start Account after Year 2 of Year 2 $75.Greg Secker & Chris Weaver 6% Monthly return doubles capital annually – 100% gross return Seed Capital $50. then 12% quadruples it—400% return per year. If 6% per month doubles your trading capital annually. That is meant to encourage you to get started with whatever you have right away—you must start growing your money right now! Let’s take another look at some numbers. And here is the great news: 6% growth per month is for beginners.500 6% Monthly return doubles capital annually – 100% gross return Capital at Start Account after Year 3 of Year 3 $112.250 By now you should be getting a feel for how this thing works. 129 . Now.000 is much greater than 1% of $1.000 $112. This is why Albert Einstein described compounding numbers as the most powerful force in the universe.000. that is not meant to put you off the idea of trading if you don’t have much to start with. It should also be clear that the more money you have to trade. Just think of it like this: 1% of $100.000 Account After Year 1 $75.500 $168. Bring this Book to Life The biggest mistake you could make would be to read this entire book and then walk away and do nothing with the information.” This is the attitude of traders who achieve sustainable. This allows you to plug in figures of growth and income specific to you.Financial Freedom through FOREX Please go to our website and download our “trade your way” spreadsheet. The FX market trades several trillion dollars on a daily basis. but it would have a massive impact on your personal financial position. Imagine you were able to generate just $5.co. large amounts of profit consistently on a long-term basis. 
It allows you to chart the progress of your trading profit and plug it into your personal financial situation. and attended introductory seminars but failed to ever get serious. Imagine if you could grab 250 pips a day out of a market that provides an endless amount of opportunities. Your profit would have no impact at all on the market. Your task is to take the tiniest percentages from this market consistently.uk/tradeyourway.” You could also look at it like this: if there are trillions of dollars moving around every single day. 130 . there is a very important saying that I would like to leave you with.000 dollars a week in profit from FX trading. The FX world is full of information gatherers who have read books. www. You cannot afford to be one of those people. This is why we can say. paid for courses. With this spreadsheet. “A small percentage of a lot of money equals a lot of money. get stuck in. there must also be an almost infinite number of pips moving around daily. you can plan with accuracy when you will be able to replace your current income and become financially free. and take action.knowledgetoaction.xls The purpose of this spread sheet is to bring clarity to your journey of financial freedom. A small percentage of a big pie… As we draw to a close. It goes like this: “A small percentage of a lot of money equals a lot of money. only has an inflow and. At the moment. Go do it now! 131 . The Dead Sea is not useful for creating or sustaining life in any way. shape. “Why is it so dead?” Less than 200 miles north of the Dead Sea is a thriving body of water known as the Sea of Galilee. everything is dead. So what is the difference? The difference is the OUTFLOW. In fact. but they are not. The Dead Sea.Greg Secker & Chris Weaver Those people never make it as FX traders because they fail to implement the information they gather. 
the information you have taken in through reading this book is dead.” Nature provides us with a clear and powerful demonstration of this point—it’s called the Dead Sea. The outflow stops the water from becoming stale and dense. it also has an outflow: the river Jordan. The Dead Sea is a body of water that is located in the Middle East between the nations of Israel and Jordan. because there is no outflow. It has simply served its purpose in bringing you the required information to begin trading the FX market successfully. there is nothing at all living in the Dead Sea—no plants or animals. due to the high levels of salt in the water. There are small fishing villages and even cities located all around the shores of this lush body of water. As the name implies. however. or form. given that both bodies of water are located relatively close together (fewer than 200 miles) and that the inflow of the Dead Sea is actually the outflow of the Sea of Galilee. We have a saying at Knowledge To Action that goes like this: “Information without implementation is DEAD. So don’t waste any time in getting started. the Dead Sea is so dead that. It hasn’t benefitted you or enhanced your lifestyle at all. you can actually float on top of it! The question is. The interesting thing about the relationship between these two bodies of water is that the outflow of the Sea of Galilee is the inflow of the Dead Sea—the river Jordan connects the two. you would expect both bodies of water to be similar. So. They are full of excuses and self pity. It is up to you to bring it to life! You do this by giving it an outflow. It is basically good for nothing. The Sea of Galilee has more than just an inflow. One is full of life and the other is completely dead. and keeps it fresh and full of life. You start to win the game of life by making better decisions. nobody says that good decisions are always simple. The well-used phrase. 132 . you become. I say it because it’s true. 
are aware that their choices may not be good for them. here is my behavioural checklist. in order to do or act in a certain way. such as the smoker getting a break from the office or getting the chance to connect and have some rapport with other smokers. after all. there are also some important things you need to decide to be. I would be able to tell how successful you would end up. yet many people don’t appear to link up their apparent lack of success to poor decision making. Attitude: Develop the right attitude and exhibit it daily. but I don’t mean it to be that. it’s everyday If I spent a day with you. As you embark upon this exciting journey as a trader. should be your heading . Only through this strategy do you give yourself the power to change the results. Some people make choices and go on to experience poor results—yet these people wonder why they can’t seem to get ahead in life. To help with this. “Be–Do–Have. in order to have. If you want to change your life. such as smokers. immediately after reading this book. be some secondary gain associated with the smoking. to allow us to have the things that we want. We must be a certain person.Financial Freedom through FOREX Success is a habit—It’s not what you do once in a while. The point here is that the secret to your success lies in your daily schedule: what you do consistently. One of those would be to take action now. that whilst you may not ‘instantly’ become. People often find this a somewhat pompous statement. change your daily schedule. but they carry on regardless—there may.daily. Be responsible: Stop blaming everyone and own your results. Other people. but they are necessary for success.” springs to mind here. It’s pretty obvious that good decisions create a better destiny. They never connect the dots. The point is. Faith: Deepen your faith and practice it daily. Commitment: Keep your promises. read e-mails at the end of the day. 133 . 
commit to making yourself a success every day.Greg Secker & Chris Weaver Prioritize: Do the important stuff first. This action might not be possible to undo. Are you sure you want to continue?
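The compounding arithmetic this chapter relies on (6% a month roughly doubling an account each year) is easy to verify. The script below is an illustrative sketch, not part of the original book:

```python
# Verify the chapter's claim: compounding at 6% per month
# multiplies capital by roughly 2 over a 12-month year.

def grow(balance, monthly_rate, months):
    """Compound `balance` at `monthly_rate` for the given number of months."""
    for _ in range(months):
        balance *= 1 + monthly_rate
    return balance

annual_factor = grow(1.0, 0.06, 12)
print(f"6%/month for one year multiplies capital by {annual_factor:.4f}")
# about 2.0122, i.e. roughly a 100% gross return

# Reproduce the seed-capital table from the chapter
for seed in (25_000, 50_000, 100_000):
    balance = seed
    for year in (1, 2, 3):
        balance = grow(balance, 0.06, 12)
        print(f"seed ${seed:,}: end of year {year} is about ${balance:,.0f}")
```

Note that the income-and-growth table in the chapter treats the retained 3% a month as a flat 50% a year; strict monthly compounding of 3% gives about 42.6% a year (1.03**12 is roughly 1.426), so the book's round figures are a simplification.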
https://www.scribd.com/doc/132662570/Financial-Freedom-Through-Forex
Overview

A request to an HTTP API is often just the URL with some query parameters.

API Response

The responses that we get from an API are data. That data can come in various formats, with the most popular being XML and JSON. Many HTTP APIs support multiple response formats so that developers can choose the one they are more comfortable parsing.

Getting Started

To get started, let's create a simple data structure and put in some data. First we import the json module into our program:

import json

# Create a data structure
data = [{'Hola': 'Hello', 'Hoi': 'Hello', 'noun': 'hello'}]

Printing the data to screen is as simple as:

print('DATA:', data)

which shows the following output:

DATA: [{'Hola': 'Hello', 'Hoi': 'Hello', 'noun': 'hello'}]

JSON Functions

When you use JSON in Python, there are different functions that we can make use of.

json.dumps

The json.dumps function takes a Python data structure and returns it as a JSON string:

json_encoded = json.dumps(data)

# print to screen
print(json_encoded)

OUTPUT: [{"Hola": "Hello", "Hoi": "Hello", "noun": "hello"}]

json.loads

The json.loads function takes a JSON string and returns it as a Python data structure:

decoded_data = json.loads(json_encoded)

# print to screen
print(decoded_data)

OUTPUT: [{'Hola': 'Hello', 'Hoi': 'Hello', 'noun': 'hello'}]
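The functions above work with strings; the json module also provides json.dump and json.load for reading and writing file objects directly. A minimal round-trip sketch in Python 3 (the file name here is just an illustration):

```python
import json
import os
import tempfile

data = [{'Hola': 'Hello', 'Hoi': 'Hello', 'noun': 'hello'}]

# json.dump writes a Python structure to an open file as JSON text
path = os.path.join(tempfile.gettempdir(), 'greetings.json')
with open(path, 'w') as f:
    json.dump(data, f, indent=2)  # indent=2 pretty-prints the file

# json.load parses the file back into a Python structure
with open(path) as f:
    decoded = json.load(f)

print(decoded == data)  # True: the round trip preserves the structure
```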
https://pythonarray.com/parsing-json-in-python/
Technical Support / On-Line Manuals / CARM User's Guide (Discontinued)

#include <ctype.h>

int ispunct (int c);    /* character to test */

Description: The ispunct function tests c to determine if it is a punctuation character. The punctuation characters are:

! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~

Return Value: The ispunct function returns a value of 1 if c is a punctuation character or a value of 0 if it is not.

See Also: isalnum, isalpha, iscntrl, isdigit, isgraph, islower, isprint, isspace, isupper, isxdigit

Example:

#include <ctype.h>
#include <stdio.h>    /* for printf */

void tst_ispunct (void)  {
  unsigned char i;
  char *p;

  for (i = 0; i < 128; i++)  {
    p = (ispunct (i) ? "YES" : "NO");
    printf ("ispunct ('%c') = %s\n", i, p);
  }
}
http://www.keil.com/support/man/docs/ca/ca_ispunct.htm
The varying impact of the changes put forth by P.L. 115-97 (the “Act” or the act formerly known as the Tax Cuts and Jobs Act), requires proactive year-end planning with a vigilant eye on continuing Treasury guidance on various aspects of the Act. One effect of the Act that should be addressed by practitioners and investors prior to year-end is the impact of Section 951A or the "global intangible low-taxed income" (“GILTI”) tax. For individuals and owners of flow through entities, in particular, ownership of shares in a foreign corporation may produce unwelcome results under the GILTI tax regime. The GILTI Tax While in general, the Act has the salutary effect of shifting the U.S. corporate taxation of foreign earnings to a "quasi-territorial" system, this is only a benefit for certain corporate taxpayers. For example, under Section 245A, a U.S. corporation is allowed a dividends received deduction for dividends received from an active foreign corporation. Therefore, a corporate U.S. shareholder of a foreign corporation would pay zero U.S. tax on such foreign source dividends. However, this benefit only extends to corporate taxpayers, as the Act treats flow through taxpayers that own controlled foreign corporations (“CFCs”) considerably differently than corporate shareholders. Under the regular anti-deferral regimes prior to the Act, U.S. shareholders (whether corporations or individuals) that owned 10% or more of the voting stock in a foreign corporation classified as a CFC generally were taxed on the CFC's earnings only upon receipt of a dividend. The primary exception to this rule was passive income or related party income that comprised the complex reporting rules known as the Subpart F regime. This regime required the inclusion in current income of certain types of income earned by a CFC, regardless of whether a distribution is received. A U.S. 
shareholder could, with proper structuring, plan around this regime and achieve tax deferral on income earned from a CFC. However, under the Act, a new anti-deferral tax regime has been implemented: the GILTI tax. Under the rules of the new GILTI tax, any U.S. person who is a U.S. shareholder of a CFC will be required to include currently in taxable income (in addition to any Subpart F income) its share of the CFC's GILTI, possibly including the CFC's undistributed active business income. As in the case of Subpart F income, this inclusion occurs regardless of whether any amount is distributed to the U.S. shareholder. The GILTI tax is effective for tax years starting after 31 December 2017 and applies to all U.S. shareholders, both corporate and non-corporate (e.g., individuals, trusts, partnerships, or S corporations). The Act also expanded the definitions of a CFC and of a U.S. shareholder for CFC purposes; consequently, more U.S. persons are now subject to the GILTI tax. Note that while the Act refers to "global intangible low-taxed income," its reach goes well beyond intangible income and indeed could cause taxation of active business income that would have been excluded under the Subpart F regime. GILTI therefore potentially encompasses all income of a CFC that is not otherwise subject to tax as Subpart F income or as effectively connected income, including service and sales income. Once the amount of GILTI has been determined, a corporate U.S. shareholder may claim a deduction under Section 250, subject to certain limitations, equal to 50% of its GILTI (reduced to 37.5% for tax years starting after 2025). Prior to the consideration of U.S. foreign tax credits, this results in a 10.5% minimum tax on a corporate U.S. shareholder's GILTI.
There is an exclusion from the application of the GILTI tax for high-taxed foreign income, but this high-taxed foreign income exclusion only applies to foreign base company income (i.e., income of a CFC that could already give rise to a Subpart F inclusion). Thus, income that is highly taxed but is not Subpart F income (and is not excluded from Subpart F by the high-tax exception) would be included in GILTI. Corporate shareholders, however, can claim a foreign tax credit for 80% of the foreign taxes associated with GILTI. Therefore, after applying the foreign tax credit, if the tested income is subject to an effective foreign income tax rate above 13.125% for tax years before 2026, and 16.406% thereafter, the corporate shareholder would have no GILTI tax (10.5%/80%= 13.125%). Flow through taxpayers, by contrast, do not benefit from foreign tax credits on GILTI, thereby, increasing the impact of GILTI on flow through taxpayers. While the amount of a U.S. shareholder's GILTI is calculated the same for corporate and flow through taxpayers, only corporate taxpayers are entitled to the GILTI deduction and related indirect foreign tax credits. A flow through or individual taxpayer is subject to the entire GILTI tax. Further, because the GILTI tax arises from foreign sourced business operations, flow through taxpayers cannot benefit from the Section 199A deduction for this income. Therefore, under most circumstances, individual or flow through U.S. taxpayers will pay a current tax on GILTI at a rate up to 37% (the highest individual tax rate) plus a potential additional 3.8% Medicare Tax. For both corporate and flow through taxpayers, GILTI is required to be included in gross income on the last day of the U.S. shareholder’s tax year. Therefore, GILTI planning must be in place by the last day of the U.S. shareholder’s tax year. The Proposed Regulations In late September 2018, the U.S. Dept. 
of Treasury (the “Treasury”) and Internal Revenue Service (“IRS”) issued proposed regulations (“Proposed Regs.”) for implementing the GILTI regime. Practitioners were hopeful the Proposed Regs. would add clarity, and possibly, relief for individual and flow through taxpayers affected by the GILTI tax. Unfortunately, such relief was not forthcoming. The Proposed Regs. deal with Section 951A in three categories or phases. First, the proposed regulations offer guidance on items that are determined at the CFC level (i.e., “tested income” and “tested loss,” “qualified business asset investment” (“QBAI”), and items that reduce “net deemed tangible income return (“net DTIR”)). See §§1.951A-2 through 1.951A-4. Second, they detail the rules for determining a U.S. shareholder’s pro rata share of the CFC-level items. See §1.951A-1(d). Finally, the proposed regulations discuss the obligatory aggregation rules that lead to the determination of a shareholder’s GILTI inclusion amount. See §1.951A-1(c). Moreover, since Section 951A does not contain any specific rules on the treatment of domestic partnerships that own stock of CFCs, Proposed Reg. §1.951A-5 is meant to provide guidance to such domestic partnerships and their partners (including S corporations and their shareholders) on how to compute their GILTI inclusion amounts. §1.951A-5 Domestic partnerships and their partners. Proposed Reg. §1.951A-5 provides rules regarding the application of Section 951A to domestic partnerships that own (within the meaning of Section 958(a)) stock in one or more CFCs and to partners of such domestic partnerships, including U.S. persons (within the meaning of Section 957(c)). Furthermore, the section would require a domestic partnership to provide certain information to each partner necessary for determining the GILTI inclusion amount or distributive share of the partnership’s GILTI inclusion amount. 
In considering the GILTI tax treatment of flow through shareholders, the IRS uses neither a pure aggregate nor a pure entity approach, but instead proposes distinguishing between non-U.S. and U.S. shareholder partners. In using a blended aggregate and entity approach, it treats a domestic partnership as an entity with respect to partners that are not U.S. shareholders of any CFC owned by the partnership, and treats the partnership as an aggregate for purposes of partners that are themselves U.S. shareholders with respect to CFCs owned by the partnership. In particular, paragraph (b) of Proposed Regs. §1.951A-5 provides rules for determining the GILTI inclusion amount of a domestic partnership and the distributive share of such amount of a partner that is not a U.S. shareholder; paragraph (c) provides the rules for determining the GILTI inclusion amount of a partner that is a U.S. shareholder. Proposed Reg. §1.951A-5 also provides the definitions of CFC tested item, partnership CFC, U.S. shareholder partner, and U.S. shareholder partnership, and discusses provisions applicable to tiered domestic partnerships. Finally, paragraph (g) provides examples illustrating the rules of this proposed section. §1.951A-6 Treatment of GILTI inclusion amount and adjustments to earnings and profits and basis related to tested loss CFCs. Proposed Reg. §1.951A-6 focuses on rules relating to the treatment of GILTI inclusion amounts and adjustments to earnings and profits and basis to account for tested losses. Particularly, this Section provides rules for the treatment of amounts taken into account in determining the net CFC tested income when applying Sections 163(e)(3)(B)(i) and 267(a)(3)(B), and also provides rules that increase the earnings and profits of a tested loss CFC for purposes of Section 952(c)(1)(A). Clearly, the Proposed Regs. did not provide any relief from the most onerous impact of the GILTI tax for flow through taxpayers owning CFCs. 
In addition, the Act introduced additional changes that will cause more U.S. taxpayers than ever before to be affected by the Internal Revenue Code’s anti-deferral provisions including the GILTI tax. Additional Changes under the Act As previously mentioned, prior to the Act, only U.S. persons owning 10% or more of the voting stock of the CFC were treated as US shareholders. For purposes of the GILTI tax, a U.S. shareholder is defined as a U.S. person who owns (directly or indirectly) 10% or more of the voting stock or equity in a foreign corporation. Therefore, more U.S. persons will be deemed U.S. shareholders for purposes of defining a foreign corporation as a CFC. In addition, the Act repealed Internal Revenue Code Section 958(b), which generally prevented stock owned by a foreign shareholder from being attributed downward to a domestic subsidiary. As a result, the stock attribution rules under the Act now permit treating a U.S. person as constructively owning certain stock of a foreign corporation held by a foreign shareholder (often referred to as “Downward Attribution”). This requires a careful review of ownership structures to determine if a U.S. person is being treated as a U.S. shareholder of a CFC under the Downward Attribution rules. The Act also eliminated the requirement that a U.S. shareholder must control a foreign corporation for an uninterrupted thirty (30) day period for the foreign corporation to be treated as a CFC. Under the new rules, a foreign corporation is a CFC even if it is only a CFC for a single day in a tax year. Possible Planning Strategies A non-corporate taxpayer may make the election under Section 962(a) to be taxed as a C corporation and, if such election is made effective prior to the date of inclusion of GILTI in gross income for such taxpayer (typically the last day of the taxable year), the taxpayer may generally obtain the benefits of the lower tax rate applicable to C corporations and associated foreign tax credits. 
However, making such an election generally gives rise to a dividend-level tax that may or may not produce similar results depending on the impact of state taxes, etc. A flow through U.S. shareholder could also opt to contribute its CFC stock to a domestic corporation. Assuming that the contribution of stock is effective prior to the date of inclusion of the CFC's GILTI in gross income for shareholders of such stock, and that the foreign corporation pays at least 13.125% of foreign taxes, the domestic corporation should generally not be subject to any additional federal income tax, resulting in a complete deferral of U.S. tax. The flow through or individual U.S. shareholder could thus receive the dividends from the U.S. corporation at the qualified dividend tax rate of 20%, meaning the CFC's income would only be subject to 20% U.S. tax plus the local taxes paid. Taxpayers should consult their tax advisors prior to year-end to determine if there is room to restructure their foreign corporate holdings to avoid the impact of the GILTI tax and/or other anti-deferral provisions of the Internal Revenue Code. Please note that for tax year 2018, any retroactive relief for year-end planning will be limited by the timing of the CFC's inclusion of GILTI income.
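The corporate rate mechanics described above (the 21% corporate rate, the 50% Section 250 deduction, and the 80% foreign tax credit haircut) reduce to simple arithmetic. The sketch below is illustrative only; it ignores the Section 78 gross-up, QBAI, expense allocation, and the foreign tax credit limitations discussed in the rules, and is certainly not tax advice:

```python
# Simplified GILTI arithmetic for a corporate U.S. shareholder
# (illustrative only, pre-2026 rates).

CORP_RATE = 0.21              # U.S. corporate income tax rate
SECTION_250_DEDUCTION = 0.50  # 50% GILTI deduction through 2025
FTC_HAIRCUT = 0.80            # only 80% of foreign taxes are creditable

def residual_gilti_rate(foreign_rate):
    """Residual U.S. tax rate on GILTI after the deduction and the credit."""
    pre_credit = CORP_RATE * (1 - SECTION_250_DEDUCTION)  # 10.5%
    credit = FTC_HAIRCUT * foreign_rate
    return max(0.0, pre_credit - credit)

# The breakeven foreign rate at which residual U.S. tax falls to zero
breakeven = CORP_RATE * (1 - SECTION_250_DEDUCTION) / FTC_HAIRCUT
print(f"breakeven foreign rate: {breakeven:.3%}")                     # 13.125%
print(f"residual at 0% foreign tax: {residual_gilti_rate(0.0):.2%}")  # 10.50%
```

This reproduces the article's figures: with no foreign tax the full 10.5% minimum applies, and once the foreign effective rate reaches 10.5%/80% = 13.125%, the credit fully offsets the U.S. tax on GILTI.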
https://www.lexology.com/library/detail.aspx?g=ff67381a-bef7-475a-a7ab-186213ad2cca
Advent of Code 2019 Day 2: Part 1

This day's challenge is quite different to Day 1 and involves creating a simple interpreter or emulator for processing a simple input program and set of opcodes. I have decided not to copy and paste the whole challenge here for brevity's sake but I will refer back to parts. I encourage you to read the whole challenge before continuing.

We are tasked with building a "computer" to interpret "Intcode" programs: an opcode of 99 means that the program is finished and should immediately halt, and encountering an unknown opcode means something went wrong.

We are provided with 3 opcodes in this part of the task: 1, 2, and 99. Opcode 1 adds together numbers read from two positions and stores the result in a third position; the three integers immediately after the opcode tell you these three positions. Opcode 2 works exactly like opcode 1, except it multiplies the two inputs instead of adding them. Again, the three integers after the opcode indicate where the inputs and outputs are, not their values. Opcode 99 halts the program.

We are also told how to move on to the next operation when done calculating the current opcode: once you're done processing an opcode, move to the next one by stepping forward 4 positions.

It is useful to note that all opcodes and data in this task appear to be integers. It is also useful to realise that from the description of this task we are actually implementing a very simple computer with Von Neumann architecture, that is, a computer where program data and program instructions are stored within the same space, which is the basis of most common computers in use today. An interesting side-effect of this architecture is that code can be self modifying.

As part of this task we are given some example inputs and their eventual outputs which will be useful when testing our implementation. Here are the initial and final states of a few small programs:

- 1,0,0,0,99 becomes 2,0,0,0,99 (1 + 1 = 2).
- 2,3,0,3,99 becomes 2,3,0,6,99 (3 * 2 = 6).
- 2,4,4,5,99,0 becomes 2,4,4,5,99,9801 (99 * 99 = 9801).
- 1,1,1,4,99,5,6,0,99 becomes 30,1,1,4,2,5,6,0,99.
Our overall task is given at the end as: once you have a working computer, the first step is to restore the gravity assist program (your puzzle input) to the "1202 program alarm" state it had just before the last computer caught fire. To do this, before running the program, replace position 1 with the value 12 and replace position 2 with the value 2. What value is left at position 0 after the program halts?

Implementing the Computer

We first need to read in the input program and convert it into a structure our program can use.

import scala.io.Source

val filename = "day2.input.txt"

// Open the input file
val bufferedSource = Source.fromFile(filename)

// Convert the contents into our opcodes
val originalOpcodes: Array[Int] = bufferedSource.mkString
  .trim
  .split(',')
  .map(string => string.toInt)

// Close the input file
bufferedSource.close()

This code will convert the input file into an array of integers ready for us to work with. The mkString method loads the whole contents of the file into a string, the trim method removes any trailing whitespace, and the split and map methods divide the string up on the commas and convert each piece to an integer.

Now, with our 3 given opcodes, we should define some kind of structure to make calculating them easier. Since this is a quick puzzle I will opt for defining simple functions and will also use the Scala type keyword to try and make my code easier to understand.
I will be making use of Scala's mutable indexed type mutable.IndexedSeq to store the working memory of the program that will be read and modified by each operation:

type Memory = mutable.IndexedSeq[Int]
type Opcode = Int

// Simple Operation type:
// Taking in the current position in memory and memory itself and outputting
// the new position and whether this operation should halt or not.
type Operation = (Int, Memory) => (Int, Boolean)

// The add operation
val addOp: Operation = (pos: Int, memory: Memory) => {
  val inputAddress1 = memory(pos + 1)
  val inputAddress2 = memory(pos + 2)
  val outputAddress = memory(pos + 3)
  memory(outputAddress) = memory(inputAddress1) + memory(inputAddress2)
  (pos + 4, false)
}

// The multiply operation
val multiplyOp: Operation = (pos: Int, memory: Memory) => {
  val inputAddress1 = memory(pos + 1)
  val inputAddress2 = memory(pos + 2)
  val outputAddress = memory(pos + 3)
  memory(outputAddress) = memory(inputAddress1) * memory(inputAddress2)
  (pos + 4, false)
}

// The simple halting operation
val haltOp: Operation = (pos: Int, memory: Memory) => {
  (pos, true)
}

Now I can create a simple lookup table of opcodes and their operations:

// The map of opcodes to their operations
val opcodeMap = Map[Opcode, Operation](
  (1, addOp),
  (2, multiplyOp),
  (99, haltOp)
)

Now that we have a simple map of opcodes to their operations, we need to write the code to execute them:

// Executed upon hitting an unknown opcode
val errorOp: Operation = (pos: Int, memory: Memory) => {
  val opcode = memory(pos)
  println(s"Unknown opcode encountered at $pos: $opcode")
  (pos, true)
}

@scala.annotation.tailrec
def iterate(pos: Int, memory: Memory): Unit = {
  val opcode = memory(pos)
  val operation = opcodeMap.getOrElse(opcode, errorOp)
  val (newPos, shouldHalt) = operation(pos, memory)
  if (shouldHalt) {
    return
  }
  iterate(newPos, memory)
}

This method will take in a position in memory and the memory itself and execute opcodes until it reaches an operation that causes it to halt. I have done this using the tail recursion support in Scala to make it easy to read; this will avoid stack overflow issues. I have also added an error operation that will be executed upon hitting an unknown opcode.

We can test one of the examples:

val mainMemory: Memory = mutable.IndexedSeq(2, 4, 4, 5, 99, 0)
iterate(0, mainMemory)
val finalOutput = mainMemory.mkString(",")
println(finalOutput)

This will output:

2,4,4,5,99,9801

We can run this with the file contents by copying the original code to the memory variable:

val mainMemory: Memory = mutable.IndexedSeq(originalOpcodes: _*)

Of course the task also instructs us to fix the program: to do this, before running the program, replace position 1 with the value 12 and replace position 2 with the value 2.

mainMemory(1) = 12
mainMemory(2) = 2

Then execute it, and we have the answer to the puzzle in index 0.

Day 2: Part 2

The second part of the day requires us to figure out the inputs to the program that will result in an expected value: "With terminology out of the way, we're ready to proceed. To complete the gravity assist, you need to determine what pair of inputs produces the output 19690720."

Something important noted in the puzzle is that opcodes can move the position in memory a variable number of steps depending on the instruction (the instruction pointer increases by the number of values in the instruction after processing it; the halt instruction increases it by 1). This actually means our halt instruction should technically look like:

val haltOp: Operation = (pos: Int, memory: Memory) => {
  (pos + 1, true)
}

The following extra details are provided to narrow down the search: the inputs are still provided by replacing the values at addresses 1 and 2, the value placed in address 1 is called the noun, the value placed in address 2 is called the verb, and each of the two inputs will be a value between 0 and 99, inclusive. This narrows down our search somewhat.

To repeat, what we need to do is: find the input noun and verb that cause the program to produce the output 19690720. What is 100 * noun + verb? (For example, if noun = 12 and verb = 2, the answer would be 1202.)
It is also suggested that we should make sure to reset the memory to the original opcodes before each attempt. To this end, we can write a function to make it easier to test various inputs:

```scala
def decode(noun: Int, verb: Int, originalMemory: IndexedSeq[Int]): Int = {
  val mainMemory: Memory = mutable.IndexedSeq(originalMemory: _*)
  mainMemory(1) = noun
  mainMemory(2) = verb
  iterate(0, mainMemory)
  mainMemory(0)
}
```

This will execute for the given noun and verb pair and output the result. We can then brute force the answer to the puzzle:

```scala
val random = scala.util.Random
var output: Int = 0
var noun: Int = 0
var verb: Int = 0

while (output != 19690720) {
  noun = random.nextInt(100)
  verb = random.nextInt(100)
  output = decode(noun, verb, originalOpcodes)
}

println(s"noun=$noun verb=$verb")
val answer = 100 * noun + verb
println(answer)
```

And this will output the solution to part 2!
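Since decode rebuilds memory from the original opcodes on every call, the random search above can also be replaced by a deterministic sweep of all 10,000 candidate pairs. This is a sketch of my own, assuming the decode helper and originalOpcodes defined above:

```scala
// Try every (noun, verb) pair exactly once instead of sampling at random.
val solution = (for {
  noun <- 0 until 100
  verb <- 0 until 100
  if decode(noun, verb, originalOpcodes) == 19690720
} yield 100 * noun + verb).headOption

solution.foreach(println)
```

This evaluates every pair rather than stopping at the first hit; wrapping the range in a .view would short-circuit, but with only 10,000 candidates either way finishes quickly.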
https://lyndon.codes/2019/12/03/advent-of-code-2019-day-2/
MXJ class problems

Hi all, I am compiling MXJ objects but have spent so much time trying to compile the class using Eclipse and mxj quickie. One day it is working, and then as I try to recompile the class it stops working. It has worked using quickie and Eclipse. Granted, I am new to MXJ, but I am unsure why it works sometimes and not others. Any reason why this is happening? I have uninstalled Max and recompiled several times on two computers. Quickie seems to stall Max when I try to delete it from the patcher window... which is bizarre... because it says it was compiled successfully... FYI I am using the googleIMG-java code. Any help is most welcome. Thanks in advance, Tim

On 16 Jun 2006, at 06:08, Timothy Devine wrote:
> It has worked using quickie and eclipse.

As Ben suggests, it's hard to determine what might be at fault without more information, but a common cause for Java problems is a messed-up classpath, or class/JAR files being created in the wrong place. I use Eclipse for MXJ development, with a customised project configuration and settings in the MXJ configuration file, but it's possible that Eclipse and Quickie are putting the class files in different places, and/or making different decisions about when to recompile. Can you give more details about your configuration? Oh, and Mac or PC?

— N.
nick rothwell — composition, systems, performance — http://

Ok, sorry, I thought it was a little vague. First, I am running the latest OS X, Max, Jitter, Java... all have been updated. So the story goes: I tried to compile googleIMG using Quickie and it worked fine, but then it stopped working the next day I tried to use it.
Oh, I had a few problems with restrictions on the folder in the app Support cycling74 java... I had to enable read and write... that allowed me to compile it at first in Quickie. So I recompiled it again in Quickie and it said it was successful (green text etc.), but when I tried to delete the mxj quickie object I got the spinning rainbow wheel forever... and creating an mxj googleIMG did the same thing. So it said it compiled ok but then would stall Max. That is the Quickie story... I have to shoot off to work; I will post the Eclipse story soon. It is quite similar though... Oh, and the Quickie compile-ok-then-stall happened on a G5 and a PowerBook. Thanks for your replies, I appreciate it. Tim

Ok, I am back! The object I compiled in Eclipse following the MXJ Eclipse instructions on the Cycling website (great help, thank you) worked fine for a day. Then I set about changing the googleIMG.java file and recompiling it with a slight change in the code. This didn't work, and it would not work with the original unchanged code either. I have recompiled and uninstalled everything several times. I get no errors when using the MXJ googleIMG object in the Max window, just a bang... so it is just not working like it has been, by returning a number of URLs. Having just written that, I will check whether it is being blocked by Little Snitch or something like that when I get back to the office on Monday. Thanks, Tim

MONDAY
Super! Arrived at the gallery to find the Google image search was working perfectly without having to do anything whatsoever! I just turned it on and started my patch to see what was happening and bang (ha), it worked!

TUESDAY
Unbelievable! It is not working again! I have no idea what this problem is and expect that nobody else does. This is such a random problem, which makes me think the problems lie outside of the computer: in the network, the googleIMG object, or the Google Image search engine. I have no idea (how anyone can help). I 'just' need to do a safe image search and restrict the size.
Here is the googleIMG Java code, FYI. Maybe it is set to work only on even days of the week! Thanks, Tim

```java
import com.cycling74.max.*;
import java.io.*;
import java.net.*;
import javax.swing.text.*;
import javax.swing.text.html.*;

public class googleIMG extends MaxObject {

    public void anything(String str, Atom[] args) {
        EditorKit kit = new HTMLEditorKit();
        Document doc = kit.createDefaultDocument();

        // The Document class does not yet handle charsets properly.
        doc.putProperty("IgnoreCharsetDirective", Boolean.TRUE);

        try {
            // Create a reader on the HTML content.
            Reader rd = getReader(str, args[1].getInt(), args[0].getInt());

            // Parse the HTML.
            kit.read(rd, doc, 0);

            // Iterate through the elements of the HTML document.
            ElementIterator it = new ElementIterator(doc);
            javax.swing.text.Element elem;
            while ((elem = it.next()) != null) {
                SimpleAttributeSet s = (SimpleAttributeSet)
                    elem.getAttributes().getAttribute(HTML.Tag.A);
                if (s != null) {
                    String link = s.getAttribute(HTML.Attribute.HREF).toString();
                    if (link.startsWith("/imgres")) {
                        int start = link.indexOf("/imgres?imgurl=") + 15;
                        int end = link.indexOf("&imgrefurl=");
                        String imgUrl = link.substring(start, end);
                        outlet(0, imgUrl);
                    }
                }
            }
            outletBang(1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    static Reader getReader(String term, int num, int start) throws IOException {
        //System.out.println("" + start + "&num=" + num + "&q=" + term);
        URLConnection conn = new URL("" + start + "&num=" + num + "&q=" + term).openConnection();
        conn.addRequestProperty("User-Agent", "Mozilla/4.76");
        return new InputStreamReader(conn.getInputStream());
    }
}
```

Ok, I am sorry that I keep responding to my own posts, but I think I can finally put this to rest and maybe one day somebody else will find a use for it. It seems that I compiled the Java classes fine, and that the problem was that the googleIMG code uses Firefox in some way, and the network I use to connect to the internet would only work with Safari.
Put simply: Safari worked, Firefox didn't. The tech enabled access with Firefox and the object started working again! If it works tomorrow then all is good. Tim
https://cycling74.com/forums/topic/mxj-class-problems/
In PHP, the use of namespaces allows classes/functions/constants of the same name to be used in different contexts without any conflict, thereby encapsulating these items. A namespace is a logical grouping of classes/functions etc., depending on their relevance. Just as a file with the same name can exist in two different folders, a class of a certain name can be defined in two namespaces. Further, just as we specify the complete path of a file to gain access to it, we need to specify the full name of a class along with its namespace.

Use of namespaces becomes crucial as application code grows. Giving a unique name to each class/function may become tedious and not exactly elegant, so namespaces come in handy. For example, if we need to declare a calculate() function to calculate area as well as tax, instead of defining them as something like calculate_area() and calculate_tax(), we can create two namespaces, area and tax, and use calculate() inside each of them.

Use of namespaces solves two problems:

- avoiding name collisions between classes/functions/constants you define and third-party classes/functions/constants.
- providing the ability to alias (or shorten) Extra_Long_Names, thereby improving readability of source code.

PHP namespaces provide a way to group related classes, interfaces, functions and constants. Namespace names are case-insensitive.

```php
<?php
namespace myspace;

function hello() {
    echo "Hello World\n";
}
?>
```

To call a function defined inside a namespace, qualify the name of the function with its namespace:

```php
<?php
namespace myspace;

function hello() {
    echo "Hello World\n";
}

\myspace\hello();
?>
```

The above code now returns the following output:

Hello World
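The area/tax example from the text can be made concrete. This sketch is illustrative (the namespace and function names follow the example above, not any real library); it defines two functions named calculate() that coexist because their namespaces differ:

```php
<?php
namespace area;

function calculate($length, $width) {
    return $length * $width;
}

namespace tax;

function calculate($amount, $rate) {
    return $amount * $rate / 100;
}

// The fully qualified names disambiguate the two calculate() functions.
namespace main;

echo \area\calculate(3, 4), "\n";   // 12
echo \tax\calculate(200, 5), "\n";  // 10
?>
```

Declaring several namespaces in one file is legal PHP, though in practice each namespace usually lives in its own file.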
https://www.tutorialspoint.com/php-namespaces-overview
Phillip is the author of the open-source Python libraries PEAK and PyProtocols, and has contributed fixes and enhancements to the Python interpreter. He is the author of the Python Web Server Gateway Interface specification (PEP 333). He can be contacted at pje@telecommunity.com. As software environments become more complex and programs get larger, it becomes more and more necessary to find ways to reduce code duplication and scattering of knowledge. While simple code duplication is easy to factor out into functions or methods, more complex code duplication is not. For example, if a method needs to be wrapped in a transaction, synchronized in a lock, or have its calls transmitted to a remote object, there often is no simple way to factor out a function or method to be called, because the part of the behavior that varies needs to be wrapped inside the common behavior. A second and related problem is scattering of knowledge. Sometimes a framework needs to be able to locate all of a program's functions or methods that have a particular characteristic, such as "all of the remote methods accessible to users with authorization X." The typical solution is to put this information in external configuration files, but then you run the risk of configuration being out of sync with the code. For example, you might add a new method, but forget to also add it to the configuration file. And of course, you'll be doing a lot more typing, because you'll have to put the method names in the configuration file, and any renaming you do requires editing two files. So no matter how you slice it, duplication is a bad thing for both developer productivity and software reliabilitywhich is why Python 2.4's new "decorator" feature lets you address both kinds of duplication. Decorators are Python objects that can register, annotate, and/or wrap a Python function or method. 
For example, the Python atexit module contains a register function that registers a callback to be invoked when a Python program exits. Without the new decorator feature, a program that uses this function looks something like Listing One(a). When Listing One(a) is run, it prints "Goodbye, world!" because when it exits, the goodbye() function is invoked. Now look at the decorator version in Listing One(b), which does exactly the same thing but uses decorator syntax instead: an @ sign and expression on the line before the function definition. This new syntax lets the registration be placed before the function definition, which accomplishes two things. First, you are made aware that the function is an atexit function before you read the function body, giving you a better context for understanding the function. With such a short function, it hardly makes a difference, but for longer functions or methods, it can be very helpful to know in advance what you're looking at. Second, the function name is not repeated. The first program refers to goodbye twice, so there is more duplication: precisely the thing we're trying to avoid.

Why Decorate?

The original motivation for adding decorator syntax was to allow class methods and static methods to be obvious to someone reading a program. Python 2.2 introduced the classmethod and staticmethod built-ins, which were used as in Listing Two(a). Listing Two(b) shows the same code using decorator syntax, which avoids the unnecessary repetitions of the method name and gives you a heads-up that a classmethod is being defined. While this could have been handled by creating a syntax specifically for class or static methods, one of Python's primary design principles is that: "Special cases aren't special enough to break the rules." That is, the language should avoid having privileged features that you can't reuse for other purposes.
Since class methods and static methods in Python are just objects that wrap a function, it would not make sense to create special syntax for just two kinds of wrapping. Instead, a syntax was created to allow arbitrary wrapping, annotation, or registration of functions at the point where they're defined. Many syntaxes for this feature were discussed, but in the end, a syntax resembling Java 1.5 annotations was chosen. Decorators, however, are considerably more flexible than Java's annotations, as they are executed at runtime and can have arbitrary behavior, while Java annotations are limited to only providing metadata about a particular class or method.

Creating Decorators

Decorators may appear before any function definition, whether that definition is part of a module, a class, or even contained in another function definition. You can even stack multiple decorators on the same function definition, one per line. But before you can do that, you first need to have some decorators to stack. A decorator is a callable object (like a function) that accepts one argument: the function being decorated. The return value of the decorator replaces the original function definition. See the script in Listing Three(a), which produces the output in Listing Three(b), demonstrating that the mydecorator function is called when the function is defined. For the first example decorator, I had it return the original function object unchanged, but in practice, it's rare that you'll do that (except for registration decorators). More often, you'll either be annotating the function (by adding attributes to it), or wrapping the function with another function, then returning the wrapper. The returned wrapper then replaces the original function. For example, the script in Listing Four prints "Hello, world!" because the does_nothing function is replaced with the return value of stupid_decorator.
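As a concrete illustration of the registration case, here is a small sketch of my own (not one of the article's numbered listings; the names are made up): a decorator that records each function in a module-level list and hands the function back unchanged.

```python
registry = []

def remote_method(func):
    # Record the function's name, then return the function untouched --
    # no wrapping involved, so the decorated function behaves as before.
    registry.append(func.__name__)
    return func

@remote_method
def ping():
    return "pong"

@remote_method
def status():
    return "ok"

print(registry)  # ['ping', 'status']
```

A framework can later walk registry to find every function marked this way, instead of relying on a separate configuration file that could fall out of sync with the code.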
Objects as Decorators

As you can see, Python doesn't care what kind of object you return from a decorator, which means that for advanced uses, you can turn functions or methods into specialized objects of your own choosing. For example, if you wanted to trace certain functions' execution, you could use something like Listing Five. When run, Listing Five prints "entering" and "exiting" messages around the "Hello, world" function. As you can see, a decorator doesn't have to be a function; it can be a class, as long as it can be called with a single argument. (Remember that in Python, calling a class returns a new instance of that class.) Thus, the traced class is a decorator that replaces a function with an instance of the traced class. So after the hello function definition in Listing Five, hello is no longer a function, but is instead an instance of the traced class that has the old hello function saved in its func attribute. When that wrapper instance is called (by the hello() statement at the end of the script), Python's class machinery invokes the instance's __call__() method, which then invokes the original function between printing trace messages.

Stacking Decorators

Now that we have an interesting decorator, you can stack it with another decorator to see how decorators can be combined. The script in Listing Six prints "Called with <class '__main__.SomeClass'>", wrapped in "entering" and "exiting" messages. The ordering of the decorators determines the structure of the result. Thus, someMethod is a classmethod descriptor wrapping a traced instance wrapping the original someMethod function. So, outer decorators are listed before inner decorators. Therefore, if you are using multiple decorators, you must know what kind of object each decorator expects to receive, and what kind of object it returns, so that you can arrange them in a compatible wrapping order: the output of the innermost decorator must be compatible with the input of the next-outer decorator.
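The effect of decorator ordering can be seen with two tiny numeric decorators. This is my own illustration, not one of the article's listings:

```python
def double_result(func):
    def wrapper(x):
        return func(x) * 2
    return wrapper

def add_one_to_result(func):
    def wrapper(x):
        return func(x) + 1
    return wrapper

@double_result
@add_one_to_result
def identity(x):
    return x

# double_result is outermost, so the result is (x + 1) * 2
print(identity(5))  # 12
```

With the two decorator lines swapped, identity(5) would instead be (5 * 2) + 1 = 11, showing that each ordering builds a different wrapper structure.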
Usually, most decorators expect a function on input, and return either a function or an attribute descriptor as their output. The Python built-ins classmethod, staticmethod, and property all return attribute descriptors, so their output cannot be passed to a decorator that expects a function. That's why I had to put classmethod first in Listing Six. As an experiment, try reversing the order of @traced and @classmethod in Listing Six, and see if you can guess what will happen.

Functions as Decorators

Because most decorators expect an actual function as their input, some of them may not be compatible with our initial implementation of @traced, which returns an instance of the traced class. Let's rework @traced such that it returns an actual function object, so it'll be compatible with a wider range of decorators. Listing Seven provides the same functionality as the original traced decorator, but instead of returning a traced object instance, it returns a new function object that wraps the original function. If you've never used Python closures before, you might be a little confused by this function-in-a-function syntax. Basically, when you define a function inside of another function, any undefined local variables in the inner function will take the value of that variable in the outer function. So here, the value of func in the inner function comes from the value of func in the outer function. Because the inner function definition is executed each time the outer function is called, Python actually creates a new wrapper function object each time. Such function objects are called "lexical closures," because they enclose a set of variables from the lexical scope where the function was defined. A closure does not actually duplicate the code of the function, however. It simply encloses a reference to the existing code, and a reference to the free variables from the enclosing function.
In this case, that means that the wrapper closure is essentially a pointer to the Python bytecode making up the wrapper function body, and a pointer to the local variables of the traced function during the invocation when the closure was created. Because a closure is really just a normal Python function object (with some predefined variables), and because most decorators expect to receive a function object, creating a closure is perhaps the most popular way of creating a stackable decorator.

Decorators with Arguments

Many applications of decorators call for parameterization. For example, say you want to create a pair of @require and @ensure decorators so that you can record a method's precondition and postcondition. Python lets us specify arguments with our decorators; see Listing Eight. (Of course, Listing Eight is for illustration only. A full-featured implementation of preconditions and postconditions would need to be a lot more sophisticated than this to deal with things like inheritance of conditions, allowing postconditions to access before/after expressions, and allowing conditions to access function arguments by name instead of by position.) You'll notice that the require() decorator creates two closures. The first closure creates a decorator function that knows the expr that was supplied to @require(). This means require itself is not really the decorator function here. Instead, require returns the decorator function, here called decorator. This is very different from the previous decorators, and this change is necessary to implement parameterized decorators. The second closure is the actual wrapper function that evaluates expr whenever the original function is called. Try calling the test() function with different numbers of arguments, and see what happens. Also, try changing the @require line to use a different precondition, or stack multiple @require lines to combine preconditions. You'll also notice that @require(expr="len(__args)==1") still works.
Decorator invocations follow the same syntax rules as normal Python function or method calls, so you can use positional arguments, keyword arguments, or both.

Function Attributes

All of the examples so far have been things that can't be done quite so directly with Java annotations. But what if all you really need is to tack some metadata onto a function or method for later use? For this purpose, you may wish to use function attributes in your decorator. Function attributes, introduced in Python 2.1, let you record arbitrary values as attributes on a function object. For example, suppose you want to track the author of a function or method using an @author() decorator? You could implement it as in Listing Nine. In this example, you simply set an author_name attribute on the function and return it, rather than creating a wrapper. Then, you can retrieve the attribute at a later time as part of some metadata-gathering operation.

Practicing "Safe Decs"

To keep the examples simple, I've been ignoring "safe decorator" practices. It's easy to create a decorator that will work by itself, but creating a decorator that will work properly when combined with other decorators is a bit more complex. To the extent possible, your decorator should return an actual function object, with the same name and attributes as the original function, so as not to confuse an outer decorator or cancel out the work of an inner decorator. This means that decorators that simply modify and return the function they were given (like Listings Three and Nine) are already safe. But decorators that return a wrapper function need to do two more things to be safe:

- Set the new function's name to match the old function's name.
- Copy the old function's attributes to the new function.

These can be accomplished by adding just three short lines to our old decorators. (Compare the version of @require in Listing Ten with the original in Listing Eight.)
Before returning the wrapper function, the decorator function in Listing Ten changes the wrapper function's name (by setting its __name__ attribute) to match the original function's name, and sets its __dict__ attribute (the dictionary containing its attributes) to the original function's __dict__, so it will have all the same attributes that the original function did. It also changes the wrapper function's documentation (its __doc__ attribute) to match the original function's documentation. Thus, if you used this new @require() decorator stacked over the @author() decorator, the resulting function would still have an author_name attribute, even though it was a different function object than the original one being decorated.

Putting It All Together

To illustrate, I'll use a few of these techniques to implement a complete, useful decorator that can be combined with other decorators. Specifically, I'll implement an @synchronized decorator (Listing Eleven) that implements Java-like synchronized methods. A given object's synchronized methods can only be invoked by one thread at a time. That is, as long as any synchronized method is executing, any other thread must wait until all the synchronized methods have returned. To implement this, you need to have a lock that you can acquire whenever the method is executing. Then you can create a wrapping decorator that acquires and releases the lock around the original method call. I'll store this lock in a _sync_lock attribute on the object, automatically creating a new lock if there's no _sync_lock attribute already present. But what if one synchronized method calls another synchronized method on the same object? Using simple mutual exclusion locks would result in a deadlock in this case, so we'll use a threading.RLock instead. An RLock may be held by only one thread, but it can be recursively acquired and released.
Thus, if one synchronized method calls another on the same object, the lock count of the RLock simply increases, then decreases as the methods return. When the lock count reaches zero, other threads can acquire the lock and can, therefore, invoke synchronized methods on the object again. There are two little tricks being done in Listing Eleven's wrapper code that are worth knowing about. First, the code uses a try/except block to catch an attribute error in the case where the object does not already have a synchronization lock. Since in the common case the lock should exist, this is generally faster than using an if/then test to check whether the lock exists (because the if/then test would have to execute every time, but the AttributeError will occur only once). Second, when the lock doesn't exist, the code uses the setdefault method of the object's attribute dictionary (its __dict__) to either retrieve an existing value of _sync_lock, or to set a new one if there was no value there before. This is important because it's possible that two threads could simultaneously notice that the object has no lock, and then each would create and successfully acquire its own lock, while ignoring the lock created by the other! This would mean that our synchronization could fail on the first call to a synchronized method of a given object. Using the atomic setdefault operation, however, guarantees that no matter how many threads simultaneously detect the need for a new lock, they will all receive the same RLock object. That is, one setdefault() operation sets the lock, then all subsequent setdefault() operations receive that lock object. Therefore, all threads end up using the same lock object, and thus only one is able to enter the wrapped method at a time, even if the lock object was just created.

Conclusion

Python decorators are a simple, highly customizable way to wrap functions or methods, annotate them with metadata, or register them with a framework of some kind.
But, as a relatively new feature, their full possibilities have not yet been explored, and perhaps the most exciting uses haven't even been invented yet. Just to give you some ideas, here are links to a couple of lists of use cases that were posted to the mailing list for the developers working on the next version of Python: and 2004-April/044132.html. Each message uses different syntax for decorators, based on some C#-like alternatives being discussed at the time, but the actual decorator examples presented should still be usable with the current syntax. And, by the time you read this article, there will likely be many other uses of decorators out there. For example, Thomas Heller has been working on experimental decorator support for the ctypes package (), and I've been working on a complete generic function package using decorators, as part of the PyProtocols system ( PyProtocols.html). So, have fun experimenting with decorators! (Just be sure to practice "safe decs," to ensure that your decorators will play nice with others.)

DDJ

Listing One

(a)
```python
import atexit

def goodbye():
    print "Goodbye, world!"

atexit.register(goodbye)
```

(b)
```python
import atexit

@atexit.register
def goodbye():
    print "Goodbye, world!"
```

Listing Two

(a)
```python
class Something(object):
    def someMethod(cls, foo, bar):
        print "I'm a class method"
    someMethod = classmethod(someMethod)
```

(b)
```python
class Something(object):
    @classmethod
    def someMethod(cls, foo, bar):
        print "I'm a class method"
```

Listing Three

(a)
```python
def mydecorator(func):
    print "decorating", func
    return func

print "before definition"

@mydecorator
def some_function():
    print "I'm never called, so you'll never see this message"

print "after definition"
```

(b)
```
before definition
decorating <function some_function at 0x00A933C0>
after definition
```

Listing Four

```python
def stupid_decorator(func):
    return "Hello, world!"

@stupid_decorator
def does_nothing():
    print "I'm never called, so you'll never see this message"

print does_nothing
```

Listing Five

```python
class traced:
    def __init__(self, func):
        self.func = func
    def __call__(__self, *__args, **__kw):
        print "entering", __self.func
        try:
            return __self.func(*__args, **__kw)
        finally:
            print "exiting", __self.func

@traced
def hello():
    print "Hello, world!"

hello()
```

Listing Six

```python
class SomeClass(object):
    @classmethod
    @traced
    def someMethod(cls):
        print "Called with class", cls

SomeClass.someMethod()
```

Listing Seven

```python
def traced(func):
    def wrapper(*__args, **__kw):
        print "entering", func
        try:
            return func(*__args, **__kw)
        finally:
            print "exiting", func
    return wrapper
```

Listing Eight

```python
def require(expr):
    def decorator(func):
        def wrapper(*__args, **__kw):
            assert eval(expr), "Precondition failed"
            return func(*__args, **__kw)
        return wrapper
    return decorator

@require("len(__args)==1")
def test(*args):
    print args[0]

test("Hello world!")
```

Listing Nine

```python
def author(author_name):
    def decorator(func):
        func.author_name = author_name
        return func
    return decorator

@author("Lemony Snicket")
def sequenceOf(unfortunate_events):
    pass

print sequenceOf.author_name  # prints "Lemony Snicket"
```

Listing Ten

```python
def require(expr):
    def decorator(func):
        def wrapper(*__args, **__kw):
            assert eval(expr), "Precondition failed"
            return func(*__args, **__kw)
        wrapper.__name__ = func.__name__
        wrapper.__dict__ = func.__dict__
        wrapper.__doc__ = func.__doc__
        return wrapper
    return decorator
```

Listing Eleven

```python
def synchronized(func):
    def wrapper(self, *__args, **__kw):
        try:
            rlock = self._sync_lock
        except AttributeError:
            from threading import RLock
            rlock = self.__dict__.setdefault('_sync_lock', RLock())
        rlock.acquire()
        try:
            return func(self, *__args, **__kw)
        finally:
            rlock.release()
    wrapper.__name__ = func.__name__
    wrapper.__dict__ = func.__dict__
    wrapper.__doc__ = func.__doc__
    return wrapper

class SomeClass:
    """Example usage"""
    @synchronized
    def doSomething(self, someParam):
        """This method can only be entered by one thread at a time"""
```
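The setdefault trick used in Listing Eleven can be checked in isolation. In this sketch of my own (not part of the original article), two racing setdefault calls on the same key yield the same object, which is what guarantees that all threads end up sharing one RLock:

```python
d = {}

# Imagine two threads both discovering that no lock exists yet, and each
# trying to install its own: only the first object actually gets stored,
# and both calls return that stored object.
lock_seen_by_thread_1 = d.setdefault('_sync_lock', object())
lock_seen_by_thread_2 = d.setdefault('_sync_lock', object())

print(lock_seen_by_thread_1 is lock_seen_by_thread_2)  # True
```

Because dict.setdefault is a single operation on the dictionary, there is no window in which a second caller can overwrite the lock installed by the first.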
http://www.drdobbs.com/web-development/python-24-decorators/184406073
Pig Latin - a beginner lesson with some depth

I recently did a unit where I had my students convert words into Pig Latin. I like the problem because to start it only requires strings, functions and if statements but there is some depth to the unit. We start with simplified rules:

- If the word starts with a vowel, add "ay" to the end of the word
- If it starts with a consonant, move the first letter to the end and add "ay"
- don't worry about anything else

Students usually start with something like this:

```python
def piglatinify(word):
    first = word[0]
    if first == "a" or first == "e" or first == "i" \
       or first == "o" or first == "u":
        result = word + "ay"
    else:
        result = word[1:] + first + "ay"
    return result
```

Students soon realize it's much easier to check for vowels rather than consonants. Some students discover that you can use any of the following instead of the big compound or:

```python
first in ('a','e','i','o','u')
first in ['a','e','i','o','u']
first in "aeiou"
```

This allows us to talk a little about lists (and tuples if you like) as well as how strings are similar to them in certain ways. By itself, this is a nice little beginner project but it gets better. Since we talked a bit about lists and strings in the refinement, we then talk about using Python's split() method that parses a string on whitespace. We also talk about the for loop. Ultimately this leads us to writing a function to convert a sentence into Pig Latin:

```python
def sentence_to_piglatin(sentence):
    result_list = []
    for word in sentence.split():
        result_list.append(piglatinify(word))
    return " ".join(result_list)
```

But this doesn't work with real sentences. Let's focus on two problems. The first is that it won't handle the period at the end of the sentence properly. It would take that last word, let's say dog. and convert it to og.day rather than ogday.. It also doesn't handle capital letters. There are other issues but they have similar solutions to the ones we'll use for these two. This is where things get fun.
To handle the period, students frequently jump to modifying the if conditions in piglatinify:

```python
def piglatinify(word):
    first = word[0]
    if first in "aeiou":
        if word[-1] == '.':
            result = word[:-1] + "ay."
        else:
            result = word + "ay"
    else:
        if word[-1] == '.':
            result = word[1:-1] + first + "ay."
        else:
            result = word[1:] + first + "ay"
    return result
```

or something similar. This is a straightforward working solution, but it's also a great place to introduce the idea of changing the data instead of using complex conditionals to handle special cases (earlier posts here and here). If we take out the period we can do our regular piglatinify and then add it back in. This leads us to a solution like this:

```python
def piglatinify(word):
    # at some point we added this
    if len(word) == 0:
        return word

    # if the last character is a '.' store it and remove it from word
    last_char = ''
    if word[-1] == '.':
        word = word[:-1]
        last_char = '.'

    first = word[0]
    if first in 'aeiou':
        result = word + "ay"
    else:
        result = word[1:] + first + "ay"
    result = result + last_char
    return result
```

We can do something similar to deal with words that start with upper case letters:

```python
def piglatinify(word):
    # at some point we added this
    if len(word) == 0:
        return word

    # handle periods
    # if the last character is a '.' store it and remove it from word
    last_char = ''
    if word[-1] == '.':
        word = word[:-1]
        last_char = '.'

    # check for capital
    starts_with_capital = False
    if word[0] == word[0].upper():
        starts_with_capital = True
        word = word[0].lower() + word[1:]

    first = word[0]
    if first in 'aeiou':
        result = word + "ay"
    else:
        result = word[1:] + first + "ay"
    result = result + last_char

    if starts_with_capital:
        result = result.capitalize()
    return result
```

You can approach other special cases similarly. So, there you have it. A fun little problem that you can do with your students early on in a CS0 with surprising depth.
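As a quick sanity check, here is the final refactored solution gathered into one runnable snippet. The sentence_to_piglatin helper is the one developed above (renamed to a valid Python identifier), and the sample sentence is mine:

```python
def piglatinify(word):
    # gather the refinements from the post: empty words, trailing
    # periods, and leading capitals are handled by adjusting the data
    if len(word) == 0:
        return word

    last_char = ''
    if word[-1] == '.':
        word = word[:-1]
        last_char = '.'

    starts_with_capital = word[0] == word[0].upper()
    if starts_with_capital:
        word = word[0].lower() + word[1:]

    first = word[0]
    if first in 'aeiou':
        result = word + "ay"
    else:
        result = word[1:] + first + "ay"
    result = result + last_char

    if starts_with_capital:
        result = result.capitalize()
    return result


def sentence_to_piglatin(sentence):
    return " ".join(piglatinify(word) for word in sentence.split())


print(sentence_to_piglatin("The dog runs."))  # Hetay ogday unsray.
```

Note that the capital check treats any word whose first character has no lowercase form (digits, punctuation) as "capitalized", which is another one of those special cases students can chase down.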
https://cestlaz.github.io/post/piglatin/
Introduction to Goto Statement in C#

The Goto Statement in C#, also known as the Jump Statement, is used to transfer the flow of the program directly to a labeled statement. These statements move the control of the program to other parts. One of the most common applications of the goto statement is to move the control of the program to a specific point in switch statements. In the case of a deeply nested loop, a goto statement can be a very good way to get out of the loop. The nested loop continues and the program waits until the end of the loop, but if the condition is satisfied midway, we can use a goto statement and quickly get out of the loop and save time.

Syntax: Following is the standard syntax for the goto statement:

```csharp
goto statement_name;
```

The syntax begins with the goto keyword followed by the statement name. Whenever this line is executed, the program will jump to the statement_name part of the program; that is how the control shifts.

Flowchart of Goto Statement

Let us now understand the working of the goto statement with the flowchart. Referring to the flowchart above, we can understand the code flow of a program with a goto statement. We have multiple statements, 1, 2 and 3, and as the code moves ahead, it encounters a goto statement in the 3rd statement. From the 3rd statement, the code will jump to wherever the goto statement is pointing; in our sample, the goto statement refers to statement 1. Meaning, when the code stumbles upon the goto statement, it checks the condition, and based on the result the code will either move ahead and conclude the program, or the goto statement will be executed and the code will make the jump.

How Goto Statements Work in C#?
Basically the goto statement is a jump statement; it works in any program as a way to provide a quick exit. Its primary purpose is to transfer the control of the program to a specific point at any given time.

Example #1

Now that we have understood how the goto statement works, let's demonstrate it with proper code.

Code:

```csharp
using System;

public class sampleGoto
{
    public static void Main(string[] args)
    {
    eligibility:
        Console.WriteLine("You are not an adult yet!");
        Console.WriteLine("Enter your age:\n");
        int age = Convert.ToInt32(Console.ReadLine());
        if (age < 18)
        {
            goto eligibility;
        }
        else
        {
            Console.WriteLine("You are an adult!");
            Console.Read();
        }
    }
}
```

Code Interpretation: Basically we have our using and namespace declarations, then our class with the Main method within. Then we have our goto label, "eligibility", followed by a print statement, "You are not an adult yet!". After printing this statement, the program moves ahead and executes the next line. Here "Enter your age:" is printed and we have to input a value. Upon entering the value, the program enters the if statement and checks the condition. If the condition is fulfilled, meaning the value we entered is less than 18, it reaches the goto statement. As the program touches the goto statement, it jumps to the labeled part, i.e. eligibility, and moves ahead from that point. Otherwise the if condition is not satisfied and the program enters the else part, where it prints "You are an adult!", meaning the program has come to a conclusion. Refer to the attached screenshot for the output. As shown in the screenshot, when we passed a value less than 18, it printed the first statement, and then when we entered a value greater than 18, the program printed the else statement.
Now that we have demonstrated a simple program with a goto statement, let's try another example which carries out a similar operation.

Example #2

Code:

```csharp
using System;

public class sampleGoto
{
    public static void Main(string[] args)
    {
    result:
        Console.WriteLine("Sorry! you did not pass the test.");
        Console.WriteLine("Please submit your Passing Percentage:\n");
        int age = Convert.ToInt32(Console.ReadLine());
        if (age < 40)
        {
            goto result;
        }
        else
        {
            Console.WriteLine("Congrats! You have passed the Test");
            Console.Read();
        }
    }
}
```

Code interpretation: Similar to the first program, we have demonstrated the working of the goto statement. Here we have a simple condition to check whether the entered input value is above 40 or not. Upon executing, the program prints the first line, "Sorry! you did not pass the test." Then the program asks the user to enter a numeric value. Upon entering the value, the program enters the if-else block, where the entered value is checked for being less than or greater than 40. If the entered value is less than 40, the program executes the goto statement and jumps to the labeled statement. If the entered value is greater than 40, the program proceeds and enters the else part, where it prints "Congrats! You have passed the Test" and ends. Refer to the attached screenshot for the output.

Should you implement goto? It is advisable not to use goto statements, because the program logic becomes more complex. Also, it can be quite difficult to trace the flow of code once the program encounters a goto statement. On the contrary, if you think using goto will smoothen the flow of the program, then you are free to use it. Goto is rarely used.

Conclusion

We have understood what the goto statement in C# is. We have broadly covered its working and its syntax. Later, with examples, we demonstrated its working.
We implemented a goto statement in two examples with different scenarios. Though the goto statement is easy to use, it is not recommended in long programs, as goto statements might jumble the program and make it difficult to debug.
https://www.educba.com/goto-statement-in-c-sharp/?source=leftnav
Building a Text Classifier with Spacy 3.0

Explosion AI just released brand new nightly releases for their natural language processing toolkit spaCy. I have been a huge fan of this package for years, since it allows for rapid development and is easy to use for creating applications that deal with naturally written text. There have been a plethora of great articles in the past that showcased spaCy's API for the 2.0+ releases. The recent changes to the API affect most of those tutorials, which are now broken with the newly released spaCy v3. I like the changes and want to show how simple it has gotten to train a text classifier with very few lines of code.

In the first step, we need to install some packages:

```shell
pip install spacy-nightly
pip install ml-datasets
python -m spacy download en_core_web_md
```

ml-datasets is a curated repository of datasets from Explosion AI that also comes with a simple way to load the data (explosion/ml-datasets on GitHub: loaders for various machine learning datasets for testing and example scripts, previously in thinc.extra.datasets). We will use this library to get data to train our classifier.
Let's Build a Classifier

The full code is also available in this GitHub repository: p-sodmann/Spacy3Textcat (SpacyV3 Text Categorizer Tutorial).

We need to set up everything first:

```python
import spacy

# tqdm is a great progress bar for python
# tqdm.auto automatically selects a text based progress bar for the console
# and html based output in jupyter notebooks
from tqdm.auto import tqdm

# DocBin is spacy's new way to store Docs in a binary format for training later
from spacy.tokens import DocBin

# We want to classify movie reviews as positive or negative
from ml_datasets import imdb

# load movie reviews as a tuple (text, label)
train_data, valid_data = imdb()

# load a medium sized english language model in spacy
nlp = spacy.load("en_core_web_md")
```

Then, we need to turn the texts and labels into neat spaCy Doc objects.

```python
def make_docs(data):
    """
    this will take a list of texts and labels
    and transform them in spacy documents

    data: list(tuple(text, label))

    returns: List(spacy.Doc.doc)
    """
    docs = []
    # nlp.pipe([texts]) is way faster than running nlp(text) for each text
    # as_tuples allows us to pass in a tuple,
    # the first one is treated as text
    # the second one will get returned as it is.
    for doc, label in tqdm(nlp.pipe(data, as_tuples=True), total=len(data)):
        # we need to set the (text)cat(egory) for each document
        doc.cats["positive"] = label
        # put them into a nice list
        docs.append(doc)
    return docs
```

Now we only need to transform our data and store it as a binary file on disk.
```python
# we are so far only interested in the first 5000 reviews
# this will keep the training time short.
# In practice take as much data as you can get.
# you can always reduce it to make the script even faster.
num_texts = 5000

# first we need to transform all the training data
train_docs = make_docs(train_data[:num_texts])
# then we save it in a binary file to disc
doc_bin = DocBin(docs=train_docs)
doc_bin.to_disk("./data/train.spacy")

# repeat for validation data
valid_docs = make_docs(valid_data[:num_texts])
doc_bin = DocBin(docs=valid_docs)
doc_bin.to_disk("./data/valid.spacy")
```

Next, we need to create a configuration file that tells spaCy what it is supposed to learn from our data. Explosion AI created a tool to quickly make a base configuration file. In our case, we would choose "textcat" under components, CPU-preferred under hardware, and "optimize for" efficiency. Usually spaCy will provide sane defaults for each parameter. They won't be the best parameters for your problem, but they will work just fine for most data. We need to change the paths for our train and validation data:

```
[paths]
train = "data/train.spacy"
dev = "data/valid.spacy"
```

In the next step, we need to turn our base configuration into a full configuration. spaCy will automatically fill all missing values with their default parameters:

```shell
python -m spacy init fill-config ./base_config.cfg ./config.cfg
```

Finally, we can fire up the training in the CLI:

```shell
python -m spacy train config.cfg --output ./output
```

For each training step, it yields an output with its loss and accuracy. The loss tells us how big the mistakes of the classifier are, and the score tells us how often the binary classification is correct.

```
E    #     LOSS TOK2VEC  LOSS TEXTCAT  CATS_SCORE  SCORE
0    0     0.00          0.25          48.82       0.49
2    5600  0.00          1.91          92.54       0.93
```

Running the classifier on our own input

The trained model is saved in the "output" folder.
Once the script is done, we can load the "output/model-best" model and check the prediction for new inputs:

```python
import spacy

# load the best model from training
nlp = spacy.load("output/model-best")

text = ""
print("type : 'quit' to exit")
# predict the sentiment until someone writes quit
while text != "quit":
    text = input("Please enter example input: ")
    doc = nlp(text)
    if doc.cats['positive'] > .5:
        print(f"the sentiment is positive")
    else:
        print(f"the sentiment is negative")
```

Further Improvements

We did not use any pre-trained vectors for this text classifier, and we probably won't get representative scores for "how good" the review is. We get a binary answer: if the sentiment score of the text input is greater than 0.5, it is considered positive. If we enter a text that is different from the data we trained the classifier on, the output could make no sense:

```
Please enter example input: i hate mondays
the sentiment is positive
```

Steps to improve the classifier from here:

1. Train on more data: We only used 5000 texts, which is only a fifth of the whole corpus. We can change our script easily to get more examples. We could even try to get data from different resources or scrape rating websites ourselves.

2. Train for more steps: Currently, our script stops either after 1600 training steps without finding a better "solution" on the validation data or after 20,000 steps in total. In our case, a step is a forward pass, making a prediction, and a backward pass, correcting the neural network so the error between the prediction and the label (the loss) gets smaller. We can increase the values [patience, max_steps, and max_epochs] and see if the optimizer can find better weights for our network later in the training.

3. Use pre-trained word vectors: By default, the training in spaCy uses a Tok2Vec layer. It uses features of the word, like its length, to generate a vector on the fly. The advantage is that it can handle previously unseen words and come up with a numerical representation for them.
The disadvantage is that its embedding does not represent the word's meaning. Pre-trained word vectors are numerical representations for each token that are derived from large amounts of text and try to embed the meaning of a word in a high-dimensional space. This can help to group semantically similar words together.

4. Use a transformer model: The transformer is a "newer" architecture that includes the context of a word in its embedding. SpaCy v3 now supports models like BERT, which can help to boost the performance even further.

5. Detect outliers in the input: We trained our network on movie reviews. That does not mean that the model can tell you if a cooking recipe is good or bad. We might want to check whether we should make a prediction on the input data at all, or return that the data is too different from the training data to make a meaningful prediction. Vincent D. Warmerdam gave some great talks about this matter, like "how to constrain artificial stupidity". I am also looking forward to the upcoming videos of Ines and Sofie, which will bring more insights into the way spaCy v3 can be used.
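The hard 0.5 cutoff in the prediction loop above, and the "refuse to answer on unfamiliar input" idea from point 5, can be sketched without spaCy at all, since doc.cats is just a dict of scores. The function name and the margin parameter below are mine, not part of the tutorial or of spaCy's API:

```python
def label_from_scores(cats, threshold=0.5, margin=0.0):
    """Turn a spaCy-style doc.cats dict into a sentiment label.

    cats: dict like {"positive": 0.87}, as produced by the textcat above.
    margin: scores within +/- margin of the threshold are reported as
            "unsure" instead of forcing a binary answer (a hypothetical
            extension toward the outlier handling described in point 5).
    """
    score = cats["positive"]
    if abs(score - threshold) <= margin:
        return "unsure"
    return "positive" if score > threshold else "negative"
```

With margin left at 0.0 this reproduces the tutorial's behaviour exactly; raising it gives the classifier a way to say "i hate mondays" is too close to call rather than confidently wrong.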
https://medium.com/analytics-vidhya/building-a-text-classifier-with-spacy-3-0-dd16e9979a
Matplotlib is a powerful library for plotting data. Data may be in the form of lists, or data from NumPy and Pandas may be used directly. Charts are very highly configurable, but here we'll focus on the key basics first. A useful resource for matplotlib is a gallery of charts with associated code.

Plotting a single line

The following code plots a simple line plot and saves the file in png format. Other formats may be used (e.g. jpg), but png offers an open source high quality format which is ideal for charts.

```python
import matplotlib.pyplot as plt
# The following line is only needed to display chart in Jupyter notebooks
%matplotlib inline

X = range(100)
Y = [value ** 2 for value in X]  # (A list comprehension)
plt.plot(X, Y)
plt.savefig('plot_01.png')  # optional save as png file, before plt.show()
plt.show()
```

A simple xy line chart.

Plotting two lines

To plot two lines we simply add another plot before generating the chart with plt.show():

```python
import numpy as np
import matplotlib.pyplot as plt
# The following line is only needed to display chart in Jupyter notebooks
%matplotlib inline

# np.linspace creates an array of equally spaced numbers
X = np.linspace(0, 2 * np.pi, 100)  # 100 points between 0 and 2*pi
Ya = np.sin(X)
Yb = np.cos(X)
plt.plot(X, Ya)
plt.plot(X, Yb)
plt.show()
```

A simple xy line chart with two lines.

Saving figures

To save a figure use plt.savefig('my_figname.png') before plt.show() to save in png format (best for figures, but you can also use jpg or tif).
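For readers who have not met np.linspace before: it simply produces evenly spaced values between two endpoints, inclusive. A pure-Python equivalent, shown here only for intuition (use NumPy's version in real code), looks like this:

```python
def linspace(start, stop, num):
    # num evenly spaced values from start to stop inclusive,
    # mirroring numpy.linspace's default behaviour
    if num == 1:
        return [start]
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]


print(linspace(0, 1, 5))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

So the X used in the two-line example above is just 100 such points between 0 and 2*pi, except returned as a NumPy array so that np.sin and np.cos can be applied to all points at once.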
https://pythonhealthcare.org/2018/04/10/36-matplotlib-simple-xy-line-charts-and-simple-save-to-file/
Default traits class for all conversions of value types.

#include <Teuchos_as.hpp>

This class should never be called directly by clients. Instead, use the as() and asSafe() template functions. This default traits class simply does an implicit type conversion. Therefore, any conversions that are built into the language and are safe do not need a traits class specialization and should not generate any compiler warnings. For example, the conversions float to double, short type to type, type to long type, and an enum value to int are all always value preserving and should never result in a compiler warning or any aberrant runtime behavior.

All other conversions that cause compiler warnings and/or could result in aberrant runtime behavior (e.g. type to and from unsigned type, to and from floating point and integral types, etc.), or that do not have compiler-defined conversions (e.g. std::string to int, double, etc.), should be given specializations of this class template. If an unsafe or unsupported conversion is requested by a client (i.e. through as() or asSafe()), then this default traits class will be instantiated and the compiler will either generate a warning message (if the conversion is supported but is unsafe) or will not compile the code (if the conversion is not supported by default in C++). When this happens, a specialization can be added, or the client code can be changed to avoid the conversion.

Definition at line 84 of file Teuchos_as.hpp.

Definition at line 86 of file Teuchos_as.hpp.

Definition at line 92 of file Teuchos_as.hpp.
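The mechanism described above (a primary template that just performs the built-in implicit conversion, plus explicit specializations for conversions the language does not define) can be sketched in a few lines. This is a simplified stand-in with names of my own choosing, not the actual Teuchos implementation:

```cpp
#include <cassert>
#include <string>

// Primary template: rely on the language's implicit conversion.
// Safe built-in conversions (float -> double, etc.) need no specialization
// and compile cleanly through this default.
template <class TypeTo, class TypeFrom>
struct ConversionTraits {
  static TypeTo convert(const TypeFrom& t) { return t; }
};

// Conversions with no compiler-defined meaning get a specialization;
// without one, code requesting them would simply fail to compile here.
template <class TypeTo>
struct ConversionTraits<TypeTo, std::string> {
  static TypeTo convert(const std::string& s) {
    return static_cast<TypeTo>(std::stod(s));
  }
};

// User-facing entry point, analogous in spirit to Teuchos::as().
template <class TypeTo, class TypeFrom>
TypeTo as(const TypeFrom& t) {
  return ConversionTraits<TypeTo, TypeFrom>::convert(t);
}
```

The real Teuchos template additionally distinguishes checked (asSafe) and unchecked conversions and ships many specializations; the sketch only shows how the "default is implicit conversion, everything else is a specialization" design hangs together.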
http://trilinos.sandia.gov/packages/docs/r10.8/packages/teuchos/browser/doc/html/classTeuchos_1_1ValueTypeConversionTraits.html
06 July 2012 15:55 [Source: ICIS news]

LONDON (ICIS)--The International Monetary Fund (IMF) expects to lower its growth forecast for the global economy as the outlook has worsened in past months, its managing director said on Friday.

“Over the past few months, the outlook has, regrettably, become more worrisome,” Christine Lagarde said in a speech at a policy forum.

“Many indicators of economic activity—investment, employment, manufacturing—have deteriorated. And not just in Europe or the

“The global growth outlook will be somewhat less than we anticipated just three months ago,” Lagarde said. “And even that lower projection will depend on the right policy actions being taken,” she added.

In her speech, Lagarde stressed that the crisis is not just a European concern. “This is a global crisis. In today’s interconnected world, we can no longer afford to look only at what goes on within our national borders. This crisis does not recognise borders,” she said.

At the same time, “further progress will continue to be needed to overcome the crisis decisively and avoid the damaging effects on stability and growth,” she said.
http://www.icis.com/Articles/2012/07/06/9576229/imf-to-cut-global-gdp-forecast-as-outlook-worsens.html
Caching is a very common operation when developing applications. Spring made a neat abstraction layer on top of the different caching providers (Ehcache, Caffeine, Guava, GemFire, ...). In this article I will demonstrate how the cache abstraction works, using Ehcache as the actual cache implementation.

Setting up the project

In this example I will be creating a simple REST service. So let's start by opening the Spring Initializr. I'll create a simple REST API, so let's add Web and Cache as dependencies. Now that we've done that, it's time to create a simple REST API.

Creating a dummy REST API

The first step is to create a DTO. I'm going to create a simple task REST API, so my DTO will look like this:

```java
public class TaskDTO {
    private Long id;
    private String task;
    private boolean completed;

    public TaskDTO(Long id, String task, boolean completed) {
        this.id = id;
        this.task = task;
        this.completed = completed;
    }

    public TaskDTO(String task, boolean completed) {
        this(null, task, completed);
    }

    public TaskDTO() {
    }

    public Long getId() {
        return id;
    }

    public String getTask() {
        return task;
    }

    public void setTask(String task) {
        this.task = task;
    }

    public boolean isCompleted() {
        return completed;
    }

    public void setCompleted(boolean completed) {
        this.completed = completed;
    }
}
```

Now the next step is to create a service, so this is what I did in my dummy service:

```java
@Service
public class TaskServiceImpl {
    private final Logger logger = LoggerFactory.getLogger(getClass());

    public List<TaskDTO> findAll() {
        logger.info("Retrieving tasks");
        return Arrays.asList(
            new TaskDTO(1L, "My first task", true),
            new TaskDTO(2L, "My second task", false));
    }
}
```

The logging is added just for demonstration, because it will allow us to see whether this method is actually called, or a cached version of the result is retrieved.
Finally, we have to add a controller:

```java
@RestController
@RequestMapping("/api/tasks")
public class TaskController {
    @Autowired
    private TaskServiceImpl service;

    @RequestMapping(method = RequestMethod.GET)
    public List<TaskDTO> findAll() {
        return service.findAll();
    }
}
```

Nothing too fancy here. Run the application and test it out by going to the tasks endpoint, which should show you the dummy tasks.

Configuring the cache

Now, before we start configuring anything, we have to add a dependency. Spring by itself only provides a caching abstraction. This means you still have to actually add a caching implementation to your classpath. Like I mentioned at the start of the article, I will be using Ehcache. So, add the following dependency to your Maven descriptor (pom.xml):

```xml
<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache-core</artifactId>
    <version>2.6.5</version>
</dependency>
```

Ehcache has a lot of possibilities. You can configure the cache size, time to live, time to idle, whether it should be a persistent cache, whether it should overflow to disk, ... . In this example I will just create a simple in-memory cache. Create a file called ehcache.xml in the src/main/resources folder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="ehcache.xsd">
    <cache name="tasks"
           maxElementsInMemory="100"
           eternal="false"
           overflowToDisk="false"
           timeToLiveSeconds="300"
           timeToIdleSeconds="0"
           memoryStoreEvictionPolicy="LFU"
           transactionalMode="off">
    </cache>
</ehcache>
```

As you can see, I create a <cache> with the name "tasks", with 100 items that can be stored in memory and a time to live of 5 minutes. The next step is to configure Spring Boot to use this configuration file by adding the following property to application.properties (or application.yml):

```
# application.properties
spring.cache.ehcache.config=classpath:ehcache.xml
```

```yaml
# application.yml
spring:
  cache:
    ehcache:
      config: classpath:ehcache.xml
```

The final step is to enable caching itself in Spring Boot with the @EnableCaching annotation.
Open the main class and add the annotation like this:

```java
@SpringBootApplication
@EnableCaching
public class SpringBootEhcacheApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpringBootEhcacheApplication.class, args);
    }
}
```

Setting up caching for your service

Setting up caching for a class is quite easy. Open the TaskServiceImpl and add the @Cacheable annotation to the methods you want to cache, for example:

```java
@Cacheable("tasks")
public List<TaskDTO> findAll() {
    logger.info("Retrieving tasks");
    return Arrays.asList(
        new TaskDTO(1L, "My first task", true),
        new TaskDTO(2L, "My second task", false));
}
```

Now, if you run the application again and refresh the tasks endpoint a few times, you'll see that the log entry within TaskServiceImpl only appears once. So indeed, caching is working fine now. There are a few other annotations you can use though, so let's see if we can test those as well!

Be aware: the Spring cache abstraction works by proxying your target class. This means that calls within the same service will not be cached. So if we had another method called findStuff() which was calling findAll(), the call would not be cached.

Conditional caching

Have you ever encountered a REST API that has a noCache parameter that allows you to retrieve the actual value rather than the cached version? Well, with Spring you can implement such behaviour as well. If we take a look at the documentation about the cache abstraction, we see there is the possibility to implement conditional caching.
So, if we modify our REST API a bit to include an optional @RequestParam called noCache, we could use this parameter to implement the conditional caching:

```java
@RequestMapping(method = RequestMethod.GET)
public List<TaskDTO> findAll(@RequestParam(required = false) boolean noCache) {
    return service.findAll(noCache);
}
```

In our service we now have to change the @Cacheable annotation a bit:

```java
@Cacheable(value = "tasks", condition = "!#noCache")
public List<TaskDTO> findAll(boolean noCache) {
    logger.info("Retrieving tasks");
    return Arrays.asList(
        new TaskDTO(1L, "My first task", true),
        new TaskDTO(2L, "My second task", false));
}
```

So, if we rerun the service now, we'll see that the caching still works like previously, but if you add ?noCache=true to the end of the URL and check the logs, you'll see it prints the log statement from the service each time. That means the conditional caching is working fine! But what if we still want to update the cache if the user provides noCache=true, but we just don't want to use the cache as the response? Well, then you could use the @CachePut annotation with the opposite condition of what we used before:

```java
@CachePut(value = "tasks", condition = "#noCache")
@Cacheable(value = "tasks", condition = "!#noCache")
public List<TaskDTO> findAll(boolean noCache) {
    logger.info("Retrieving tasks");
    return Arrays.asList(
        new TaskDTO(1L, "My first task", true),
        new TaskDTO(2L, "My second task", false));
}
```

However, this is not enough yet. Spring generates a key, by default using the hash code of all method arguments. In our case we want to use the same key, ignoring the state of noCache.
The easiest workaround to this problem is to set your own key based on the field, and for the other annotation use the exact opposite of that key, for example:

```java
@CachePut(value = "tasks", condition = "#noCache", key = "#noCache")
@Cacheable(value = "tasks", condition = "!#noCache", key = "!#noCache")
public List<TaskDTO> findAll(boolean noCache) {
    logger.info("Retrieving tasks");
    return Arrays.asList(
        new TaskDTO(1L, "My first task", true),
        new TaskDTO(2L, "My second task", false));
}
```

The result should be similar; however, even when you specify the noCache=true parameter, it will still store the result in the cache, and both will use the same key, namely the hash code of Boolean.TRUE.

Clearing the cache

The last annotation I will take a look at in this article is the @CacheEvict annotation. With this annotation you can clear the cache "on command", rather than have it expire. This can be useful in situations when someone updated something in the back-end and wants the value to be updated immediately. By evicting the cache, the next time someone makes a call there won't be a cache to load the value from. For a small cache with a TTL of 5 minutes this probably isn't as important, but if you choose to cache for several hours, it could be quite useful. So, let's create a separate operation to evict the cache in our controller:

```java
@RequestMapping(value = "/cache", method = RequestMethod.DELETE)
public void clearCache() {
    service.clearCache();
}
```

Now we just have to add an empty method to our service with the right annotation:

```java
@CacheEvict(value = "tasks", allEntries = true)
public void clearCache() {
    // Empty method, @CacheEvict annotation does everything
}
```

And there we have it. Make sure you add the allEntries property; without it you can still use @CacheEvict to remove one item from the cache. Now we can clear the cache by calling that endpoint using the DELETE method. You might need a REST client to test this.
I'm personally using Postman, but other REST clients such as DHC should also work. I'm often getting questions about what REST client I'm using, so here it is. Anyhow, test it out by first calling the tasks API once to fill the cache, then evicting it and calling the tasks API again. Normally you should see the log appear again, while before this you had to wait 5 minutes before the method would be invoked again due to the caching TTL. You probably want to secure this endpoint using Spring Security, but that's out of scope for this article. Anyway, now we've seen most of the caching annotations, so this is where I'll end the article.
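Conceptually, the wrapper that the @Cacheable/@CacheEvict proxy puts around a service method boils down to something like the following hand-rolled sketch in plain Java. The class and method names are mine, and real Spring adds key generation, SpEL conditions and concurrency handling on top:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hand-rolled sketch of what the caching proxy does around a method call:
// on a hit return the stored value; on a miss invoke the real method and
// remember the result until the TTL expires or the cache is evicted.
class TtlCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<K, Entry<V>> entries = new HashMap<>();
    private final long ttlMillis;

    TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Equivalent of calling a @Cacheable method:
    // the supplier stands in for the "real" method body.
    V getOrCompute(K key, Supplier<V> supplier) {
        Entry<V> e = entries.get(key);
        if (e == null || e.expiresAt < System.currentTimeMillis()) {
            e = new Entry<>(supplier.get(), System.currentTimeMillis() + ttlMillis);
            entries.put(key, e);
        }
        return e.value;
    }

    // Equivalent of @CacheEvict(allEntries = true).
    void evictAll() {
        entries.clear();
    }
}
```

This also makes the proxying caveat above concrete: the caching logic lives in the wrapper, so only calls that go through the wrapper (i.e. through the Spring proxy) ever hit the cache.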
http://g00glen00b.be/spring-boot-cache-ehcache/
>How wasteful is a SAX process that terminates after reading the root element node?

The two expensive parts are instantiating the XML parser and populating the stack trace in the exception object. Both costs can be cut to near zero with careful Java programming. Opening the file and reading the first bufferful of content could also be expensive if the file is accessed over a network.

>I lost interest ...
>I don't have much patience

I'm afraid those are psychological factors that I can't really take into account when I make design recommendations.

Happy Easter!

Michael Kay

_____
From: Todd Gochenour [mailto:todd.gochenour@...]
Sent: 11 April 2009 16:58
To: Mailing list for the SAXON XSLT and XQuery processor
Subject: Re: [saxon] Using both Saxon and Xalan

How wasteful is a SAX process that terminates after reading the root element node? It opens the file and reads the root element and its attributes and then closes the file. Is that an expensive operation? The namespace problem presented itself as "stylesheet requires attribute: version" when in fact the file had a version. It took a day of research to figure out that the factory needed a boolean flag set to true for this to work. Not at all intuitive. When I got to the I/O exception when passing the DOMSource to the transformer, I lost interest in the approach. It wasn't till after the SAX strategy was finished that I came to realize the base URI isn't set for a DOMSource like it is automatically with a StreamSource. I'm an XSL programmer more than a Java programmer. I don't have much patience dealing with object-oriented plumbing issues like this. Document-centric functional programming is so much easier for me.
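For what it's worth, the two plumbing issues from the quoted message, the namespace-awareness flag on the factory and the missing base URI on a DOMSource, can both be handled explicitly. A minimal sketch using only JDK classes (the helper names are mine):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.dom.DOMSource;
import org.w3c.dom.Document;

class DomSourceBaseUri {

    // Build a namespace-aware DOM; this is the boolean flag the poster
    // had to hunt for before the stylesheet's version attribute was seen.
    static Document newDocument() {
        try {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true);
            return factory.newDocumentBuilder().newDocument();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // A DOMSource, unlike a StreamSource built from a file, has no system
    // ID unless you set one, so relative URI resolution fails without this.
    static DOMSource withBaseUri(Document doc, String systemId) {
        DOMSource source = new DOMSource(doc);
        source.setSystemId(systemId);
        return source;
    }
}
```

With the system ID set, the transformer can resolve relative references (xsl:include, document(), etc.) from a DOMSource just as it would from a StreamSource.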
https://sourceforge.net/p/saxon/mailman/message/22084551/
CC-MAIN-2017-51
refinedweb
299
64.51
How do I go about sorting a 2-D string array in C using bubble sort (or any other kind of sorting for that matter)? What I'm actually trying to do is as follows:

Example:

Unsorted 2-D string array:
abfg
abcd
xyzw
pqrs
orde

Sorted 2-D string array:
abcd
abfg
orde
pqrs
xyzw

My current algorithm, which is not working (it gives me an incompatibility error), is as follows:

    #include <stdio.h>
    #include <string.h>

    int main()
    {
        char str[5][4];
        int i, j;
        char temp[4];

        for (i = 0; i < 5; i++) {
            scanf("%s", str[i]);
        }

        for (i = 0; i < 5 - 1; i++) {
            for (j = 0; j < 5 - 1; j++) {
                if (strcmp(str[j], str[j+1]) == -1) {
                    temp = str[j];
                    str[j] = str[j+1];
                    str[j+1] = temp;
                }
            }
        }

        for (i = 0; i < 5; i++)
            printf("%s ", str[i]);

        return 0;
    }

Arrays are not assignable as you're attempting in C. You need to set up some buffer swapping logic. For example:

    if (strcmp(str[j+1], str[j]) < 0) // note: fixed — strcmp returns any negative value, not necessarily -1
    {
        strcpy(temp, str[j]);
        strcpy(str[j], str[j+1]);
        strcpy(str[j+1], temp);
    }

Other issues with your code:

- Your rows need to be char[5] to hold four-char strings (including space for the terminator).
- scanf should use a field width such as "%3s", which would have hinted your dimensions were too short to begin with.

Anyway, all of that is related to your code, but not the question you posted. It is worth looking at regardless. Best of luck.
http://databasefaq.com/index.php/answer/161723/c-arrays-computer-science-bubble-sort-c-strings-how-do-i-output-a-bubble-sorted-2-d-string-array-in-c
CC-MAIN-2018-51
refinedweb
245
60.95
Notes on 'Monads Are Not Metaphors' This is a translation of 「モナドはメタファーではない」に関する補足 by Kenji Yoshida (@xuwei_k), one of the most active Scala bloggers in Japan, covering a wide range of topics from Play to Scalaz. Daniel Spiewak's Monads Are Not Metaphors was written about two and a half years ago, but seeing how its Japanese translation is still being tweeted and has been bookmarked by 250 users on Hatena, its popularity doesn't seem to be fading. I just remembered something to note about the example code used in the post, which could be an unstylish critique, but I'm going to jot it down here. It's an unstylish critique, because I'll be digging into a part where the author likely knew better from the beginning but omitted it intentionally for the sake of illustration. Also I'll be using Scalaz in this post. The example code that calculates fullName from firstName and lastName only requires Applicative1, not Monad Here's the original code def firstName(id: Int): Option[String] = ... // fetch from database def lastName(id: Int): Option[String] = ... def fullName(id: Int): Option[String] = { firstName(id) bind { fname => lastName(id) bind { lname => Some(fname + " " + lname) } } } We can rewrite this using Scalaz2 as follows: import scalaz._,Scalaz._ def firstName(id: Int): Option[String] = ??? def lastName(id: Int): Option[String] = ??? def fullName(id: Int): Option[String] = ^(firstName(id), lastName(id))(_ + " " + _) or def fullName(id: Int): Option[String] = Apply[Option].apply2(firstName(id), lastName(id))(_ + " " + _) To reiterate, the author likely knows this, since he's said something like this: Don’t use a monad when an applicative will do. — Daniel Spiewak (@djspiewak) December 31, 2012 Monad's sequence function in Scalaz First, in Scalaz 7 there's no function with the following signature: def sequence[M[_], A](ms: List[M[A]])(implicit tc: Monad[M]): M[List[A]] Instead it has the following under Applicative: def sequence[A, G[_]: Traverse](as: G[F[A]]): F[G[A]] =
To cut to the chase, the sequence function as described in 'Monads Are Not Metaphors' 3 only exists in a generalized form in Scalaz 7. By the way, in Haskell, there's no corresponding function under Applicative, but there's sequenceA under Traversable. Coming back to Scalaz, here's what I mean by a generalized form. def sequence[A, G[_]: Traverse](as: G[F[A]]): F[G[A]] = If we fix the type parameter G in the above to List, it becomes def sequence[A](as: List[F[A]]): F[List[A]] and the sequence function defined in Scalaz becomes the sequence described in 'Monads Are Not Metaphors.' It might appear as if (implicit tc: Monad[M]) disappeared, but this sequence is a method of Scalaz's Applicative, so F is automatically an instance of Applicative. This is a long-winded way to say that even the sequence function requires only Applicative, and not Monad. In Haskell, Monad directly defining sequence as sequence :: Monad m => [m a] -> m [a] and Traversable defining two similar functions sequenceA :: Applicative f => t (f a) -> f (t a) and sequence :: Monad m => t (m a) -> m (t a) are all artifacts of the historical accident that Monad does not inherit Applicative, if I may say so at the risk of over-simplification. To follow up again on the generalized form, the implementation of the sequence function as described in 'Monads Are Not Metaphors' uses List's foldRight 4. In other words, to generalize sequence, we only need a container that has an equivalent of foldRight, which is what instances of the Traverse typeclass provide 5. def sequence[A, G[_]: Traverse](as: G[F[A]]): F[G[A]] This is the reason Traverse appears in the above signature. Summary So it turns out that both the fullName function and Monad's sequence function only require Applicative, and not Monad. Either on purpose (for the sake of illustration) or by negligence, there are many other examples in Monad tutorials that fall into this "turns out Applicative would suffice instead of Monad" pattern.
So readers should keep their eyes open for Applicatives instead of parroting "Monad ( ゚∀゚)彡 Monad ( ゚∀゚)彡."
http://eed3si9n.com/notes-on-monads-are-not-metaphors
CC-MAIN-2017-51
refinedweb
692
56.39
So, I haven’t found a good answer to this, but I want to be able to have my Meteor.settings.public available on my deployed app. We are using meteor build, so AFAIK, the --settings settings.json flag isn’t available for that. I looked into exporting the Meteor.settings as part of my env_settings.sh file, as export METEOR_SETTINGS=$(cat settings.json), but I don’t see the Meteor.settings.public in the browser console. Any suggestions on how to achieve this for deployment? Thanks!

Github issue - If you’re on 1.3 you should try window.Meteor.settings.public in the console. Alternatively, in the code where you need this ability, use import {Meteor} from 'meteor/meteor'.

So when I console.log(Meteor.settings) in the browser console, I get Object {public: {}}

My bad. I didn’t dig far enough into the settings object. So as per this, METEOR_SETTINGS is only populated if using the meteor build command - i.e., for production. I have to try this, but it seems a little weird that I can’t export the settings to test in development before pushing to production. Hopefully this works okay!

So this works. Again, it just feels weird that I can’t export METEOR_SETTINGS in development.
https://forums.meteor.com/t/deployment-with-meteor-build-and-settings-json/20442
CC-MAIN-2022-33
refinedweb
228
70.6
What are UDFs

UDF is short for User Defined Function. MaxCompute provides many built-in functions to meet your computing demands, and you can also create UDFs for computing needs the built-ins do not cover. UDFs are used much like the general built-in functions. A Java UDF is a UDF implemented in Java. The general Java UDF workflow in the big data platform consists of the following four steps: (1) Write and debug the Java code in a local environment, then compile it into a JAR package; (2) Create a resource in DataWorks and upload the JAR package; (3) Create and register a function in DataWorks and associate it with the JAR resource; (4) Use the Java UDF. See the following figure:

Use case

Implement a UDF that converts letters to lower case.

Step 1: Coding

Follow the instructions in the MaxCompute UDF framework to write the Java code in your local Java environment (the UDF development plug-in must be added; for more information, see UDF), and compile it into a JAR package. The Java code for this example is as follows. The compiled JAR package is my_lower.jar.

package test_UDF;

import com.aliyun.odps.udf.UDF;

public class test_UDF extends UDF {
    public String evaluate(String s) {
        if (s == null) {
            return null;
        }
        return s.toLowerCase();
    }
}

Step 2: Add a JAR resource

Before running a UDF, you must specify which UDF code to reference, so you must upload the Java code you wrote to the big data platform as a resource. A Java UDF must be built into a JAR package and added to the platform as a JAR resource; the UDF framework automatically loads the JAR package to run the UDF. Follow these steps: Log on to the Alibaba Cloud DTplus console as a developer. Select DataWorks > Management console. Click Enter the work area in the Actions column of the corresponding project. Create a resource file: right-click a folder in the file directory tree and select Upload Resource to upload the resource.
Complete the configurations in the Upload resource dialog box, and click Submit. Once the submission succeeds, the resource has been created.

Step 3: Register UDFs

In the preceding steps we finished writing the Java UDF code and uploading the JAR resource, which lets the big data platform fetch and run the user code automatically. But the UDF is still unavailable in the big data platform at this point, because the platform has no information about it yet. You must therefore register a unique function name in the platform and map that name to a specific function in a specific resource. The steps are as follows: Create the function directory: switch to Manage Functions in the file directory tree and create a new directory. Right-click the directory folder and select New Function, or click New in the upper-right corner and select New Function. Enter values in the required fields of the Create MaxCompute function dialog box and click Submit. After a successful submission, the function has been created.

Step 4: Test the function in MaxCompute SQL

A Java UDF is used exactly like the platform's built-in functions. The steps are as follows: Click New in the upper-right corner and select New Script File to create a SQL script file. Write the SQL statement in the code editor. Sample code:

select my_lower('A') from dual;

Click the run button. So far, we have registered the Java UDF and run a local call test in SQL. If you require daily scheduling to perform lower-case conversion, create a new MaxCompute SQL node in the workflow and configure the scheduling properties of the workflow.
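Before uploading the JAR, the evaluate logic can be exercised locally without the ODPS SDK on the classpath. The class below is a stand-in that mirrors the example's evaluate() method but does not extend com.aliyun.odps.udf.UDF — it only checks the conversion logic, not how MaxCompute actually invokes the UDF:

```java
// Local stand-in for the test_UDF example above: same evaluate() logic,
// minus the com.aliyun.odps.udf.UDF base class, so it runs off-platform.
// (Hypothetical helper for local testing only, not part of MaxCompute.)
class LowerUdfLogic {
    public String evaluate(String s) {
        if (s == null) {
            return null; // MaxCompute NULL maps to Java null
        }
        return s.toLowerCase();
    }
}
```

Once the logic checks out locally (e.g. evaluate("A") returning "a" and null passing through unchanged), the same method body goes into the class that extends UDF and is packaged into my_lower.jar.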
https://www.alibabacloud.com/help/doc-detail/30270.htm
CC-MAIN-2018-17
refinedweb
623
55.34
14 July 2010 12:23 [Source: ICIS news]

LONDON (ICIS news)--Saudi Arabia’s Yanbu National Petrochemical Company (Yansab) has reported a net profit of Saudi riyal (SR) 502.4m ($134.0m) for the second quarter, its first full quarter of commercial operation, the company said on Wednesday.

The result compares with a net loss of SR6.7m in the second quarter of 2009 and a net profit of SR259.4m in the first quarter of 2010. Operating profits for the quarter were SR596.0m, it said.

The net results were 9% below consensus and 13% below its forecast, Credit Suisse said in a note to clients. “We had always maintained that the market was attributing optimistic margins for Yansab, without discounting adequately for the mixed feed nature of the crackers and that there was significant downside risk to our optimistic scenario,” Credit Suisse said. The first full quarter of plant operations provided visibility to the earnings potential of the company, it added. Operating profits for the quarter were 25% below its forecast.

Commercial operations commenced on 1 March 2010, Yansab said. The Yansab cracker started up in September 2009. Yansab is a joint-stock company 51% owned by SABIC. It operates a 1.3m tonne/year cracker in Yanbu.

($1 = SR3.75)

Read John Richardson and Malini Hariharan’s Asian Chemical Connections blog
http://www.icis.com/Articles/2010/07/14/9376327/yansabs-first-full-quarter-generates-net-profit-of-134m.html
CC-MAIN-2014-52
refinedweb
224
58.89
How to set timeZone?
philip andrew Jan 31, 2009 1:55 PM

Hi, My server is in the USA, my client users are in Hong Kong. How can I set the time zone for Hong Kong? I don't see any config setting for it... Philip

1. Re: How to set timeZone?
Stefano Travelli Jan 31, 2009 6:13 PM (in response to philip andrew)

Read the manual at chapter 16. All you have to do, probably, is to use <s:convertDateTime> instead of <f:convertDateTime>.

2. Re: How to set timeZone?
Stefano Travelli Jan 31, 2009 6:28 PM (in response to philip andrew)

I think you can set the default value in components.xml, if that was your question: <international:time-zone-selector

3. Re: How to set timeZone?
philip andrew Feb 1, 2009 6:12 AM (in response to philip andrew)

Thanks, I'll try out your suggestions.

4. Re: How to set timeZone?
Tathagat Tathagat Feb 3, 2009 2:54 PM (in response to philip andrew)

Hey. I am exactly where you are. Did you manage to get the client time? Cheers

5. Re: How to set timeZone?
Lawrence Li Feb 4, 2009 2:52 AM (in response to philip andrew)

Are you thinking about somehow dynamically retrieving a user's time zone? For example, one client may be located in Hong Kong, another may be in London, another may be in Los Angeles. I don't think there's anything that can dynamically do that (at least in Seam). The user would have to use a timezone picker as suggested by another forum member in this thread. It would be nice if we could have a time zone component that automatically retrieves and stores the client's time zone without having the user go through a time zone picker. I'm sure it could be done...

6. Re: How to set timeZone?
philip andrew Feb 4, 2009 10:08 AM (in response to philip andrew)

Hi Tathagat, you mean you're in Hong Kong, or you mean you have the same problem?!
:) I created a Seam class:

@Name("orsamgr")
@Scope(ScopeType.APPLICATION)
public class OrsaMgr {
    public TimeZone getTimeZone() {
        return TimeZone.getTimeZone("Hongkong");
    }
}

So I use this class in my pages. Sadly I have just deleted my pages due to having to start again, so I'll tell you how I did it shortly. Thanks, Philip
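The idea behind Philip's component can be shown in plain Java, independent of Seam: the server keeps instants (e.g. in UTC) and only formatting applies the user's zone. Note that "Hongkong" used in the thread is a legacy zone-ID alias; "Asia/Hong_Kong" is the canonical form. A minimal sketch (the class and method names here are made up for illustration):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Hypothetical helper showing the zone-conversion idea from the thread:
// store instants once, apply the user's time zone only when formatting.
class HongKongTime {
    static String format(Date instant, String zoneId) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm");
        fmt.setTimeZone(TimeZone.getTimeZone(zoneId));
        return fmt.format(instant);
    }
}
```

Formatting the Unix epoch (new Date(0L)) with "Asia/Hong_Kong" yields a time eight hours ahead of the same instant formatted with "UTC", which is the whole trick: one stored instant, per-user display.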
https://developer.jboss.org/thread/186096
CC-MAIN-2018-17
refinedweb
397
75.4
package org.apache.http.impl.conn.tsccm;

/**
 * A simple class that can interrupt a {@link WaitingThread}.
 *
 * Must be called with the pool lock held.
 *
 * @since 4.0
 *
 * @deprecated (4.2) do not use
 */
@Deprecated
public class WaitingThreadAborter {

    private WaitingThread waitingThread;
    private boolean aborted;

    /**
     * If a waiting thread has been set, interrupts it.
     */
    public void abort() {
        aborted = true;

        if (waitingThread != null) {
            waitingThread.interrupt();
        }

    }

    /**
     * Sets the waiting thread.  If this has already been aborted,
     * the waiting thread is immediately interrupted.
     *
     * @param waitingThread The thread to interrupt when aborting.
     */
    public void setWaitingThread(final WaitingThread waitingThread) {
        this.waitingThread = waitingThread;
        if (aborted) {
            waitingThread.interrupt();
        }
    }

}
http://hc.apache.org/httpcomponents-client-dev/httpclient/xref/org/apache/http/impl/conn/tsccm/WaitingThreadAborter.html
CC-MAIN-2015-18
refinedweb
136
61.43
=head1 NAME libev - a high performance full-featured event loop written in C =head1 SYNOPSIS #include <ev.h> =head1 DESCRIPTION Libev is an event loop: you register interest in certain events (such as a file descriptor being readable or a timeout occurring), and it will manage these event sources and provide your program with events. To do this, it must take more or less complete control over your process (or thread) by executing the I<event loop> handler, and will then communicate events via a callback mechanism. You register interest in certain events by registering so-called I<event watchers>, which are relatively small C structures you initialise with the details of the event, and then hand it over to libev by I<starting> the watcher. =head1 FEATURES (See the benchmark comparing it to libevent, for example.) =head1 CONVENTIONS Libev is very configurable. In this manual the default configuration will be described, which supports multiple event loops. For more info about various configuration options please have a look at the file F<README.embed> in the libev distribution. If libev was configured without support for multiple event loops, then all functions taking an initial argument of name C<loop> (which is always of type C<struct ev_loop *>) will not have this argument. =head1 TIME AND OTHER GLOBAL FUNCTIONS Libev represents time as a single floating point number (C<ev_tstamp>), which usually aliases to the double type in C. =over 4 =item ev_tstamp ev_time () Returns the current time as libev would use it. =item int ev_version_major () =item int ev_version_minor () You can find out the major and minor version numbers of the library you linked against by calling the functions C<ev_version_major> and C<ev_version_minor>. If you want, you can compare against the global symbols C<EV_VERSION_MAJOR> and C<EV_VERSION_MINOR>, which specify the version of the library your program was compiled against. Usually, it's a good idea to terminate if the major versions mismatch, as this indicates an incompatible change.
Minor versions are usually compatible to older versions, so a larger minor version alone is usually not a problem. =item ev_set_allocator (void *(*cb)(void *ptr, long size)) Sets the allocation function to use (the prototype is similar to the realloc C function, the semantics are identical). It is used to allocate and free memory (no surprises here). If it returns zero when memory needs to be allocated, the library might abort or take some potentially destructive action. The default is your system realloc function. You could override this function in high-availability programs to, say, free some memory if it cannot allocate memory, to use a special allocator, or even to sleep a while and retry until some memory is available. =item ev_set_syserr_cb (void (*cb)(const char *msg)); Set the callback function to call on a retryable syscall error (such as failed select, poll, epoll_wait). The message is a printable string indicating the system call or subsystem causing the problem. If this callback is set, then libev will expect it to remedy the situation, no matter what, when it returns. That is, libev will generally retry the requested operation, or, if the condition doesn't go away, do bad stuff (such as abort). =back =head1 FUNCTIONS CONTROLLING THE EVENT LOOP An event loop is described by a C<struct ev_loop *>. The library knows two types of such loops, the I<default> loop, which supports signals and child events, and dynamically created loops which do not. If you use threads, a common model is to run the default event loop in your main thread (or in a separate thread) and for each thread you create, you also create another event loop. Libev itself does no locking whatsoever, so if you mix calls to the same event loop in different threads, make sure you lock (this is usually a bad idea, though, even if done correctly, because it's hideous and inefficient).
=item struct ev_loop *ev_default_loop (unsigned int flags) This will initialise the default event loop if it hasn't been initialised yet and return it. If the default loop could not be initialised, returns false. If it already was initialised it simply returns it (and ignores the flags). If you don't know what event loop to use, use the one returned from this function. The flags argument can be used to specify special behaviour or specific backends to use, and is usually specified as 0 (or EVFLAG_AUTO). It supports the following flags: =item C<EVFLAG_AUTO> The default flags value. Use this if you have no clue (it's the right thing, believe me). =item C<EVFLAG_NOENV> If this flag bit is ored into the flag value (or the program runs setuid or setgid) then libev will I<not> look at the environment variable C<LIBEV_FLAGS>. Otherwise (the default), this environment variable will override the flags completely if it is found in the environment. This is useful to try out specific backends to test their performance, or to work around bugs. =item C<EVMETHOD_SELECT> (portable select backend) =item C<EVMETHOD_POLL> (poll backend, available everywhere except on windows) =item C<EVMETHOD_EPOLL> (linux only) =item C<EVMETHOD_KQUEUE> (some bsds only) =item C<EVMETHOD_DEVPOLL> (solaris 8 only) =item C<EVMETHOD_PORT> (solaris 10 only) If one or more of these are ored into the flags value, then only these backends will be tried (in the reverse order as given here). If none are specified, any backend will do. =item struct ev_loop *ev_loop_new (unsigned int flags) Similar to C<ev_default_loop>, but always creates a new event loop that is always distinct from the default loop. Unlike the default loop, it cannot handle signal and child watchers, and attempts to do so will be greeted by undefined behaviour (or a failed assertion if assertions are enabled). =item ev_default_destroy () Destroys the default loop again (frees all memory and kernel state etc.).
This stops all registered event watchers (by not touching them in any way whatsoever, although you cannot rely on this :). =item ev_default_fork () Call this function after forking if and only if you want to use the event library in both processes. If you just fork+exec, you don't have to call it. The function itself is quite fast and it's usually not a problem to call it just in case after a fork. To make this easy, the function will fit in quite nicely into a call to C<pthread_atfork>: pthread_atfork (0, 0, ev_default_fork); =item ev_loop_fork (loop) Like C<ev_default_fork>, but acts on an event loop created by C<ev_loop_new>. Yes, you have to call this on every allocated event loop after fork, and how you do this is entirely your own problem. =item unsigned int ev_method (loop) Returns one of the C<EVMETHOD_*> flags indicating the event backend in use. =item ev_tstamp ev_now (loop) Returns the current "event loop time", which is the time the event loop got events and started processing them. This timestamp does not change as long as callbacks are being processed, and this is also the base time used for relative timers. You can treat it as the timestamp of the event occurring (or more correctly, the mainloop finding out about it). =item ev_loop (loop, int flags) Finally, this is it, the event handler. This function usually is called after you initialised all your watchers and you want to start handling events. If the flags argument is specified as 0, it will not return until either no event watchers are active anymore or C<ev_unloop> was called. A flags value of C<EVLOOP_NONBLOCK> will look for new events, will handle those events and any outstanding ones, but will not block your process in case there are no events and will return after one iteration of the loop. A flags value of C<EVLOOP_ONESHOT> will look for new events (waiting if necessary) and will handle those and any outstanding ones. It will block your process until at least one new event arrives, and will return after one iteration of the loop.
This flags value could be used to implement alternative looping constructs, but the C<prepare> and C<check> watchers provide a better and more generic mechanism. =item ev_unloop (loop, how)>. =head1 ANATOMY OF A WATCHER A watcher is a structure that you create and register to record your interest in some event. For instance, if you want to wait for STDIN to become readable, you would create an C<ev_io> watcher for that: static void my_cb (struct ev_loop *loop, struct ev_io *w, int revents) { ev_io_stop (w); ev_unloop (loop, EVUNLOOP_ALL); } struct ev_loop *loop = ev_default_loop (0); struct ev_io stdin_watcher; ev_init (&stdin_watcher, my_cb); ev_io_set (&stdin_watcher, STDIN_FILENO, EV_READ); ev_io_start (loop, &stdin_watcher); ev_loop (loop, 0); As you can see, you are responsible for allocating the memory for your watcher structures. Each watcher must be initialised by a call to C<ev_init (watcher *, callback)> and then configured with the C<< ev_<type>_set (watcher *, ...) >> macro specific to this watcher type. There is also a macro to combine initialisation and setting in one call: C<< ev_<type>_init (watcher *, callback, ...) >>. To make the watcher actually watch out for events, you have to start it with a watcher-specific start function (C<< ev_<type>_start (loop, watcher *) >>), and you can stop watching for events at any time by calling the corresponding stop function (C<< ev_<type>_stop (loop, watcher *) >>. As long as your watcher is active (has been started but not stopped) you must not touch the values stored in it. Most specifically you must never reinitialise it or call its set method. You can check whether an event is active by calling the C<ev_is_active (watcher *)> macro. To see whether an event is outstanding (but the callback for it has not been called yet) you can use the C<ev_is_pending (watcher *)> macro. Each and every callback receives the event loop pointer as first, the registered watcher structure as second, and a bitset of received events as third argument. The received events usually include a single bit per event type received (you can receive multiple events at the same time).
The possible bit masks are: =item C<EV_READ> =item C<EV_WRITE> The file descriptor in the C<ev_io> watcher has become readable and/or writable. =item C<EV_TIMEOUT> The C<ev_timer> watcher has timed out. =item C<EV_PERIODIC> The C<ev_periodic> watcher has timed out. =item C<EV_SIGNAL> The signal specified in the C<ev_signal> watcher has been received by a thread. =item C<EV_CHILD> The pid specified in the C<ev_child> watcher has received a status change. =item C<EV_IDLE> The C<ev_idle> watcher has determined that you have nothing better to do. =item C<EV_PREPARE> =item C<EV_CHECK> All C<ev_prepare> watchers are invoked just I<before> the event loop blocks, and all C<ev_check> watchers just after it wakes up again. =item C<EV_ERROR> An unspecified error has occurred, the watcher has been stopped. This might happen because the watcher could not be properly started because libev ran out of memory, a file descriptor was found to be closed or any other problem. You best act on it by reporting the problem and somehow coping with the watcher being stopped. multithreaded programs, though, so beware. =head2 ASSOCIATING CUSTOM DATA WITH A WATCHER Each watcher has, by default, a member C<void *data> that you can change and read at any time, libev will completely ignore it. This can be used to associate arbitrary data with your watcher. If you need more data and don't want to allocate memory and store a pointer to it in that data member, you can also "subclass" the watcher type and provide your own data: struct my_io ... (the details of casting your callback type have been omitted) .... =head1 WATCHER TYPES This section describes each watcher in detail, but will not repeat information given in the last section. =head2 C<ev_io> - is this file descriptor readable or writable I/O watchers check whether a file descriptor is readable or writable in each iteration of the event loop (This behaviour is called level-triggering because you keep receiving events as long as the condition persists.
Remember you can stop the watcher if you don't want to act on the event and neither want to receive future events). In general you can register as many read and/or write event watchers per fd as you want (as long as you don't confuse yourself). Setting all file descriptors to non-blocking mode is also usually a good idea (but not required if you know what you are doing). You have to be careful with dup'ed file descriptors, though. Some backends (the linux epoll backend is a notable example) cannot handle dup'ed file descriptors correctly if you register interest in two or more fds pointing to the same file/socket etc. description. If you must do this, then force the use of a known-to-be-good backend (at the time of this writing, this includes only EVMETHOD_SELECT and EVMETHOD_POLL). =item ev_io_init (ev_io *, callback, int fd, int events) =item ev_io_set (ev_io *, int fd, int events) Configures an C<ev_io> watcher. The fd is the file descriptor to receive events for and events is either C<EV_READ>, C<EV_WRITE> or C<EV_READ | EV_WRITE> to receive the given events. =head2 C<ev_timer> - relative and optionally recurring timeouts Timer watchers are simple relative timers: if you register an event that times out after an hour and then reset your system clock to last year's time, it will still time out after (roughly) an hour. "Roughly" because detecting time jumps is hard, and some inaccuracies are unavoidable (the monotonic clock option helps a lot here). The relative timeouts are calculated relative to the C<ev_now ()> time. This is usually the right thing as this timestamp refers to the time of the event triggering whatever timeout you are modifying/starting. If you suspect event processing to be delayed and you *need* to base the timeout on the current time, use something like this to adjust for this: ev_timer_set (&timer, after + ev_now () - ev_time (), 0.); If it is positive, then the timer will automatically be configured to trigger again C<repeat> seconds later timer will not fire more than once per event loop iteration. value.
This sounds a bit complicated, but here is a useful and typical example: Imagine you have a tcp connection and you want a so-called idle timeout, that is, you want to be called when there have been, say, 60 seconds of inactivity on the socket. The easiest way to do this is to configure an C<ev_timer> with after=repeat=60 and call ev_timer_again each time you successfully read or write some data. If you go into an idle state where you do not expect data to travel on the socket, you can stop the timer, and again will automatically restart it if need be. =head2 C<ev_periodic> - to cron or not to cron it Periodic watchers are also timers of a kind, but they are very versatile (and unfortunately a bit complex). Unlike C take a year to trigger the event (unlike an C<ev_timer>, which would trigger roughly 10 seconds later and of course not if you reset your system time again). They can also be used to implement vastly more complex timers, such as triggering an event on each midnight, local time. the callback will be called when the system time shows a full hour (UTC), or more correctly, when the system time is evenly divisible by 3600. Another way to think about it (for the mathematically inclined) is that C<ev_periodic> will try to run the callback in this mode at the next possible time where C<time = at (mod interval)>, regardless of any time jumps. =item * manual reschedule mode (reschedule_cb = callback) In this mode the values for C<interval> and C<at> are both modifications>. If you need to stop it, return 1e30 (or so, fudge fudge) and stop it afterwards. Its prototype is C<ev_tstamp (*reschedule_cb)(struct ev_periodic *w, ev_tstamp now)>, e.g.: static ev_tstamp my_rescheduler (struct.). =item ev_periodic_again (loop, ev_periodic *)). =head2 C<ev_signal> - signal me when a signal gets signalled Signal watchers will trigger an event when the process receives a specific signal one or more times.
Even though signals are very asynchronous, libev will try its best to deliver signals synchronously, i.e. as part of the normal event processing, like any other event. You can configure as many watchers as you like per signal. Only when the first watcher gets started will libev actually register a signal watcher with the kernel (thus it coexists with your own signal handlers as long as you don't register any with libev). Similarly, when the last signal watcher for a signal is stopped libev will reset the signal handler to SIG_DFL (regardless of what it was set to before). =item ev_signal_init (ev_signal *, callback, int signum) =item ev_signal_set (ev_signal *, int signum) Configures the watcher to trigger on the given signal number (usually one of the C<SIGxxx> constants). =head2 C<ev_child> - wait for pid status changes Child watchers trigger when your process receives a SIGCHLD in response to some child status changes (most typically when a child of yours dies). =item ev_child_init (ev_child *, callback, int pid) =item ev_child_set (ev_child *, int pid) Configures the watcher to wait for status changes of process C<pid> (or I<any> process if C<pid> is specified as C<0>). The callback can look at the C<rstatus> member of the C<ev_child> watcher structure to see the status word (use the macros from C<sys/wait.h>). The C<rpid> member contains the pid of the process causing the status change. =head2 C<ev_idle> - when you've got nothing better to do Idle watchers trigger events when there are no other I/O or timer (or periodic) events pending. That is, as long as your process is busy handling sockets or timeouts it will not be called. But when your process is idle all idle watchers are being called again and again - until stopped, that is, or your process receives more events. =item ev_idle_init (ev_idle *, callback) Initialises and configures the idle watcher - it has no parameters of any kind.
There is a C<ev_idle_set> macro, but using it is utterly pointless, believe me.

=head2 prepare and check - your hooks into the event loop

Prepare and check watchers usually (but not always) are used in tandem: prepare watchers get invoked before the process blocks and check watchers afterwards.

Their main purpose is to integrate other event mechanisms into libev. This could be used, for example, to track variable changes, implement your own watchers, integrate net-snmp or a coroutine library and lots more. This is done by examining in each prepare call which file descriptors need to be watched by the other library, registering C<ev_io> watchers for them, and starting an C<ev_timer> watcher for any timeouts.

As another example, the perl Coro module uses these hooks to integrate coroutines into libev programs, by yielding to other active coroutines during each prepare and only letting the process block if no coroutines are ready to run.

=item ev_prepare_init (ev_prepare *, callback)

=item ev_check_init (ev_check *, callback)

Initialises and configures the prepare or check watcher - they have no parameters of any kind. There are C<ev_prepare_set> and C<ev_check_set> macros, but using them is utterly, utterly pointless.

=head1 OTHER FUNCTIONS

There are some other functions of possible interest. Described. Here. Now.

=item ev_once (loop, int fd, int events, ev_tstamp timeout, callback)

This function combines a simple timer and an I/O watcher, calls your callback on whichever event happens first, and automatically stops both watchers - useful when you want to wait for a single event without having to allocate, configure, start, stop and free one or more watchers yourself. If C<fd> is less than 0, then no I/O watcher will be started and C<events> is being ignored. The callback has the type C<void (*cb)(int revents, void *arg)> and gets passed an events set (normally a combination of C<EV_READ>, C<EV_WRITE>, C<EV_TIMEOUT> or C<EV_ERROR>) together with the C<arg> value, e.g.:

    ev_once (loop, STDIN_FILENO, EV_READ, 10., stdin_ready, 0);

=item ev_feed_event (loop, watcher, int events)

Feeds the given event set into the event loop, as if the specified event had happened for the specified watcher (which must be a pointer to an initialised but not necessarily active event watcher).

=item ev_feed_fd_event (loop, int fd, int revents)

Feed an event on the given fd, as if a file descriptor backend detected it.

=item ev_feed_signal_event (loop, int signum)

Feed an event as if the given signal occurred (loop must be the default loop!).
=head1 AUTHOR

Marc Lehmann <libev@schmorp.de>.
https://git.lighttpd.net/mirrors/libev/src/commit/d199f05aa70cfdca8d99870a7c1edfdac672d24f/ev.pod
Assertion in response in Rest Assured?

We can use Assertions on a response in Rest Assured. To obtain the Response we need to use the methods Response.body or Response.getBody. Both these methods are a part of the Response interface. Once a Response is obtained, it is converted to a string with the help of the asString method. This method is a part of the ResponseBody interface.

We can then obtain the JSON representation of the Response body with the help of the jsonPath method. Finally, we shall verify the JSON content to explore a particular JSON key with its value.

We shall first send a GET request via Postman on a mock API URL and go through the Response body. Using Rest Assured, we shall check if the value of the key Location is Michigan.

Example Code Implementation

import org.testng.Assert;
import org.testng.annotations.Test;
import static io.restassured.RestAssured.*;
import io.restassured.RestAssured;
import io.restassured.path.json.JsonPath;
import io.restassured.response.Response;
import io.restassured.response.ResponseBody;
import io.restassured.specification.RequestSpecification;

public class NewTest {
   @Test
   void respAssertion() {
      // base URI with Rest Assured class
      RestAssured.baseURI = "";
      // input details
      RequestSpecification h = RestAssured.given();
      // get response
      Response r = h.get("/0cb0e329-3dc8-4976-a14b-5e5e80e3db92");
      // Response body
      ResponseBody bdy = r.getBody();
      // convert response body to string
      String b = bdy.asString();
      // JSON representation from Response body
      JsonPath j = r.jsonPath();
      // get value of Location key
      String l = j.get("Location");
      System.out.println(l);
      // verify the value of the key
      Assert.assertTrue(l.equalsIgnoreCase("Michigan"));
   }
}

Output -
https://www.tutorialspoint.com/how-to-use-assertion-in-response-in-rest-assured
I'm having a problem with performance of the program "dot" (for rendering graphs) when called from a python cgi script, served by an apache virtual host. I'm writing this email to both the apache httpd list as well as the graphviz users list (hoping some intersection might be useful). I'm using OSX 10.5.6 and graphviz 2.22 (built via macports). When I run a simple python script that calls dot (via a system call) to render a small graph, it takes about .04 seconds, which is normal. But when I run the same call via CGI script it takes roughly 100 times longer (4 seconds). I've tried graphs of various sizes, ranging from very tiny graphs (4 nodes) to much larger graphs. The 4 seconds figure for the CGI script doesn't change much except for very large graphs, so the time spent in the CGI script is probably not coming from the actual rendering itself. Not all system calls in my CGI scripts take longer than their "regular, called at the prompt" versions (e.g. if I do a simple 'pwd' it is comparable in time when called in the CGI script versus the non-served script, and many others perform fine). The problem just seems to be with the graphviz dot executable. Below are the two test scripts that, when run on my machine, illustrate the problem. Any help (from either community) would be great. Thanks!
Dan

Here's the python script: run as "python test.py"

------test.py-----------------
import os, time

T = time.time()
os.system('/opt/local/bin/dot -Tsvg -o test.svg graph.dot')
print time.time() - T
# output here is 0.04 seconds

T = time.time()
os.system('pwd > testout.txt')
print time.time() - T
# output here is 0.003 seconds

Here's the CGI script: run by navigating to the proper URL for the virtual host on my local machine

------test.cgi----------------
#!/Library/Frameworks/Python.framework/Versions/Current/bin/python
import time
import cgi, os

print 'Content-Type: text/html; charset=utf-8\n\n'

os.chdir('../../Temp')

T = time.time()
os.system('/opt/local/bin/dot -Tsvg -o test.svg graph.dot')
print time.time() - T
# output here is like 4 seconds

T = time.time()
os.system('pwd > testout.txt')
print time.time() - T
# output here is 0.003 seconds
http://mail-archives.apache.org/mod_mbox/httpd-users/200904.mbox/%3C15e4667e0904250923k250bc6b3h5a81412f8ff20d6a@mail.gmail.com%3E
Suppose you have a set of a few lines of code which is used and executed at multiple places. Instead of writing these lines again and again at multiple places, they are written at a single point and grouped into a function. Wherever they are required, this function is called. Thus, a function is a group of lines of code which can be used to perform a task and called from different places.

Benefits of a function

- A function promotes reusability as multiple repetitive lines are written only once and can be used where required.
- It also promotes modularity as dividing the code into pieces converts it into smaller independent modules.
- It makes the code cleaner, organized and easier to understand, since writing all the code at one place makes it very difficult to understand and modify.
- It makes maintenance easier. Consider that when the same logic is repeated at many places and there is a slight change in that logic, then that change needs to be made at all those places. If that logic is grouped into a function, then the change only needs to be made at a single place.
- It makes testing of code easier, since you do not need to test multiple repeating lines again and again if they are grouped into a single function.

Creating a function

A function is created using the def keyword followed by the name of the function and the arguments of the function enclosed between parentheses. These arguments are optional and are required only if you want to pass some values to the function, though the parentheses after the function name are mandatory. If there are no arguments to the function, the parentheses will be empty.

Syntax:

def <function_name>():
    # no-argument function
    # statement one
    # statement two
    # other statements

def <function_name>(arg1, arg2...):
    # function with arguments
    # statement one
    # statement two
    # other statements

There can be any number of arguments supplied to a function.
Statements of a function should be indented at the same level to denote that they belong to that function. A function may or may not return a value. If a function needs to send a value back to the calling code, it is done by using a return statement with the value to be returned written after return. Remember that when return is encountered, function execution is stopped and the function returns immediately. Any code written after return will not be executed. return can also be used without a value. In that case, it is just used to terminate function execution.

Calling a function

A function is called by using its name followed by parentheses. If the function expects any arguments, then those arguments should be provided between the parentheses, separated by commas. A function can be called only after it has been defined, else you get an Unresolved reference error. The below example defines a function which accepts two numbers as arguments and prints their sum.

# prints the sum of two numbers
def sum(first_number, second_number):
    sum = first_number + second_number
    print("Sum of numbers is " + str(sum))

# calling function
sum(2, 3)

Remember that values of arguments when calling a function are passed in the same order in which they are given. In the above example, first_number will be 2 and second_number will be 3. The number of arguments which the function expects and the number which you pass while calling the function should match, else you get an error. Example,

# prints the sum of two numbers
def sum(first_number, second_number):
    sum = first_number + second_number
    print("Sum of numbers is " + str(sum))

# calling function
sum(2)  # only one argument

The function is expecting two arguments but we are calling it with a single argument.
When this code is executed, you get an error

TypeError: sum() missing 1 required positional argument: 'second_number'

Returning values from function

A function becomes more useful if it performs some task and produces a result, and it is also capable of returning that result back to the calling code. A function can return a value using the return keyword followed by the value to return. Example,

def sum(first_number, second_number):
    sum = first_number + second_number
    return sum

# calling function
result = sum(2, 3)  # assigning the return value to a variable
print("Sum of numbers is " + str(result))

The value returned from a function should be assigned to a variable in the calling code, or the returned value can also be used directly. That is, the below line is also valid.

print("Sum of numbers is " + str(sum(2, 3)))

Also, remember that any statements written after a return statement in the same execution path will never run.

Default function arguments

As explained earlier, if a function is defined with arguments then an equal number of arguments needs to be supplied while calling this function. However, python gives you flexibility around this rule through the use of default arguments. By using default arguments, you can provide the value of an argument while defining it in the function. If a function argument is provided a default value, then there is no need to supply its value at the time of calling the function. Example,

def defaultArg(a, b=5):
    print(b)

defaultArg(2)      # prints 5
defaultArg(2, 10)  # prints 10, overrides default value

Look at the above code: a function accepting two arguments is defined, out of which one argument has a default value. When this function is called with a single argument, then the default value of the second argument is used. Note that while calling a function, if you supply a value for an argument which also has a default value, then the value supplied while calling the function overrides the default value.
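As mentioned earlier, return can also be used without any value, purely to terminate function execution. A small sketch (the function and values here are made up for illustration):

```python
# A bare "return" ends the function immediately; the caller then
# receives None, since no value was supplied after return.
def describe(number):
    if number < 0:
        return            # terminate early; nothing after this runs
    return "non-negative"

print(describe(-5))   # None  (the bare return was hit)
print(describe(10))   # non-negative
```

This is a common way to exit early from a function when its inputs make the rest of the work unnecessary.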
https://codippa.com/tutorials/python/functions/
#include <OSchemaObject.h> An OSchemaObject is an object with a single schema. This is just a convenience class, really, but it also deals with setting up and validating metadata Definition at line 53 of file OSchemaObject.h. Definition at line 59 of file OSchemaObject.h. Definition at line 60 of file OSchemaObject.h. The default constructor creates an empty OSchemaObject function set. ... Definition at line 118 of file OSchemaObject.h. The primary constructor creates an OSchemaObject as a child of the first argument, which is any Abc or AbcCoreAbstract (or other) object which can be intrusively cast to an ObjectWriterPtr. Definition at line 169 of file OSchemaObject.h. The unspecified-bool-type operator casts the object to "true" if it is valid, and "false" otherwise. Schemas are not necessarily cheap to copy, so we return by reference rather than by value. Definition at line 139 of file OSchemaObject.h. Definition at line 140 of file OSchemaObject.h. Our schema title contains the schema title of the underlying compound property, along with the default name of that compound property. So, for example - most AbcGeom types put their data in ".geom", so, "AbcGeom_PolyMesh_v1:.geom" Sometimes schema titles from underlying schemas are "", but ours never are. Definition at line 68 of file OSchemaObject.h. Definition at line 74 of file OSchemaObject.h. This will check whether or not a given entity (as represented by a metadata) strictly matches the interpretation of this schema object Definition at line 82 of file OSchemaObject.h. This will check whether or not a given object (as represented by an object header) strictly matches the interpretation of this schema object, as well as the data type. Definition at line 105 of file OSchemaObject.h. Reset returns this function set to an empty, default state. Definition at line 144 of file OSchemaObject.h. Valid returns whether this function set is valid. Definition at line 148 of file OSchemaObject.h. 
Definition at line 158 of file OSchemaObject.h.
http://www.sidefx.com/docs/hdk/class_alembic_1_1_abc_1_1_a_l_e_m_b_i_c___v_e_r_s_i_o_n___n_s_1_1_o_schema_object.html
ColdFusion 10 - Invoking ColdFusion Closures From Within A Java Context

I don't know too much about Java programming. But, ColdFusion 10 now allows us to load additional Java class files on a per-application basis. Instead of always having to put JAR files into the core ColdFusion lib folder, we can now keep our application-specific JAR files alongside our application-specific ColdFusion code. This is pretty awesome; but, when it comes to communication between ColdFusion objects and Java objects, things can get pretty tricky. And what about objects like the new ColdFusion 10 closures? How do these non-components work within a Java context?

NOTE: At the time of this writing, ColdFusion 10 was in public beta.

Before we look at any Closure stuff, let's first look at the new per-application Java integration. In the Application.cfc ColdFusion framework file, we can now specify directories that contain our application-specific JAR and CLASS files. We can also tell ColdFusion to monitor those directories for changes (ie. new files or compiling) and whether or not to load new Java classes when they come into being.

Before we look at the Application.cfc settings, let me just explain how I even got custom Java class files in the first place. As I said before, I don't know that much about Java programming; so, what I demonstrate below is simply what I was able to get working after a bunch of Google searching and much trial and error.

For these few demos, I created a folder called "src" to store my raw Java code:

- java
- java / src /
- java / src / com /
- java / src / com / bennadel /
- java / src / com / bennadel / cf10 /

All of my demo ColdFusion 10 Java files are in this nested "cf10" package. I then created a "lib" folder. This is the folder into which my compiled Java CLASS files will be saved:

- java
- java / lib /

Then, I created my first Java code file (in the cf10 package directory):

Friend.java

// Define the namespace at which our Java Class will be found.
package com.bennadel.cf10;

public class Friend {

    // Declare properties.
    private String name = "";

    // I return an initialized component.
    public Friend( String name ){
        this.name = name;
    }

    // I greet the given person.
    public String greet( String otherName ){
        return(
            "Hello " + otherName + ", I'm " +
            this.name + ", it's super nice to meet you!"
        );
    }

}

As you can see, this Java class is fairly simple. It starts off by declaring the class package. I don't fully understand how this name-spacing works; but, when the code gets compiled, the Java byte code is put into a directory structure that mimics the package hierarchy. Now that I have my Friend.java file, I need to compile it. Since I'm running on Mac OSX, I apparently already have the Java Compiler installed. So, I went into the root of my "src" directory and ran the "javac" compiler from the Terminal (command line):

ben-2:src ben$ pwd
/Sites/bennadel.com/testing/coldfusion10/java/src
ben-2:src ben$ javac -classpath . -d ../lib/ com/bennadel/cf10/*

I don't understand what the "-classpath ." option does; but, it was required to get all of this stuff to work. The "-d" option tells the compiler where to store the compiled Java class files. In this case, I want to store the Java class files in the "lib" directory (remember, I'm currently in the "src" directory). The final argument simply tells the compiler to compile all of the files in my cf10 package. Once the compiler has run, I now have a Friend.class file in the following location:

- java / lib / com / bennadel / cf10 / Friend.class

Awesome! Now that we have our Java class files ready to be loaded into our ColdFusion application, let's finally take a look at our Application.cfc file. This file is in the "java" directory, alongside my "lib" directory.

Application.cfc - Our ColdFusion Application Framework

component {

    // Define the application.
    this.name = hash( getCurrentTemplatePath() );
    this.applicationTimeout = createTimeSpan( 0, 0, 0, 3 );

    // Define the per-application Java integration settings.
    this.javaSettings = {
        loadPaths = [ "./lib" ],
        loadColdFusionClassPath = false,
        reloadOnChange = true,
        watchInterval = 2
    };

}
Here, we can define what Java classes and JAR files to load, how often to scan the given directories for changes, and whether or not to load those changes into the ColdFusion class loader. Notice that I am simply defining the "lib" folder - I don't need to define the nested CLASS files. I had some trouble with these settings. They seemed to be really responsive for a while; then, out of nowhere, I stated to get all kinds of caching issues. Even with a watchInterval time of 2 seconds, my updated CLASS files were not taking effect. When I lowered the applicationTimeout to be 3 seconds, this seemed to bust the CLASS caching. Naturally, you wouldn't have such a small Application timeout in production; but, for this research and development, there was no harm in keeping the timeout low to constantly be reloading the Java class files. Once the ColdFusion application has loaded the custom Java class files, using them in ColdFusion is as easy as using any other Java class: - <cfscript> - // Create our Friend Java class. Notice that it is found at - // the package we defined in our CLASS file. - friend = createObject( "java", "com.bennadel.cf10.Friend" ).init( - "Tricia" - ); - // Get the greeting from Tricia. - greeting = friend.greet( "Ben" ); - // Output String value returned from the Java context. - writeOutput( "Greeting: " & greeting ); - </cfscript> Here's I'm instantiating the Java class, Friend, and invoking the greet() method. Notice that my class path is the same as the package path I defined within the .java file. When we run the above code, we get the following output: Greeting: Hello Ben, I'm Tricia, it's super nice to meet you! Groove-sauce! It worked perfectly! Ok, now that we can easily load a custom Java class into our ColdFusion application, let's try to take it up a level - let's try to call a ColdFusion closure from within a Java context. Unfortunately, this isn't very easy. 
ColdFusion closures aren't Components in the way that we typically think about them. They are Java objects under the hood; but, there's no documentation on how to invoke a closure (or a ColdFusion user defined function for that matter) from a Java context. Luckily, ColdFusion does provide a way to proxy ColdFusion components in a Java context. This is my first time looking into this concept, so I'm absolutely sure there is a more straightforward way to bridge this communication gap. But for this demo, the best approach that I could come up with (that worked) was to create a ColdFusion component that acts as an invocation proxy to our ColdFusion closure. Since we have a way to invoke ColdFusion components from a Java context, we'll invoke a component method which handles the closure invocation for us. To do this, we have to create our ColdFusion proxy and a Java Interface for that ColdFusion proxy. The Interface allows ColdFusion to generate a static Java class that mimics the dynamic architecture of our ColdFusion components. The ColdFusion component is rather simple. All it does is provide one method, callClosure(), which takes Java arguments and invokes the given closure: ClosureProxy.cfc - Our ColdFusion Component For Proxied Closure Invocation - component - output="false" - hint="I proxy the invocation of a ColdFusion closure so that the closure can be invoked from Java." - { - // I get called from Java to invoke the stored closure. Since the - // ) - ); - } - } As you can see, this just takes an array of arguments, separates the Closure from the Closure parameters, and invokes the closure. In order to create a Java proxy for this ColdFusion component, we have to use the ColdFusion function: createDynamicProxy( CFC, Interfaces ) Here, we provide a ColdFusion component instance (or path) and an array of Java Interfaces. 
In order to do this, I created the following Java file: ClosureProxy.java - Our Java Interface For Our ColdFusion Component - // Define the namespace at which our Java Class will be found. - package com.bennadel.cf10; - // Here, we need to define an interface to our ColdFusion component - // so that we can create a dynamic proxy. Since Java does not allow - // for the level of dynamic-structure that ColdFusion allows, we have - // to actually define interfaces to our ColdFusion components so that - // we can build a static Java object around it. - public interface ClosureProxy { - // Since we don't know what type of closure is going to stored - // in our proxy (or how it will be called), we'll have to accept a - // variable-number of arguments in the form of an Array. - public java.lang.Object callClosure( Object...args ); - } As you can see, all this does is identify the callClosure() method as one that takes a variable number of objects. And, if you look back to our ClosureProxy.cfc, you can see that we take accept these variable number of objects as a Java array. Now, imagine for a moment that we have created a Java class that extends that ArrayList (what ColdFusion Arrays use under the hood). And, to this sub-class, we are adding the method, .each(). This each() method accepts a ColdFusion closure that will be invoked on each element within the internal collection. Before we look at the code behind the ArrayList sub-class, however, let's look at the ColdFusion code that makes use of it: - <cfscript> - // I am a utility function that returns a dynamic Java proxy for - // the ClosureProxy.cfc ColdFusion component. This returns a Java - // object that adheres to the interface "ClosureProxy". 
- function javaClosureProxy(){ - return( - createDynamicProxy( - new ClosureProxy(), - [ "com.bennadel.cf10.ClosureProxy" ] - ) - ); - } - // ------------------------------------------------------ // - // ------------------------------------------------------ // - // Create our custom Array class. When we initialize it, we have - // to pass in our Closure proxy which will act as a tunnel when - // we invoke our ColdFusion closure from Java. - friends = createObject( "java", "com.bennadel.cf10.CFArray" ).init( - javaClosureProxy() - ); - // Initialize our custom ColdFUsion array. - arrayAppend( - friends, - [ "Tricia", "Joanna", "Sarah", "Kit" ], - true - ); - // Iterate over each item in the array. Notice that friends.each() - // is a JAVA method that we passing a COLDFUSION CLOSURE into. - friends.each( - function( friend, index ){ - writeOutput( "#index#) Hey #friend#, what it be like?" ); - writeOutput( "<br />" ); - } - ); - </cfscript> Here, we are creating an instance of the custom Java class, com.bennadel.cf10.CFArray. This is our ArrayList sub-class. When we instantiate it, we have to initialize it with an instance of our Proxy object - this is junky monkey, but we have to use this proxy to invoke the ColdFusion closure. Once we do this, we can then populate the CFArray instance and call .each(), passing in our ColdFusion closure. And, when we run the above code, we get the following page output: 1) Hey Tricia, what it be like? 2) Hey Joanna, what it be like? 3) Hey Sarah, what it be like? 4) Hey Kit, what it be like? As you can see, the ColdFusion closure was passed into and successfully executed within a Java context; both the "friend" and the "index" parameters were properly passed to the closure at invocation time. 
Now that we have a high-level understanding of how this communications tunnel works, let's look at the CFArray Java class to see how the ClosureProxy comes into play: CFArray.java - Our ArrayList Sub-Class - // Define the namespace at which our Java Class will be found. - package com.bennadel.cf10; - // Import classes for short-hand references. - import com.bennadel.cf10.Closure { - // This will hold our ColdFusion proxy for Closure invocation. - private ClosureProxy closureProxy; - // I return an initialized component. - public CFArray( ClosureProxy closureProxy ){ - // Property initialize the core ArrayList. - super(); - // Store the ColdFusion closure proxy. - this.closureProxy = closureProxy; - } - // I iterate over the elements in the array and invoke the given - // opertor on each element. - public CFArray each( Object operator ){ - //. - this.closureProxy.callClosure( - operator, - iterator.next(), - ++iteration - ); - } - // Return this object reference for method chaining. - return( this ); - } - } As you can see, the each() method loops over the internal ArrayList collection; and, for each element, it uses the ClosureProxy instance to encapsulate the ColdFusion closure invocation: - this.closureProxy.callClosure( operator, iterator.next(), ... ); In this way, we can invoke a ColdFusion closure from a Java context without having to know how to invoke the closure at a technical level. This is pretty cool; but, at the same time, it's wicked junky, right? We have to instantiate our Java class with a reference to our ClosureProxy instance. Clearly, this is not a good solution; but it's the only thing I could figure out so far. I'm gonna keep tinkering with this to see if I can come up with something that is a bit more seamless. Looking For A New Job? - Senior Developer at Quality Bicycle Products - ColdFusion Developer at WRIS Web Services - Coldfusion Developer at Cavulus - Web Developer at Townsend Communications, Inc. 
- Support Programming Manager at InterCoastal Net Designs Reader Comments @All, I found a slightly cleaner way of doing this: In that approach, I am storing the ColdFusion proxy as a static property of the package; this way, all classes in the package can access it. I have tried your method, it works, thanks for sharing.
http://www.bennadel.com/blog/2342-coldfusion-10---invoking-coldfusion-closures-from-within-a-java-context.htm
So this works for me - It starts an Arduino & Servo up. You'll have to change the port from "/dev/ttyUSB0" to something like COM4 or COM2 for your Windows XP machine. And then provide the correct pin number attached to the servo, and load MRLComm.ino into the Arduino so MRL can control it.

Okay, got it! Yes you were right, the copy paste did put an extra space on the front of each sentence. So this works. I looked at the configuration on the GUI to see what has been created; I still can't figure out how the routing works. But I guess let's do it step by step.

Did the servo move ? If so, that is great ! we are moving along - What do you want to do next? Get all the fingers involved? Build some hand gestures ? Do you have the microphone ? What next ! If not, that's ok - if you flip over to the arduino tab -> oscope -> do you see a trace ?

Thanks for creating this post, didn't know how to start it. And you're right, it's better if others can get a use of it. I like the title! Okay I just sent you a log file. I did change the port to COM7, modified my baud rate according to yours, and used pin 5 for my servo, but something has a "mismatched input" right from the start.

And welcome to Python. One of the first things to know about the Python language is that it is VERY STRICT about indentation. It reminds me of my 3rd grade teacher, and beginning to write paragraphs (it's sooo strict). You have to make sure everything is on the same alignment, as it looks like in the segment I've pasted & the picture I've posted. I have had problems when copying and pasting Python from a web post in the past - where an extra space or tab was added. If you don't find anything like that, and it doesn't look mis-aligned - then I will attach the program as a file instead of copy & paste from the post page. Good Luck.

P.S. Did you load MRLComm.ino on the arduino ?
This is an Arduino script and it allows MRL to control the Arduino board. I'd recommend loading it through the Arduino IDE, which I believe you have experience with. The script can be found here - You want to upload it to the Arduino.

Yay it moved ! Yes it is not smooth movement. We can make smooth quick movements. In that case we don't want to increment, but just tell the servo where to go in one statement, like this. It can be quicker to get things done with Python, instead of the GUI - but the GUI can be helpful too. One does not preclude the other. You won't have to "learn" Python. I barely know Python. But, if you see "Cause and Effect", and you saw the cause was a new line of Python - then you can "tweak" that Python to get a different effect. It can be fun and easy to learn this way.

Since we have the copy & paste down I will make another script. This one will be with COM7 and the appropriate pins for all the fingers - then we will have a hand & hand gesture test. Maybe we should think of appropriate hand functions - I'll take a crack at a few of these - I think the code you sent me has all the pin numbers of the servos. After a few hand movements are put into the Python script, when you get the microphone, you can activate it by words. I had a thought where you can speak, the speech goes to text, the text is broken down into letters, and the letters are sent to the hand sign alphabet - then you'd have a speech to hand-sign robot .. that'd be pretty cool.

I should have looked at your code before coming up with ideas :D Looks like you've implemented a lot already. So, to go to the next level, I just ported some of your code. All the absolute hand position movements. This is just to test and see "cause and effect". Hopefully, the "effect" will be several of your hand gestures repeated at 0.5 second intervals.
When I run the script everything gets initialised - the Arduino and the servos are all there, and the script runs to its end correctly. But nothing much happens. During create and start of the Arduino, there is a short buzzing from the servos. I looked in the Arduino tab and COM7 (I had modified the script to 7, not 6) is not marked; in the pins tab none of the pins are on. Then looking at the servo tabs, none of them are attached nor configured to the Arduino or to any pin. So that is strange. I could rerun the script after configuring everything by hand, but of course the script gives me an error since everything is already created. How can I rerun the script without:

Ahhhh! Panic ... hit the Help -> About -> "GroG, it No Worky" button - that will send me a log. Yeah, the fact that the servos don't show they are connected to the Arduino on the appropriate pins is something I'll take care of shortly... But they "should" be attached. The script is attaching them; it's just that the GUI doesn't know about the script doing it... Anyway, in answer to your question, MRL can control as many Arduinos as you can think of unique names for ;) E.g. arduino01, arduino02, ..., arduinoN+1. It's not limited - just limited by memory & CPU, but I suspect even with your Windows XP box you could control several thousand, if you had that many USB ports. Also, if your computer runs out of USB ports, MRL can attach and control other MRL instances running on other computers (but let's not go there ... yet ;)

Fantastic, we got it working with MRL! It is amazing when it becomes alive all at once. Don't know what you think, but maybe you should replace the script uploaded on this thread with the working one? If others stumble on this, they can get something that works directly. Is there a way to tell a gesture to wait until the one before is complete before starting, instead of just adding delay?
The "shoulder" takes a delay of about (6000) in Arduino to accomplish 0 to 180, when "thumb" takes only (800). (These delays are just an example.)

I removed the inline script and added a reference on your post - where the script can be found in MRL. It's better this way, because the script will always be the most recent. We can add appropriate timed delays in milliseconds (that is what Arduino uses), but the robot can't tell at the moment when a move has completed, because there is no feedback sensor, like a limit switch, a potentiometer, or an encoder. If you want, I can add 6 seconds after each gesture, or you can tell me specific timings if you want.

Just thought of an idea. If you get a Kinect (you have to get one with the separate power supply to work with a computer's USB - or maybe they changed this) - then you can do gestures and the robot will mimic you. Like the Simon Says game. Monkey see, monkey do... or I guess monkey do, robot do too ;)

Well, yes, but I don't want to have a monkey in my workshop, doing all that I do! The monkey looks for a brain, remember. Or could I show him and he remembers the movement to play it back later? Anyway, I modified the gesture Python script a little to have something smoother and with a little more delay to accomplish full gestures. I will send it by mail so you can update it, if you feel like it. Finally got a camera today with a high frame rate and a built-in microphone using a USB port, so what should I do with that to give him ORDERS!!!! I also added an extra power supply because with all the servos assembled it couldn't reach enough speed anymore. OpenCV still doesn't work of course, but I had hoped maybe it would.

Hope you're having fun, telling your robot what to do .. I think it's great that you put your designs up on Thingiverse!
If a chimp like myself wanted to print your designs, would a 20cm x 20cm Printrbot be enough? Do you have a list of materials - e.g. size, type & number of servos? I have a wheelchair which I've been meaning to connect to a MyRobotLab brain; on top of that base I think it would be great to build your torso - the school kids would be really inspired!

Thanks for the new videos! I've updated the script. Here is a list of changes. "all open | hand close | pinch mode | open pinch | hand open | hand rest | hand open two" - which corresponds to the methods you originally created in the Arduino sketch. I usually recommend transparency and a one-to-one correspondence to the methods. If you decide to make a handsup method, I would recommend you add a "hands up" grammar to activate it. It just makes the program more clear. ear.stopListening() - when the gesture is done the ear is turned back on with ear.startListening(); this should help reduce the servo noise feedback interfering with a false command. Goodbye inMoov.

Wow, lots of things to do! Well, with offline communication going for a while - I'll post a list of things I need to do in order to make an InMoov Service & lots of new features.... I looked at the latest log file: I noticed that you are running it from "C:/Program Files/Myrobotlab/intermediate.884.20121021.1606". I have a tingly feeling that it doesn't like the space in "Program Files". Try moving it to C:\myrobotlab\intermediate.884.20121021.1606 - and try OpenCV again.

Okay, I moved it as you said to C:\myrobotlab\intermediate.884.20121021.1606 but unfortunately it still gives the same error. I just sent you the log file. I am sure it's a lot of work, but once again, there is no rush, it has to be fun, we are not at work! Really, with the new voice commands it's doing so much better.
I have just ordered a special power supply, because I came to realise that if I don't have enough power, the servos tend to act strangely. 24 servos is a lot of amps. Until now I worked with small outlet supplies, but if I want to power up everything at once, I need a strong source. I got an efuel 6V 20A. I think it should do the trick.

I have the InMoov Service checked in, which is to say it has: What is missing - you will have to download the bleeding edge to get this ("not just hit the bleeding edge button" - there is a difference). What you should see - after downloading the latest intermediate, when you start, you should see the InMoov Service - right click on it and install - it will download all the dependencies. Restart. In the Jython panel type the following - and see what happens (adjust for the correct values of the left & right boards). Let me know what happens.

Yes, I tried the superb InMoov service, and it loads all fine, but I have a few servos that can't work because of a lack of power supply. I guess a service can only work if everything is connected, and I can't uncheck some servos before launching the service. (Or can I?) So I won't be able to try until I get my new power supply. But in the meantime I finally got OpenCV working - I went through all kinds of forums and posts tonight about my problem, and solved it. I have Windows XP, so I installed avutil-50.dll in system32, and installed Microsoft C++ 2010 (I had the 2008 version). And I was astonished when finally it worked! What a gamble. Of course I went directly to trying the gadgets I had read about, like face tracking and finding a ball with InRange filters. And it works really great! Amazing when the head is following a purple box. So this is an open door to much more with MRL; I can't wait to make InMoov recognize some colored cubes and grab them!

Yay! - and Grrr ... I'm sure it was the MSVCRT.dll and its brothers ...
I don't understand why Microsoft doesn't distribute this immediately. Maybe I should put it in the repo.. Anyway, I'm very glad you figured it out. For removing a service, you should be able to right click in the "current services" and choose release. I've been meaning to do this right-click menu item on the tabs too, but the menu is not consistent (yet).

I worked on the InMoov service some more. It should: The following script should load everything including OpenCV - and initialize everything properly (you'll have to change the values for your boards). I tested this as best I could with a single Arduino and a single servo.

I'm going to try this freshhhh release as soon as it's downloaded, and let you know how it went. I really wonder how you can test all that with only one servo... It's a mystery and it will remain. A group of elves make the

Ah, ah, they all could be called hairy something.

Here are the values we should not override at the present time. Bicep should never go more than 0 to 90, the omoplate 10 to 80. Rotate is delicate, because if the arm is over the head it can do a full rotation 0 to 180, but if the arm is at the level of the stomach, going to 0 is like having the hand inside its stomach. Let's say for now 40 to 180.

This is what I currently have on initialization of the InMoov service:

How to add an OpenCV filter? Go to the OpenCV Service page here -

Thanks, PyramidDown was the one I needed.

Hi Hairygael, I've been a bit distracted with some serial communication issues on some of the platforms (Linux) and have not put a lot of updates in the InMoov service. As soon as I am done with the issues, I should be able to focus on InMoov. Hope things are going well with you. My future plans for the InMoov service are: 1.
Full Initialization - once you start this service, everything needed is ready. 2. A System Test - test all servos sequentially - I wonder if it will look like he is doing a strange dance move. This is helpful on startup to see if everything is functioning correctly, before you start playing with him.. If you have more ideas let me know. Regards, GroG.

This all sounds great! I was also away for a few days, so I haven't done much. I received the power supply, and it seems to be great; I rebuilt a PCB with connectors to make sure it can support enough amps in the wires. Did a test tonight and it all works, only rothead on pin 13 doesn't work. I switched boards and the problem remains, but only when I run MRL. If I use the Arduino.exe, it works. Could it be in the initialisation? Your idea of a system test is very good; I suggest making only a short and smooth move per servo, because for the new builders of InMoov it will be less stressful. (Wires might get stuck or stretched, the arms may not have enough room to move.) I'd rather check smoothly, YOU NEVER KNOW. Ideally it would run in this order: thumb, index, majeure, ringfinger, pinky, wrist, bicep, rotate, shoulder, omoplate, neck, rothead, jaw, eyesup, eyesdown. We could have both arms run simultaneously to make it less long. "I was also thinking about how a moveArm method with 3 or 4 parameters would be helpful, like the moveHand method. The 3 parameters would be wrist - elbow - shoulder, with an optional omoplate." This also is a very good idea! Ideally it would be 5 parameters, because to accomplish grabbing movements or lifting the arm up, I need to use all five servos: wrist, bicep, rotate, shoulder, omoplate. Then will come the head... I just can't wait to try all of that! "If you have more ideas let me know." You know I mentioned the speed of the servos - could we have something like what you have made, with a range of speed between (1) and (5), 5 being the fastest?
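The sequential servo check proposed above can be sketched as a plain-Python plan generator. The servo order is taken straight from the post; the rest and wiggle angles, and the function itself, are hypothetical - they only show the "one short, smooth move per servo" idea, and a real script would send each pair to the matching MRL servo.

```python
# Order suggested in the thread for a sequential system test.
TEST_ORDER = [
    "thumb", "index", "majeure", "ringfinger", "pinky", "wrist",
    "bicep", "rotate", "shoulder", "omoplate", "neck", "rothead",
    "jaw", "eyesup", "eyesdown",
]

def system_test_plan(rest=90, wiggle=10):
    """Yield (servo, angle) commands: a small move out, back, then rest."""
    for servo in TEST_ORDER:
        yield (servo, rest + wiggle)
        yield (servo, rest - wiggle)
        yield (servo, rest)

plan = list(system_test_plan())
print(len(plan))  # 45 commands: 3 gentle moves per servo, 15 servos
print(plan[0])    # ('thumb', 100)
```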
Gee, it's like I'm making a wish list to Santa Claus!

I am done with serial problems for a while. Was not completely successful, but it's time to move on. Working on InMoov now. The first thing I will do is make servo speed adjustable. The notation you suggested would be rather challenging: inMoov.moveHand("left", 90{2}, 90{5}, 90{5}, 90{5}, 90{5}). I would propose the following - a setHandSpeed method with the following conventions: it would use the same position signature as moveHand, however a fractional amount (value between 0 <-> 1) would be supplied for proportional speed. For example: this would set the thumb speed of the left hand at 80% - all other fingers would move at full speed. The servo would remember and move at its "set" speed until changed. Will this work for you?

I understand the challenge you are talking about; this solution is fine with me. How will we proceed? Start the service, it will initialize everything, test the servos, and then in Jython I will write:

inMoov.setHandSpeed("left", 0.8, 1, 1, 1, 1)
inMoov.moveHand("left", 90, 90, 90, 90, 90)

Then select another part to move, and so on? Do you think I can test each move to make sure I'm happy with the position and speed before going on to the next part? I tried to use the sliders to define some standard positions and it is a good solution; it helps to foresee what can happen, like two hands colliding, or the arm hitting the head... It's always scary when you load the script and it starts to go; I'm always ready to unplug the two Arduinos. Last night I was using the sliders for the head movements to understand the vision tracking, and all of a sudden the bicep started to act, but totally the wrong way - of course it broke a part. (All servos are powered even if I use only two of them.) Luckily the printer can reprint the part easily.

Correct. We shall proceed this way.
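To make the proposed convention concrete, here is a toy model in plain Python of a servo that "remembers" a fractional speed until it is changed, as described above. The class name, the assumed full-speed rate, and the returned travel-time estimate are all inventions of this sketch - MRL's actual Servo.setSpeed implementation is not shown here.

```python
class SlowServo:
    """Toy model of the setHandSpeed convention: speed is a fraction
    in (0, 1] of full speed, remembered between moves."""

    DEG_PER_SEC = 600.0  # assumed full-speed rate; real servos vary

    def __init__(self, angle=0):
        self.angle = angle
        self.speed = 1.0  # full speed by default

    def set_speed(self, fraction):
        if not 0.0 < fraction <= 1.0:
            raise ValueError("speed fraction must be in (0, 1]")
        self.speed = fraction  # remembered for all subsequent moves

    def move_to(self, target):
        """Move and return the estimated travel time in seconds."""
        travel = abs(target - self.angle)
        self.angle = target
        return travel / (self.DEG_PER_SEC * self.speed)

thumb = SlowServo()
full = thumb.move_to(90)   # full speed
thumb.set_speed(0.8)       # like setHandSpeed("left", 0.8, 1, 1, 1, 1)
slowed = thumb.move_to(0)  # 80% speed, so a longer travel time
print(full, slowed)
```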
Since the servos will now have "memory", you will only need to set the hand speed if you intend to change it. All subsequent moves will be at the last set speed. "Do you think I can test each move to make sure I'm happy with the position and speed before going on to the next part?" Yes - you need to clear the Jython screen and put in only the code you want to test; once you like it, save it in a text editor somewhere else. That would allow you to try the moves incrementally. (It would be nice to highlight the script and execute only that part - but this is a future enhancement.) "I tried to use the sliders to define some standard positions and it is a good solution; it helps to foresee what can happen, like two hands colliding, or the arm hitting the head..." Excellent. Avoid hitting the head ;) "It's always scary when you load the script and it starts to go; I'm always ready to unplug the two Arduinos." Hopefully the limits will make you feel safer; additionally you can "detach" and re-"attach" the servos through MRL. When controlling continuous-rotation servos, I noticed it was the only way to completely stop them. I'm working on the speed stuff now; I'll let you know when I have an update you can try. Before setHandSpeed.... there have to be smaller blocks.... I've just implemented Servo.setSpeed. I've tested it with 1 servo - and it "seems" to be working correctly. It should be smoother than moving the servo through Jython. If you could test with 2 servos, that would be great.. You'll have to update MRL of course. If this makes no sense, let me know... Check this for some detail:

Simple tuto for the beginners of MRL: This is a simple tuto to start up MRL and InMoov in good conditions. I hope I didn't forget important things.

Hi Hairygael, have not heard from you for a while, hope things are well.
This post describes the current InMoov service - it will be the page people go to when they right click -> press info on InMoov.

Hi GroG, I had too much work lately, and barely had any time during my nights to check on updates. I can see that you have done a lot of work here. I'm glad you finally got through your issues with serial problems. I need to link your "info" tuto on my blog for the new arrivers. Okay, I hooked every servo to a self-made connector board, which should avoid bad electrical contacts during movements of parts. I still need to extend some servo cables, because during some extreme or unexpected movements some of the wires are still too short. I just tried, with the last "bleeding edge", the InMoov service, and it loads everything as expected, but none of the servos are "attached" since the Arduino boards aren't configured before initialization. That's normal of course. In the service, if I configure the boards and attach each servo one by one, it works. But doing: So I tried inMoov = createAndStart("inMoov", "InMoov") directly in Jython but it gives me an error. I have sent you the log file. Then I tried inMoov.initializeRight("atmega328p","COM8") and inMoov.initializeLeft("atmega1280","COM7"); I also get the same sort of errors. I am sure it must be a simple thing, but I can't seem to figure out what it is. I have sent you the log files. I will try other workarounds this afternoon to see if I can trigger InMoov with speeds.

Welcome back! I saw the 2 error logs. There are 2 ways to start a Service. Method 1: Use the GUI. Right click on the desired Service. If it needs installing, install it first. If the Service is installed, then you may right click and select start. Next you will be asked for the name of the service. The name could be anything you want, but there are 2 rules. First, the name must be unique - no other service can have the same name.
Second, any Python script must refer to this service with the same name. I usually name the services I start with a small letter at the beginning. I just do this from force of habit. So an instance of the InMoov service I usually name inMoov. But that's just me :) Method 2: Use Python. You were right to try the example on the Service page - but the example was wrong :P (sorry). I have updated it. It was: It needs to be:

Here are the initial positions at start up that would suit me the best for both sides: thumb 0, index 0, majeure 0, ringfinger 0, pinky 0, wrist 90, bicep 0, rotate 90, shoulder 30, omoplate 10, neck 90, rothead 90.

Okay, both sides can initiate correctly. The initial positions at start very much avoid the stress of "what is going to happen when plugged?". It really avoids big arm movements and sets InMoov in a rest position. Now what seems to be happening and needs to be looked at is: - After the init, I get three left fingers detached from the Arduino - thumb, index and majeure - and can't seem to be able to reattach them unless the GUI is restarted. - rothead is not attached either. Trying to attach it has no effect. - Set speed works nicely for the fingers but still seems a bit fast for omoplate. I haven't tested on shoulder, rotate and bicep. This might be due to the fact that the potentiometer is set outside the servo, and the servo needs to make more rotations to get to its position.

Initialization is different - you only initialize the parts which you want to control ... the other parts should lie dormant... Also, "left" and "right" are keys, and you may initialize other systems, e.g. inMoov.initialize("left1", "uno", "COM9") might be appropriate if you had 2 left arms.. this would be relevant if you wanted to make a Shiva model ;) Servo problems... I was going to suggest a few things.
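The rest position listed above can be kept as a small Python mapping, so a startup script could apply it to both sides. The angle values are the ones from the post; the dictionary and the helper function are hypothetical conveniences for illustration, not part of the InMoov service API.

```python
# Rest position requested in the thread (same values for both sides).
REST_POSITION = {
    "thumb": 0, "index": 0, "majeure": 0, "ringfinger": 0, "pinky": 0,
    "wrist": 90, "bicep": 0, "rotate": 90, "shoulder": 30,
    "omoplate": 10, "neck": 90, "rothead": 90,
}

def rest_commands(side):
    """Yield hypothetical (qualified_servo_name, angle) pairs for one side."""
    for servo, angle in REST_POSITION.items():
        yield ("%s.%s" % (side, servo), angle)

for name, angle in rest_commands("left"):
    print(name, angle)  # left.thumb 0, ..., left.wrist 90, ...
```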
I think you had the right idea in disconnecting and re-attaching the servos to try to isolate the problem. An individual servo may be the root cause of the issues. From my experience, I have found smaller servos to be more noisy. Noisy power supplies can cause havoc with servos too. This can cause jitter of the servos, and the jittering can cause more noise (a vicious cycle). It might be the sketch code - but I'm not sure how I would rectify it, if that were the case. I remember reading an article on letsmakerobots with Oddbot & Fritsl ... Fritsl & Oddbot discovered that "shifting" the code around made a difference... From a software design perspective this is .. "horrible" :P ... but I'd be willing to experiment with anything which you think will make a difference. I've used the oscope on the Arduino at the same time as running a servo, and you can see some "noise" whenever the servo moves, just on floating analog lines. If it were me, I might try moving the electronics onto a separate power supply if possible.. you have to connect the grounds together, but going through a separate power supply there are often filtering devices (including caps) which provide better isolation... Let me know if I might help..

Hello, I have updated the InMoov service page with the beginnings of a Head Tracking section. I have a pan/tilt servo set up, and have tested the InMoov using the 2 servos and a Bare Bones Board Arduino clone (compatible with a Duemilanove ATmega328). The servos do not jitter, and I can control them with the sliders. I did manage to move them fast enough with the sliders to black out my board. This is when the servos consume too much power and it resets the Arduino. When that happened, the servos were unresponsive and I had to detach and re-attach for control. The USB power source is usually clean but can offer very little current. Overall the servos were rock solid.
I DID get strange behavior when I used the analog lines of the Arduino for an oscope trace. This SHOULD NOT happen. Typically, I use the oscope trace as a system check. This verifies MRL can speak to the Arduino and the Arduino can talk back to MRL. The servos I had went crazy when I ran a trace. Previously, tracing was done for 1/2 second in the SystemCheck ... I have now (version 938) taken that out. You should be able to update with the Bleeding Edge button, and verify that the version (Help -> About -> Version) is 938 or above. I'd recommend experimenting with the head or some other part which has a low servo count, just to be safe. Thought this might help... Regards, GroG.

Great, I have used version 999 of MRL and have to say that the automatic repositioning of the last set configuration of the GUI is very handy. Using the sliders on the sides of the screen makes the gesture job easier and faster. At the moment I'm trying different sound options for the microphone, since I get trouble with the noise created by the servos when getting InMoov to understand the commands. In your last post you mention you've updated the head tracking, so I went straight for that, and I do get start tracking working by voice command, but after setting a point manually, the servos of the head are not reactive to moving objects. Though I checked that the servos had been correctly initialized, and they were responding to the sliders. With the batteries set as power source, there are no more erratic issues with the servos, so going further is now possible, even though I have a few of those cheap MG995s that burned lately. I will order some HK15298 HobbyKing servos that should be a good exchange; they seem to have good reviews.

You know you can make the robot stop listening when it's moving? inMoov.pauseListening() .... do movements - wait time .....
inMoov.resumeListening() It won't get things wrong, but it won't try to interpret noises or speech either... Tracking - correct - it's not attached yet.. I'm doing some clean-up.. still have some work to do.. I'm trying to put in some calibration routines. Pan/tilt mechanisms are all very different.. Even if they are done with 2 servos - the axes & locations of how they are mounted together in relation to the camera cause very different behavior. My idea is to have it self-calibrate, where it uses feedback to adjust variables on how it will track objects..

Okay GroG, this is how InMoov can see his hands with this gesture capture. I have blue silicone strips on the fingers; hopefully it is enough for him to see his fingers. The strips aren't on the sides of the fingers though...

inMoov = Runtime.createAndStart(
inMoov.captureGesture("I see my hands")
def I see my hands():
    inMoov.moveHead(0,80)
    inMoov.moveArm("left",72,84,
    inMoov.moveArm("right",70,62,
    inMoov.moveHand("left",50,26,
    inMoov.moveHand("right",22,5,

Thanks for the picture. I'm currently having some issues with the LKOptical track filter - it won't track more than 1 point. But I think I'll have it figured out soon, then I'll be able to fully develop the Tracking service.

(No subject)

Second view of InMoov's play set with a toy removed.

First oscope with a finger sensor of foam.
http://myrobotlab.org/content/inmoov-robot-searches-brain
sourcecode jcwren <code> #!/usr/local/bin/perl -w # # Invoke with './xstatswhore.pl [-u username] [-p password] [-b histogram_binsize | -1]' # # Alternatively, username and/or password can be embedded into the script, if you don't want # command line arguments. Use a -b option of -1 to suppress the histogram. # # Displays a users total writeups, total reputation, along with min, max, and average, and # a histogram. Only works for your own account, since reps are 'proprietary'. # # Thanks to larryl for the histogram patch. Be sure to give him ++ credit if you like # the histogram portion (node id 65199) # # Requires: # XML::Twig # LWP::Simple # # Copyright 2000,2001(c) J.C.Wren jcwren@jcwren.com # No rights reserved, use as you see fit. I'd like to know about it, though, just for kicks. # # This module has more code than is actually necessary for the functionality provided. # I was originally writing a module for another function, and this came out of it. The # @articles array that is returned from get_article_list() contains an array reference # that has the articles name, prefixed by the string 'node_id=xxx:', where 'xxx' is the # number the article title refers to, the reputation of the article, and the date the # article was written. I'm sure you can imagine some uses for this... # # 2000/08/03 - 1.00.00 - Initial release # 2001/03/17 - 1.10.00 - Changed XML::TableExtract to using XML::Twig, added larryl's # histogram code # 2001/03/18 - 1.10.01 - Applied mirods change from node 65444 # use strict; use Carp; use LWP::Simple; use Getopt::Std; use XML::Twig; use POSIX qw(ceil floor); my $def_username = ""; # Set this to your user name if you don't want to use the -u option my $def_password = ""; # Set this to your pass word if you don't want to use the -p option my $def_binsize = 5; # Bin size for reputations. Set to -1 to disable histograms. 
my $pmsite = ""; { my %args = (); getopts ('u:p:b:', \%args); my $username = $args{u} || $def_username; my $password = $args{p} || $def_password; my $binsize = $args{b} || $def_binsize; die "No password and/or username. Program terminated.\n" if (!$username || !$password); my $hrephash = get_article_list ($username, $password) or croak "Get on $pmsite failed."; my $hsummary = summarize ($username, $hrephash) or croak "You have no articles, perhaps?\n";; show_reps ($hsummary); show_histogram ($binsize, $hrephash, $hsummary) unless $binsize < 0; } # # Display the user that's the whole point of the program # sub show_reps { @_ == 1 or croak "Incorrect number of parameters"; my $hsummary = shift; print "\n"; printf (" User: %s\n", $hsummary->{username}); printf (" Total articles: %d\n", $hsummary->{articles}); printf (" Total reputation: %d\n", $hsummary->{reputation}); printf (" Min reputation: %d\n", $hsummary->{repmin}); printf (" Max reputation: %d\n", $hsummary->{repmax}); printf ("Average reputation: %3.2f\n", $hsummary->{average}); print "\n"; } # # We subtract one from the total hash count because the site XML returns # the users homenode as a article (Dog knows why... Ask Vroom) # sub summarize { @_ == 2 or croak "Incorrect number of parameters"; my ($username, $hrephash) = @_; my $total = 0; my $repmax = 0; my $repmin = 999999999; my %hsummary = (); ((scalar keys %$hrephash) - 1) >= 0 or return undef; for (keys %$hrephash) { $total += $hrephash->{$_}->{rep}; $repmax = max ($repmax, $hrephash->{$_}->{rep}); $repmin = min ($repmin, $hrephash->{$_}->{rep}); } $hsummary {articles} = (scalar keys %$hrephash) - 1; $hsummary {repmax} = $repmax; $hsummary {repmin} = $repmin; $hsummary {reputation} = $total; $hsummary {average} = $total / ((scalar keys %$hrephash) - 1); $hsummary {username} = $username; return (\%hsummary); } # # Gets the XML from the site. Much more reliable that the old 'get each # page of articles' method. 
And for those verbose people (tilly...), it # should result in a 26x reduction of server hits. # sub get_article_list { @_ == 2 or croak "Incorrect number of parameters"; my ($username, $password) = @_; my %nodehash = (); $LWP::Simple::FULL_LWP = 1; my $page = get ("$pmsite?user=$username&passwd=$password&op=login&node=User+nodes+info+xml+generator") or return undef; my $twig= new XML::Twig (TwigRoots => { NODE => sub { my ($t, $node) = @_; my $nodeid = $node->att ('id'); !exists ($nodehash {$nodeid}) or croak "Node $nodeid is duplicated!"; $nodehash {$nodeid} = {'nodeid' => $nodeid, 'title' => $node->text, 'rep' => $node->att ('reputation'), 'last' => $node->att ('reputation'), 'date' => $node->att ('createtime') }; $t->purge; } }); $twig->parse ($page); return (\%nodehash); } # # This code was contributed by larryl. I mucked with it a little bit, passing in the # summary hash, and a little reformatting. This is a great idea, and I'm glad it was # contributed. I'da never thought of it... # sub show_histogram { @_ == 3 or croak "Incorrect number of parameters"; my ($binsize, $hrephash, $hsummary) = @_; # # Divide articles into bins based on reputation: # my %bins = (); $bins {floor (($hrephash->{$_}->{rep} + 0.5) / $binsize)}++ foreach (keys %$hrephash); my @bins = sort {$a <=> $b} keys %bins; my $minbin = $bins [0]; # lowest reputation bin my $maxbin = $bins [-1]; # highest reputation bin # # Try to keep histogram on one page: # my $width = 50; my $scale = 1; my $maxrep = $hsummary->{repmax}; if ($maxrep > $width && $maxrep <= ($width * 5)) { $scale = 5; } elsif ($maxrep > ($width*5)) { while (($maxrep / $scale) > $width) { $scale *= 10; } } my $start = $minbin * $binsize; my $end = $start + $binsize - 1; print " Reputation Article Count\n"; print "------------- -------", "-" x 50, "\n"; do { my $count = $bins {$minbin} || 0; my $extra = ($count % $scale) ? '.' : ''; printf "%4d .. 
%4d \[%4d\] %s$extra\n", $start, $end, $count, '#' x ceil ($count / $scale); $start += $binsize; $end += $binsize; } while ($minbin++ < $maxbin); print "\n Scale: #=$scale\n\n" if $scale > 1; } # # Gotta wonder why these aren't core functions... # sub max { my ($a, $b) = @_; return ($a > $b ? $a : $b); } sub min { my ($a, $b) = @_; return ($a < $b ? $a : $b); } </code> <p>xstatswhore.pl is an update of [statswhore.pl] that uses [mirod]s [cpan://XML::Twig] module, and includes [larryl]s histogram code (if you like the histogram part, be sure to thank him. ++[id://65199]). This version has the advantage of being faster by generating fewer hits to the server, and fixing the problem of having exactly a multiple of 50 nodes. And the histogram part is really cool..</p> <p>I posted this as new code so that the old one would still be available. Some people have had trouble installing the various XML modules, or don't wish to.</p> PerlMonks.org Related Scripts J. C. Wren<br> jcwren@jcwren.com
http://www.perlmonks.org/?displaytype=xml;node_id=65220
all

paddle.all(x, axis=None, keepdim=False, name=None)

Computes the logical and of tensor elements over the given dimension.

- Parameters

x (Tensor) – An N-D Tensor; the input data type should be bool.

axis (int|list|tuple, optional) – The dimensions along which the logical and is computed. If None, the logical and is computed over all elements of x and a Tensor with a single element is returned; otherwise the values must be in the range \([-rank(x), rank(x))\). If \(axis[i] < 0\), the dimension to reduce is \(rank + axis[i]\).

keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result Tensor will have one fewer dimension than x unless keepdim is true. Default value is False.

name (str, optional) – The default value is None. Normally there is no need for the user to set this property. For more information, please refer to Name.

- Returns

Results of the logical and on the specified axis of input Tensor x; its data type is bool.

- Return type

Tensor

- Raises

ValueError – If the data type of x is not bool.

TypeError – If the type of axis is not int, list or tuple.

Examples

import paddle
import numpy as np

# x is a bool Tensor with following elements:
#    [[True, False]
#     [True, True]]
x = paddle.assign(np.array([[1, 0], [1, 1]], dtype='int32'))
print(x)
x = paddle.cast(x, 'bool')

# out1 should be [False]
out1 = paddle.all(x)  # [False]
print(out1)

# out2 should be [True, False]
out2 = paddle.all(x, axis=0)  # [True, False]
print(out2)

# keepdim=False, out3 should be [False, True], out.shape should be (2,)
out3 = paddle.all(x, axis=-1)  # [False, True]
print(out3)

# keepdim=True, out4 should be [[False], [True]], out.shape should be (2,1)
out4 = paddle.all(x, axis=1, keepdim=True)
out4 = paddle.cast(out4, 'int32')  # [[False], [True]]
print(out4)
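Since paddle.all is a logical-and reduction, the documented outputs can be sanity-checked without Paddle installed, using Python's built-in all() over nested lists. This is only an illustration of the reduction semantics, not of Paddle's API.

```python
x = [[True, False],
     [True, True]]

# axis=None: logical and over every element -> a single value
overall = all(v for row in x for v in row)          # False

# axis=0: reduce down each column
by_column = [all(row[j] for row in x)
             for j in range(len(x[0]))]             # [True, False]

# axis=1 (same as axis=-1 for a 2-D input): reduce across each row
by_row = [all(row) for row in x]                    # [False, True]

print(overall, by_column, by_row)
```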
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/all_en.html
Introduction

There is a simple experiment you can perform to estimate the value of π, i.e. 3.14159…, using a random number generator. Explicitly, π is defined as the ratio of a circle's circumference to its diameter, \(\pi = C/d\), where \(C\) and \(d\) are the circumference and diameter, respectively. Also recall the area of a circle is \(\pi r^2 = A\).

For this experiment, you'll need a programming language with a random number generator that will generate uniform random numbers, that is, one that randomly picks a number over a uniform distribution. What does a uniform random number mean? Let us consider a discrete example (numbers that are individually separate and distinct): consider the game where you have a dozen unique marbles in a bag. You pick one at random, record the color, and then return it. Over time (i.e., thousands of draws), you will notice that each marble will be drawn approximately the same number of times. That is because, theoretically, there is no bias toward any specific marble in the bag. It's the same concept here: in an ideal world, a random number will be picked without bias toward any other number.

Unfortunately, computers are not truly random. Most, if not all, random number generators are pseudo-random number generators. They rely on mathematical formulations to generate random numbers (which, strictly speaking, can't be random!). However, they have enough depth and complexity to provide a program with a sufficiently random number.

Python's Random Function

To attempt to compute the value of π using this random sampling, consider Python's random number generator. Using the function random.random() you can draw a random floating point number in the range [0.0, 1.0). So, in your preferred Python IDE of choice, import the random module:

import random

After typing random.random() a few dozen times, you should begin to see that you get a random number between 0 and 1.

Consider the circle

Now, consider a geometric representation of a circle.
In a Cartesian plane, a circle of radius \(r\) centered at the origin can be represented by the equation:

\(\sqrt{x^2 + y^2} = r\)

and the area of the circle as:

\(\pi r^2 = A_C\)

In addition, consider a square that is circumscribed around the same circle:

\((2 \times r)^2 = A_S\)

Visually:

Looking at the figure above, assume we pick a uniformly random point that is bounded by the blue square \(S\). What is the chance that the point chosen would be inside the circle \(C\) as well? It is the ratio of the area of the circle \(A_C\) to the area of the square \(A_S\); that is, the probability of selecting a point in both \(S\) and \(C\) is

\(P\{S \& C\} = A_C/A_S\)

\(P\{S \& C\} = {\pi r^2}/{(2r)^2}\)

Of course, assuming \(r > 0\),

\(P\{S \& C\} = {\pi}/{4}\)

So the chance of selecting a point in the square \(S\) that is also inside the circle \(C\) is \({\pi}/{4}\).

Coding the experiment in Python

To test this experiment Pythonically, let us consider a square and circle centered at the origin and, for simplicity, draw only positive numbers from the random number generator:

import math
import random

def GetPoint():
    # Returns True if the point is inside the circle.
    x, y = random.random(), random.random()  # uniform [0.0, 1.0)
    return math.sqrt(x**2 + y**2) <= 1

GetPoint() tests whether a randomly generated point from inside the square is also inside the circle. Since we are only considering one-quarter the area of the circle and one-quarter the area of the square, the reduction cancels out and we are still left with the relationship \(\pi/4\).

To run the experiment and estimate π, we must interpret what happens when a random point is inside the circle. In Python, we can write:

def circle_estimator(n):
    i = 0
    for k in range(n):
        if GetPoint():
            i += 1
    return 4 * i / n  # returns the estimated value of pi

For this experiment to work, we must execute it many times. If we only call GetPoint() once, there is a chance it will say π is 4 or 0! Clearly not a close value to π.
Running the experiment for multiple iterations, the following are the results of the estimated value of π:

Close! However, this is clearly not as efficient as just calling math.pi.
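Putting the article's two functions together with their imports gives a complete, runnable sketch (the seed is an addition, only there to make a single run reproducible):

```python
import math
import random

def GetPoint():
    # True if a uniform point in the unit square lands inside the quarter circle
    x, y = random.random(), random.random()
    return math.sqrt(x**2 + y**2) <= 1

def circle_estimator(n):
    hits = sum(1 for _ in range(n) if GetPoint())
    return 4 * hits / n   # estimate of pi

random.seed(0)                    # reproducibility only
print(circle_estimator(100_000))  # close to 3.14159..., off by roughly 0.01
```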
http://seangrogan.net/2018/05/
Originally published at: … Setting Up GPU Telemetry with NVIDIA Data Center GPU Manager

Is it possible to install it with a Cloud GPU instance?

Yes.

Looking to run the POC Python file from the blog. I have all the relevant prerequisites running on a VM, such as DCGM and nv-hostengine, along with Prometheus. I am unable to run the Python file: it complains about a missing module, and there is no way it can be installed via pip either, as I can't find this module. What are the ways I can get DCGM metrics so I can display GPU metrics via Prometheus in Grafana?

$ python inheritance_reader_example.py
Traceback (most recent call last):
  File "inheritance_reader_example.py", line 2, in <module>
    from DcgmReader import DcgmReader
ImportError: No module named 'DcgmReader'

$ python dictionary_reader_example.py
Traceback (most recent call last):
  File "dictionary_reader_example.py", line 2, in <module>
    from DcgmReader import DcgmReader
ImportError: No module named 'DcgmReader'

You likely need to add the directory containing DcgmReader.py to your PYTHONPATH environment variable. For example, on my system, DcgmReader.py is in /usr/src/dcgm/bindings, so I would run:

PYTHONPATH=/usr/src/dcgm/bindings python dictionary_reader_example.py
https://forums.developer.nvidia.com/t/setting-up-gpu-telemetry-with-nvidia-data-center-gpu-manager/148799
Mobile Corner

In part one of this series, Nick Randolph discusses the use of Expression Blend design-time data in Windows Phone applications.

Microsoft often talks about how designers and developers can work hand in hand with its latest tools. Designers work in Expression Blend and developers work in Visual Studio. In my experience, this rarely happens on real-world projects, however. In the few cases in which you're lucky enough to have a designer that's familiar with Expression Blend, the fairy tale of developers and designers being able to work independently seldom happens.

Projects often start out well. Designers are able to create design-time data in Expression Blend, which they can use to do the initial design of the application. However, as the project evolves and developers are brought in to start wiring up the logic of the application, the design experience typically deteriorates until the designers are left pretty much guessing where or how to position elements on the screen. In this column, I'm going to recap how to use design-time data, and how it's wired up within your Windows Phone application. My next column will expand on this topic and cover disabling design-time data at runtime, whilst preserving the design-time experience.

Design-Time Data in Expression Blend

We're going to start by creating a new project in Expression Blend, which we'll call DesignTimeDataSample (select File, then New Project, then Windows Phone, and then Windows Phone Application). From the Data window, click the Create Sample Data icon and select New Sample Data, as highlighted in Figure 1. When prompted, give the sample data set a name, in this case MockDataSource, and make sure the "Enable sample data when application is running" option is checked. In most cases, it's the designers that will do the initial layout of pages within the application, so they'll want to run the application using design-time data.
I'll come back to this option later to walk through replacing it with dynamically loaded data at runtime. Next, let's customize the design-time data by changing the name and types of properties. Rename the Collection node to Images; change Property1 to Title; change Property2 to ImageUrl; and change the type of data to Image. You should have a Data window that looks similar to Figure 2. I'll close off the role of the designer by dragging the Images node into the middle of the design surface. This should create a ListBox in the middle of the page and databind it to the design-time data. Tidy this up a little by right-clicking anywhere on the ListBox and selecting Auto Size, then Fill to stretch the ListBox to fill the available space.

Structure of Design-Time Data

Before I look at how to replace the design-time data with runtime data, let's first look at how the design-time data is wired up. If you look at the Projects window (Figure 3), you'll see that a number of additional files have been added to the project. The structure of your design-time data, as illustrated in Figure 2 in the data window, is defined by the MockDataSource.xsd and the generated MockDataSource.xaml.cs file. The MockDataSource.xaml file contains the actual sample data. The XAML in Listing 1 includes the Images collection, which in this case has two items, each with Title and ImageUrl properties. What you should also note is that the ImageUrl property values correspond to the images that have also been added to your project. When you added the design-time data to your application, you may have noticed that the App.xaml file was modified. The changes have been emphasized in the following XAML. This code is creating an instance of the design-time data within your application so that it can be referenced as a resource within your application using the key, MockDataSource and, of course, by Expression Blend.
<Application xmlns:SampleData="clr-namespace:Expression.Blend.SampleData.MockDataSource"
             x:Class="DesignTimeDataSample.App" ... >
    <Application.Resources>
        <SampleData:MockDataSource x:Key="MockDataSource" />
    </Application.Resources>
    ...
</Application>

The last step is looking at how the design-time data is wired up on the page; again this was done automatically when you dragged the Images node onto the design surface. The following XAML, taken from MainPage.xaml, highlights the attributes that wire up the design-time data to the ListBox.

<phone:PhoneApplicationPage x:Class="DesignTimeDataSample.MainPage" ... >
    ...
    <Grid x:Name="LayoutRoot"
          DataContext="{Binding Source={StaticResource MockDataSource}}">
        ...
        <Grid x:Name="ContentPanel">
            <ListBox ItemTemplate="{StaticResource ImagesItemTemplate}"
                     ItemsSource="{Binding Images}" />
        </Grid>
    </Grid>
</phone:PhoneApplicationPage>

First, the DataContext of the LayoutRoot (the first element within the page) is set equal to the design-time data using the key, MockDataSource. The key value used to reference the design-time data is irrelevant, so long as it's the same in the App.xaml, where the design-time data is loaded into the application, and wherever the data needs to be referenced. It's important to note that the DataContext flows down to each of the nested elements (unless the DataContext is specified for a nested element). Since neither the ListBox, nor the ContentPanel Grid, have a DataContext explicitly defined, their DataContext will be the same as the LayoutRoot. The contents of the ListBox are wired up to the Images property on the design-time data in the data binding expression for the ItemsSource property.

This column serves as a recap for getting started with design-time data in your Windows Phone application, and how it's wired up within your application. In my next column, I'll expand on what I've presented here to disable the design-time data when the application is run, whilst ensuring it's still available at design time.
http://visualstudiomagazine.com/articles/2012/07/12/design-time-data-for-windows-phone.aspx
In this article, we will cover NumPy.choose(). Along with that, for an overall better understanding, we will look at its syntax and parameters. Then we will see the application of all the theory through a couple of examples. But first, let us try to get a brief understanding of the function through its definition.

NumPy choose() is a function to select options from multiple arrays according to our need. Suppose you have multiple NumPy arrays grouped under a single array, and you want to get values from them collectively at once. In such cases, the function NumPy.choose() comes in handy. Up next, we will see the syntax, followed by the different parameters.

Syntax of Numpy.Choose()

numpy.choose(a, choices, out=None, mode='raise')

which, for suitable index arrays, behaves like:

np.choose(a, c) == np.array([c[a[I]][I] for I in np.ndindex(a.shape)])

Above, we can see the general syntax of our function. It may seem a bit complicated at first sight. But worry not. We will be discussing each of the parameters associated with it in detail.

Parameters of Numpy.Choose()

a: int array
This NumPy array must contain integers between 0 and n-1, where n represents the number of choices, unless a mode (an optional parameter) is associated with it.

choices: sequence of arrays
"a" and all of the choices must be broadcastable to the same shape. If choices is itself an array, then its outermost dimension is taken as defining the "sequence".

out: array
An optional parameter. If it is declared, the output will be inserted into this pre-existing array. It should be of the appropriate dtype and shape.

mode: {'raise' (default), 'wrap', 'clip'}
An optional parameter. The function of this parameter is to decide how index numbers outside [0, n-1] will be treated. It has 3 conditions associated with it:

- "raise": an exception is raised.
- "wrap": the value becomes value mod n.
- "clip": values < 0 are mapped to 0, values > n-1 are mapped to n-1.

Return of Numpy.Choose()

merged array: array
This function returns a merged array as output.
NOTE: If a and each choice array are not broadcastable to the same shape, a "ValueError" is raised.

Numpy Choose Elements of Array Example

We have covered the syntax and parameters of NumPy.choose() in detail. Now let us see them in action through different examples. These examples will help us understand the topic better. We will start at an elementary level and then look at various variations.

#input
import numpy as ppool
a = [[1, 23, 3, 6], [3, 5, 6, 9]]
print(ppool.choose([1, 0, 1, 0], a))

Output:
[ 3 23  6  6]

In the above example, we have first imported the NumPy module. Then we have declared a NumPy array. After which, we have used our general syntax and a print statement to get the desired result. The output here justifies our input. We can understand it as follows: we have declared our index array as [1, 0, 1, 0]. This means that the first element of the result will be the 1st element of sub-array number 2, the second element will be the 2nd element of sub-array number 1, the next term will be the 3rd element of sub-array number 2, and the last element will be the 4th element of sub-array number 1.

Example of Choosing Values from Numpy Array

#input
import numpy as ppool
a = [[1, 23, 3], [3, 5, 6], [45, 78, 90]]
print(ppool.choose([1, 2, 0], a))

Output:
[ 3 78  3]

Again in this example, we have followed a procedure similar to the above example. Here we have 3 sub-arrays instead of 2, and 3 indices instead of 4. The reason behind this is that we have 3 elements in each sub-array. If we declared 4 indices, there would be no 4th element to fill the space, and all we would get is an error. That's something everyone should take care of while working with this function. According to the outputs, the numpy choose() function selected values from the 2nd, 3rd, and 1st sub-arrays, respectively.
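The out-of-range behavior of the mode parameter described earlier can be sketched with a tiny example (the particular arrays here are chosen purely for illustration):

```python
import numpy as np

choices = np.array([10, 20, 30])   # n = 3 choices (three scalars)
idx = np.array([0, 4, -2])         # 4 and -2 fall outside [0, n-1]

# mode='wrap': out-of-range indices are taken mod n -> [0, 1, 1]
print(np.choose(idx, choices, mode='wrap'))   # [10 20 20]

# mode='clip': indices < 0 become 0, indices > n-1 become n-1 -> [0, 2, 0]
print(np.choose(idx, choices, mode='clip'))   # [10 30 10]

# mode='raise' (the default) would raise an error for 4 and -2
```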
Note that this is done by a different function, numpy.random.choice(): given an array (or an integer n, which is treated like np.arange(n)), numpy.random.choice() will select a random option from it. The following example can help you to understand it –

Code –

import numpy as np

x = np.random.choice(5, 3)
print(x)

Output –

[2 3 2] (random output)

Explanation –

First, we import the module numpy in the first line. Then numpy.random.choice draws random integers from 0 up to (but not including) 5, forming an array of length 3.

Must Read

Understanding Python Bubble Sort
Numpy Determinant | What is NumPy.linalg.det()
Numpy log in Python

Conclusion

In this article, we have covered NumPy.choose(). Besides that, we have also looked at its syntax and parameters. For a better understanding, we looked at a couple of examples. We varied the syntax and looked at the output for each case. In the end, we can conclude that NumPy.choose() helps us get elements from different sub-arrays at once. I hope this article was able to clear all doubts. But in case you have any unsolved queries, feel free to write them below in the comment section. Done reading this? Why not read about fliplr next.
https://www.pythonpool.com/numpy-choose/
Man Page Manual Section (3) - page: towupper

NAME
    towupper - convert a wide character to uppercase

SYNOPSIS
    #include <wctype.h>

    wint_t towupper(wint_t wc);

DESCRIPTION
    The towupper() function is the wide-character equivalent of the toupper(3) function. If wc is a wide character, it is converted to uppercase. Characters which do not have case are returned unchanged. If wc is WEOF, WEOF is returned.

RETURN VALUE
    The towupper() function returns the uppercase equivalent of wc, or WEOF if wc is WEOF.

CONFORMING TO
    C99.

NOTES
    The behavior of towupper() depends on the LC_CTYPE category of the current locale. This function is not very appropriate for dealing with Unicode characters, because Unicode knows about three cases: upper, lower and title case.

SEE ALSO
    iswupper(3), towctrans(3), towlower
http://linux.co.uk/documentation/man-pages/subroutines-3/man-page/?section=3&page=towupper
10 February 2012 11:24 [Source: ICIS news]

SINGAPORE (ICIS)--

Reconstruction work will be executed during the shutdown. Sinopec will upgrade the facilities to reach the national standard for energy saving and emission reduction, the source said.

The No 2 plant will be taken offline first, from 1 March through November. Once the unit restarts, the No 1 unit will be shut for the same nine-month duration, the source said.

"Sinopec will have more paraxylene (PX) cargoes to sell, which is likely to earn more money in 2012 as compared with PTA," a major Chinese trader said.

Yangzi Petrochemical has no turnaround plans this year for its biggest and newest PTA
http://www.icis.com/Articles/2012/02/10/9531187/chinas-yangzi-petchem-sets-9-month-turnaround-at-old-pta-units.html
A pipe can be created by calling pipe(int fd[2]). When an error occurs, it sets errno accordingly (check the man pages) and returns -1. When the pipe() function succeeds, a pipe will be created. The two file descriptors are used afterwards: read() takes its data from fd[0] and write() sends its data into fd[1]. Now we have a pipe which leads output to input.

The standard procedure is then to fork the parent. The result will be a child process which inherits the file descriptors of the parent. What should then follow is the closing of the file descriptors which are not used. For example, when the parent wants to read data from the child, the parent closes fd[1] and the child closes fd[0]. When every copy of a pipe's write end has been closed with close(), a read() on the read end returns end-of-file (EOF). Then a process won't block forever trying to read from a pipe into which nothing will ever be written. Also, no confusion is possible: no two file descriptors refer to the same end.

In this context, the dup() and dup2() system calls are interesting. int dup(int oldfd) duplicates a file descriptor and connects the copy to the lowest available descriptor number. So, when standard input is closed with close(0) and dup(fd[0]) is called, it copies fd[0] and connects the copy to descriptor 0. Now your pipe is connected to standard input when you exec() a program!

In this example, we will create a child process which wants to send a message back to the parent. Thus, we need a pipe!
#include <stdio.h>
#include <stdlib.h>      /* for exit() */
#include <unistd.h>
#include <sys/types.h>

void sillychild(int);

int main(void)
{
    char sz_readbuffer[40];    /* where read() puts the bytes */
    int fd[2];                 /* to access the pipe */
    int n_bytes_read;          /* how many bytes read() returns */
    int n_pipe_returnvalue;    /* the integer which pipe() returns */
    pid_t childpid;            /* the unique number of the child process */

    n_pipe_returnvalue = pipe(fd);
    if (n_pipe_returnvalue == -1)
    {
        perror("pipe");
        exit(1);
    }

    childpid = fork();
    if (childpid == -1)
    {
        perror("fork");
        exit(1);
    }

    if (childpid == 0)    /* child process */
    {
        close(fd[0]);     /* close read part of the pipe */
        sillychild(fd[1]);
        exit(0);
    }
    else                  /* parent process */
    {
        close(fd[1]);     /* close write part of the pipe */
        printf("I try to read from %d...\n", fd[0]);
        /* read at most sizeof(sz_readbuffer) bytes; the original asked
           for 80 bytes, which would overflow the 40-byte buffer */
        n_bytes_read = read(fd[0], sz_readbuffer, sizeof(sz_readbuffer));
        printf("Result: %s\n#bytes: %d\n", sz_readbuffer, n_bytes_read);
        close(fd[0]);
    }
    return 0;
}

void sillychild(int fd)
{
    char sz_welcome_message[] = "Hello, world!";

    write(fd, sz_welcome_message, sizeof(sz_welcome_message));
    close(fd);
}
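The close(0)/dup() trick described above, which connects a pipe to a program's standard input before exec(), can also be sketched with Python's os module, which wraps the same POSIX calls (the choice of wc -l as the exec'd program is just an example):

```python
import os

r, w = os.pipe()
pid = os.fork()

if pid == 0:                 # child: becomes "wc -l", reading from the pipe
    os.close(w)              # close the unused write end
    os.dup2(r, 0)            # connect the pipe's read end to standard input
    os.close(r)              # descriptor 0 is all we need now
    os.execvp("wc", ["wc", "-l"])
else:                        # parent
    os.close(r)              # close the unused read end
    os.write(w, b"one\ntwo\nthree\n")
    os.close(w)              # gives the child its EOF
    os.waitpid(pid, 0)       # wc prints "3"
```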
https://www.vankuik.nl/Piping
Suppose I have the following code in a Python unit test:

aw = aps.Request("nv1")
aw2 = aps.Request("nv2", aw)
aw.Clear()

#pseudocode:
assertMethodIsCalled(aw.Clear, lambda: aps.Request("nv2", aw))

You can use the mock library for this. For example, to check that a Qt method was called:

from mock import patch
from PyQt4 import Qt

@patch.object(Qt.QMessageBox, 'aboutQt')
def testShowAboutQt(self, mock):
    self.win.actionAboutQt.trigger()
    self.assertTrue(mock.called)

For your case, it could look like this:

from mock import patch

def testClearWasCalled(self):
    aw = aps.Request("nv1")
    with patch.object(aw, 'Clear') as mock_clear:
        aw2 = aps.Request("nv2", aw)
    mock_clear.assert_called_once_with()   # Clear takes no arguments here

Mock supports quite a few useful features, including ways to patch an object or module, as well as checking that the right thing was called, etc. etc.

Caveat emptor! (Buyer beware!) If you mistype assert_called_with (as assert_called_once or assert_called_wiht) your test may still run, as Mock will think this is a mocked function and happily go along, unless you use autospec=True. For more info read assert_called_once: Threat or Menace.
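On that caveat: a plain Mock() accepts any attribute access, which is why a misspelled assertion could silently pass on older versions of the library. One way to protect yourself, shown here with the standard library's unittest.mock and a made-up Api class:

```python
from unittest.mock import Mock

class Api:
    def fetch(self, key):
        return key

# A plain Mock records calls, so the real assert_* methods work:
m = Mock()
m.fetch(42)
m.fetch.assert_called_once_with(42)   # passes: the call really happened

# A spec'd mock only exposes attributes the real object has,
# so typos in the code under test blow up immediately:
safe = Mock(spec=Api)
safe.fetch(42)
try:
    safe.fetvh(42)                    # misspelled method
except AttributeError as e:
    print("caught:", e)
```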
https://codedump.io/share/9xpmXcRXv4pO/1/assert-that-a-method-was-called-in-a-python-unit-test
On Thu, 22 Sep 2011, Bernhard R. Link wrote:
> Sorry for again joining in late in the discussion, but what is the use
> case of this field exactly?

ifeq ($(DEB_DISTRIBUTION),unstable)
$(error GNOME 3 packages should be uploaded to experimental)
endif

Or rather:

ifneq (,$(filter unstable,$(DEB_DISTRIBUTION)))
$(error GNOME 3 packages should be uploaded to experimental)
endif

That's the main use case I referred to. We can imagine more use cases in the context of distributions other than Debian. Ubuntu, for example, could want to adjust the behaviour when targeting a source package for an older release (since they always use the codename in that field).

> Currently I can only see possible abuses but no proper uses for it, so
> unless there is something I miss, I'd rather request that variable to
> be removed, as it can only harm.

What kind of abuses do you see?

Cheers,
--
Raphaël Hertzog ◈ Debian Developer
Follow my Debian News ▶ (English) ▶ (Français)
https://lists.debian.org/debian-dpkg/2011/09/msg00046.html
Class Migration

This class is given to you when you migrate your database from one version to another. It contains two properties: OldRealm and NewRealm. The NewRealm is the one you should make sure is up to date. It will contain models corresponding to the configuration you've supplied. You can read from the OldRealm and access properties that have been removed from the classes by using the dynamic API.

Namespace: Realms
Assembly: Realm.dll

Syntax

public class Migration

Properties

NewRealm

Declaration

public Realm NewRealm { get; }

OldRealm

Declaration

public Realm OldRealm { get; }
https://docs.mongodb.com/realm-legacy/docs/dotnet/latest/api/reference/Realms.Migration.html
Though it's hard to believe that it has already been five years, the World Wide Web Consortium (W3C) published XML 1.0 as a Recommendation on February 10, 1998, making XML five years old today. Since its first introduction, the Extensible Markup Language has become pervasive nearly everywhere that information is managed. With its companion and follow-on specifications, XML Namespaces, the XML Information Set, XSL Transformations (XSLT), XML Schema, XML Linking, and so on, XML has changed not only the way people publish documents on the Web but also the way people manage information internal to their enterprise. As two original members of the XML Working Group, we have witnessed many changes, some good, some a little disconcerting. In our continuing seven year effort to define XML, it is a good time to reflect for a moment about the hopes which accompanied the development of XML, what has happened since, and what should happen next. Before XML had a name, a team of twelve people came together for a simple reason and with modest expectations. We were all professionals with significant shared experience both with the World Wide Web and with using computers to process and manage information using SGML, the direct ancestor of XML. The Web was becoming ubiquitous — we wanted to use it to publish our SGML-encoded information. The ten-year-old SGML made information reusable; its power was its ability to describe information in a way that was independent of the system it was intended to be used on. But SGML had its problems, it was difficult to learn, its acceptance was limited to documentation professionals, and it was very difficult to use SGML with the new medium known as the Web. Our Working Group formed around the shared belief that the two technologies could be made to work together to make it easier to share and reuse information. 
Working under the auspices of the World Wide Web Consortium (W3C), we began by agreeing upon ten goals which are still listed in the first chapter of the XML specification. The nameless subset of SGML we were developing should be easy to use on the Internet, support a wide variety of applications, be compatible with SGML, and so on. The goal of bringing together the two powerful ideas of the Web and of descriptive markup energized our group and drove us to work evenings and meet by teleconference not only on Tuesday but also Saturday mornings. Whenever we lost our way, someone would ask, “Is this feature necessary for success?” The group worked to transform these goals and experiences into a formal language, a language designed to make sharing reusable information ubiquitous. Just as interchangeable parts drove the Industrial Age, reusable information powers the Information Age. Our shared experience with SGML had taught us that information becomes more valuable when it can be shared and reused. And the Web would let us share information with wider audiences than we ever imagined. We knew that SGML was the best approach for reusing the kinds of information we worked with, but we needed to make SGML easier to learn, understand, and implement, while retaining its core values; in short, SGML fit for the Web. The core value of SGML that we wanted to build into XML is that of descriptive markup. Markup is information inserted into a document that computers use; in the case of SGML, markup takes the form of tags inserted into documents to mark their structure. Descriptive markup uses markup to label the structure and other properties of information in a way that is independent of both the system it's created on and of the processing to be performed on it. We did not want XML to be a fixed set of tags: we wanted XML, like SGML, to be a meta-language. Meta-languages are languages used to create vocabularies that are relevant to their information. 
User defined, processing-independent markup is easier to reuse and can be processed in new and often unexpected ways. Like SGML, XML was intended to help information owners escape being locked in to a particular vendor. Descriptive markup also makes information independent of any particular piece of software. System-dependent and proprietary formats hinder the reuse of information and make the data owner dependent on the vendors whose software can create and manipulate those formats. With an SGML fit for the Web, it would be easy and reliable for computers (and humans) to use descriptive, structural markup in their documents. By tagging data descriptively, the information owner can make documents into semantically rich data and avoid the kind of presentation-oriented markup used just because it looks right, markup we called “crufty tag salad”. To our surprise, we did it. The 25-page XML specification could be easily learned and implemented. XML is a meta-language that allows you to design markup languages that describes what is important to you. XML provides elements and attributes to capture logical structure and enables semantic understanding. Working ‘under the radar’ we were able to balance features against complexity. Our practical litmus test “is it necessary for success?” helped us create a language fit for the Web. Before we knew it, all sorts of people started using XML — best of all, doing so without the permission or guidance of the Working Group. Database people, transaction designers, system engineers, B2B developers all crashed our party. Why, an outsider even got an article on XML published in Time magazine! People flocked to talks given about XML; tools were created, and not by just a few but also by the largest software companies in the world. The press reported, at first with lots of misunderstanding but later with growing insight into how XML could make its mark on the Information Age. 
Rapid growth in the XML community was good luck, because it meant there were a lot more tools than there ever would have been otherwise. But it also had an even more important consequence: information that had been stored in document systems, word processors, and databases was suddenly accessible in the same format and could be processed with the same tools. Remarkably, XML became pervasive nearly everywhere that text-based information is managed by computers!

The forces of Change

Of course, success brought pressures to fix the things the Working Group got wrong. Experience working with XML has shown that parts of the design don't work quite as well as other parts. The mechanisms for declaring and using entities, the rules for processing well-formed documents, and the limited possibilities for nesting full XML documents inside other XML documents are all occasional sources of difficulty. Because XML was so stripped down, it was easy to adopt and extend; because it was so stripped down, adopters almost had to extend it. And we did. And those other people did, too. The original small Working Group with its common, shared experience gave way to lots of groups with differing goals and backgrounds. XML grew stronger for the new insights. You now have XML + XLINK + XSL + Namespaces + Infoset + XML Linking + XPointer Framework + XPointer namespaces + XPointer xptr() + XSLT + XPath + XSL FO + DOM + SAX + stylesheet linking PI + XML Schema + XQuery + XML Encryption + XML Canonicalization + XML Signature + DOM Level 2 + DOM Level 3. And as it grew, it became more complex and confusing. What started as a trim 25-page spec, SGML slimmed down for the Web, has now become a complex set of specs totaling hundreds of pages.
Five years ago, XML tools could be developed by a good programmer in a week; now it may take full-time teams of the best programmers to keep up. Usability has suffered a bit. How do we keep XML useful? Should we add even more functionality, describing it in hundreds of pages of ever-growing complexity? If we do, we risk finding that ever fewer people can understand all the intricacies of XML. Should we merge the major XML specifications into a single much larger spec which defines XML, its information set, its data model, and also provides XML vocabularies for linking, schema validation, query, transformation, etc., etc.? Such a spec could be clearer, but it might be rather daunting. It would certainly be less modular and more difficult to evolve. Should we continue to break different parts of the functionality out into different specifications? This helps keep the individual parts of the puzzle more manageable. If we do, should we rethink how the XML specifications are layered? Should we fix or change how details such as entities are declared, or perhaps eliminate the need for the older DTD syntax? Should we change the rules about where structure can be declared so as to make it easier to nest information? Shall we eliminate or redesign attributes? Or perhaps we need to split everything into two formats, one designed for machine-level processing and the other for humans? All of these could improve XML, but would they address the fundamental complexity that we have created? Our entire computing architecture is in flux. Not only are the computers themselves changing, but new network computing approaches are continuing the evolution of the Web: grids, peer-to-peer. Ever more distributed processes are raising new questions and providing new opportunities to help manage information glut. With these new architectures comes an increased need to interact with data. Large numbers of people must have intimate knowledge of information and of how to build systems to manage it.
If these people cannot easily understand XML and its companion specs, they will find something else which is slimmer and trimmer. As we look back at what we worked so hard to achieve, what remains to be answered is whether we, the community who are defining XML and building the tools to use it, can use what we learned over the last five years as a guide, and re-ask with passion and enthusiasm the question asked so often years ago: "is this necessary for success?"

Dave Hollander, CTO of Contivo, and Michael Sperberg-McQueen, Architecture Domain Lead of the W3C, were members of the W3C Working Group which developed the XML 1.0 specification. They serve as co-chairs of the W3C XML Coordination Group and of the W3C XML Schema Working Group.
http://www.w3.org/2003/02/xml-at-5.html
Question:

I have a Scala class inheriting from SimpleSwingApplication. This class defines a window (with def top = new MainFrame) and instantiates an actor. The actor's code is simple:

    class Deselectionneur extends Actor {
      def act() {
        while (true) {
          receive {
            case a: sTable =>
              Thread.sleep(3000)
              a.peer.changeSelection(0, 0, false, false)
              a.peer.changeSelection(0, 0, true, false)
          }
        }
      }
    }

The main class also uses "Substance", an API for GUI customization (there are no more ugly Swing controls with it!). The actor is triggered when my mouse leaves a given Swing table; the actor then deselects all the rows of the table. The actor behaves very well, but each time it is called, I get this error message:

    org.pushingpixels.substance.api.UiThreadingViolationException: State tracking must be done on Event Dispatch Thread

Do you know how I can remove this error message?

Answer:

You need to move the GUI update onto the EDT, while keeping the sleep off the EDT (sleeping inside the EDT block would freeze all GUI updates for three seconds). Something like (I haven't compiled this):

    case a: sTable =>
      Thread.sleep(3000)  // wait on the actor's thread, not the EDT
      scala.swing.Swing.onEDT {
        a.peer.changeSelection(0, 0, false, false)
        a.peer.changeSelection(0, 0, true, false)
      }

Some background on the EDT can be found here:
http://www.dlxedu.com/askdetail/3/a5efdba689084847330e55003ec4009b.html
ld(1) BSD General Commands Manual ld(1)

NAME
ld -- linker

SYNOPSIS
ld files... [options] [-o outputfile]

DESCRIPTION
The ld command combines several object files and libraries, resolves references, and produces an output file. ld can produce a final linked image (executable, dylib, or bundle), or, with the -r option, produce another object file. If the -o option is not used, the output file produced is named "a.out".

Universal
The linker accepts universal (multiple-architecture) input files, but always creates a "thin" (single-architecture), standard Mach-O output file. The architecture for the output file is specified using the -arch option. If this option is not used, ld attempts to determine the output architecture by examining the object files in command line order. The first "thin" architecture determines that of the output file. If no input object file is a "thin" file, the native 32-bit architecture for the host is used.

Usually, ld is not used directly. Instead the gcc(1) compiler driver invokes ld. The compiler driver can be passed multiple -arch options, and it will create a universal final linked image by invoking ld multiple times and then running lipo(1) to merge the outputs into a universal file.

Layout
Sections created from files with the -sectcreate option will be laid out after sections from .o files. The use of the -order_file option will alter the layout rules above, and move the symbols specified to the start of their section.

Libraries
A static library (aka static archive) is a collection of .o files with a table of contents that lists the global symbols in the .o files. ld will only pull .o files out of a static library if needed to resolve some symbol reference. Unlike traditional linkers, ld will continually search a static library while linking. There is no need to specify a static library multiple times on the command line.

A dynamic library (aka dylib or framework) is a final linked image.
Putting a dynamic library on the command line causes two things: 1) the generated final linked image will have encoded that it depends on that dynamic library, and 2) exported symbols from the dynamic library are used to resolve references. Both dynamic and static libraries are searched as they appear on the command line.

Search paths
ld maintains a list of directories to search for a library or framework to use. The default library search path is /usr/lib then /usr/local/lib. The -L option will add a new library search path. The default framework search path is /Library/Frameworks then /System/Library/Frameworks. (Note: previously, /Network/Library/Frameworks was at the end of the default path. If you need that functionality, you need to explicitly add -F/Network/Library/Frameworks.) The -F option will add a new framework search path. The -Z option will remove the standard search paths. The -syslibroot option will prepend a prefix to all search paths.

Two-level namespace
By default all references resolved to a dynamic library record the library to which they were resolved. At runtime, dyld uses that information to directly resolve symbols. The alternative is to use the -flat_namespace option. With flat namespace, the library is not recorded. At runtime, dyld will search each dynamic library in load order when resolving symbols. This is slower, but more like how other operating systems resolve symbols.

Indirect dynamic libraries
If the command line specifies to link against dylib A, and when dylib A was built it linked against dylib B, then B is considered an indirect dylib. When linking for two-level namespace, ld does not look at indirect dylibs, except when re-exported by a direct dylib. On the other hand, when linking for flat namespace, ld does load all indirect dylibs and uses them to resolve references. Even though indirect dylibs are specified via a full path, ld first uses the specified search paths to locate each indirect dylib.
If one cannot be found using the search paths, the full path is used.

Dynamic libraries undefines
When linking for two-level namespace, ld does not verify that undefines in dylibs actually exist. But when linking for flat namespace, ld does check that all undefines from all loaded dylibs have a matching definition. This is sometimes used to force selected functions to be loaded from a static library.

OPTIONS

Options that control the kind of output

-execute  The default. Produce a mach-o main executable that has file type MH_EXECUTE.

-dylib  Produce a mach-o shared library that has file type MH_DYLIB.

-bundle  Produce a mach-o bundle that has file type MH_BUNDLE.

-r  Merges object files to produce another mach-o object file with file type MH_OBJECT.

-dylinker  Produce a mach-o dylinker that has file type MH_DYLINKER. Only used when building dyld.

-dynamic  The default. Implied by -dylib, -bundle, or -execute.

-static  Produces a mach-o file that does not use dyld. Only used when building the kernel.

-arch arch_name  Specifies which architecture (e.g. ppc, ppc64, i386, x86_64) the output file should be.

-o path  Specifies the name and location of the output file. If not specified, `a.out' is used.

Options that control libraries

-lx  This option tells the linker to search for libx.dylib or libx.a in the library search path. If string x is of the form y.o, then that file is searched for in the same places, but without prepending `lib' or appending `.a' or `.dylib' to the filename.

-weak-lx  This is the same as -lx but forces the library and all references to it to be marked as weak imports. That is, the library is allowed to be missing at runtime.

-weak_library path_to_library  This is the same as listing a file name path to a library on the link line except that it forces the library and all references to it to be marked as weak imports.
-reexport-lx  This is the same as -lx but specifies that all symbols in library x should be available to clients linking to the library being created. This was previously done with a separate -sub_library option.

-reexport_library path_to_library  This is the same as listing a file name path to a library on the link line, and it specifies that all symbols in library path should be available to clients linking to the library being created. This was previously done with a separate -sub_library option.

-lazy-lx  This is the same as -lx but it is only for shared libraries, and the linker will construct glue code so that the shared library is not loaded until the first function in it is called.

-lazy_library path_to_library  This is the same as listing a file name path to a shared library on the link line except that the linker will construct glue code so that the shared library is not loaded until the first function in it is called.

-upward-lx  This is the same as -lx but specifies that the dylib is an upward dependency.

-upward_library path_to_library  This is the same as listing a file name path to a library on the link line but also marks the dylib as an upward dependency.

-Ldir  Add dir to the list of directories in which to search for libraries. Directories specified with -L are searched in the order they appear on the command line and before the default search path. In Xcode4 and later, there can be a space between the -L and the directory.

-Z  Do not search the standard directories when searching for libraries and frameworks.

-syslibroot rootdir  Prepend rootdir to all search paths when searching for libraries or frameworks.

-search_paths_first  This is now the default (in Xcode4 tools). When processing -lx the linker now searches each directory in its library search paths for `libx.dylib' then `libx.a' before moving on to the next path in the library search path.

-search_dylibs_first  Changes the searching behavior for libraries.
The default is that when processing -lx the linker searches each directory in its library search paths for `libx.dylib' then `libx.a'. This option changes the behavior to first search for a file of the form `libx.dylib' in each directory in the library search path, then a file of the form `libx.a' is searched for in the library search paths. This option restores the search behavior of the linker prior to Xcode4.

-framework name[,suffix]  This option tells the linker to search for `name.framework/name' in the framework search path. If the optional suffix is specified, the framework is first searched for the name with the suffix and then without (e.g. look for `name.framework/name_suffix' first; if not there, try `name.framework/name').

-weak_framework name[,suffix]  This is the same as -framework name[,suffix] but forces the framework and all references to it to be marked as weak imports.

-reexport_framework name[,suffix]  This is the same as -framework name[,suffix] but also specifies that all symbols in that framework should be available to clients linking to the library being created. This was previously done with a separate -sub_umbrella option.

-lazy_framework name[,suffix]  This is the same as -framework name[,suffix] except that the linker will construct glue code so that the framework is not loaded until the first function in it is called. You cannot directly access data or Objective-C classes in a framework linked this way.

-upward_framework name[,suffix]  This is the same as -framework name[,suffix] but also specifies that the framework is an upward dependency.

-Fdir  Add dir to the list of directories in which to search for frameworks. Directories specified with -F are searched in the order they appear on the command line and before the default search path. In Xcode4 and later, there can be a space between the -F and the directory.
Options that control additional content

-sectcreate segname sectname file  The section sectname in the segment segname is created from the contents of file file. The combination of segname and sectname must be unique; there cannot already be a section (segname,sectname) from any other input.

-filelist file[,dirname]  Specifies that the linker should link the files listed in file. This is an alternative to listing the files on the command line. The file names are listed one per line, separated only by newlines. (Spaces and tabs are assumed to be part of the file name.) If the optional directory name dirname is specified, it is prepended to each name in the list file.

-dtrace file  Enables dtrace static probes when producing a final linked image. The file file must be a DTrace script which declares the static probes.

Options that control optimizations

-dead_strip  Remove functions and data that are unreachable by the entry point or exported symbols.

-order_file file  Alters the order in which functions and data are laid out. For each section in the output file, any symbol in that section that is specified in the order file file is moved to the start of its section and laid out in the same order as in the order file file. Order files are text files with one symbol name per line. Lines starting with a # are comments. A symbol name may be optionally preceded with its object file leaf name and a colon (e.g. foo.o:_foo). This is useful for static functions/data that occur in multiple files. A symbol name may also be optionally preceded with the architecture (e.g. ppc:_foo or ppc:foo.o:_foo). This enables you to have one order file that works for multiple architectures. Literal c-strings may be ordered by quoting the string (e.g. "Hello, world\n") in the order file.
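As a hedged illustration of the -order_file syntax described above (the file name hot.order and the symbol names _main and _setup are invented for the example; the ld invocation is shown commented out because it requires Apple's toolchain):

```shell
# Create an order file: one symbol per line, '#' lines are comments.
cat > hot.order <<'EOF'
# hot startup path first
_main
# a static symbol, disambiguated by its object file leaf name
init.o:_setup
EOF

# Pass the order file when producing the final linked image (Apple ld):
# ld main.o init.o -lSystem -order_file hot.order -o app
```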
-no_order_inits  When the -order_file option is not used, the linker lays out functions in object file order, and it moves all initializer routines to the start of the __text section and terminator routines to the end. Use this option to disable the automatic rearrangement of initializers and terminators.

-no_order_data  By default the linker reorders global data in the __DATA segment so that all global variables that dyld will need to adjust at launch time will be early in the __DATA segment. This reduces the number of dirty pages at launch time. This option disables that optimization.

-macosx_version_min version  This is set to indicate the oldest Mac OS X version that the output is to be used on. Specifying a later version enables the linker to assume features of that OS in the output file. The format of version is a Mac OS X version number such as 10.4 or 10.5.

-ios_version_min version  This is set to indicate the oldest iOS version that the output is to be used on. Specifying a later version enables the linker to assume features of that OS in the output file. The format of version is an iOS version number such as 3.1 or 4.0.

-image_base address  Specifies the preferred load address for a dylib or bundle. The argument address is a hexadecimal number with an optional leading 0x. By choosing non-overlapping addresses for all dylibs and bundles that a program loads, launch time can be improved because dyld will not need to "rebase" the image (that is, adjust pointers within the image to work at the loaded address). It is often easier to not use this option, but instead use the rebase(1) tool and give it a list of dylibs. It will then choose non-overlapping addresses for the list and rebase them all. This option is also called -seg1addr for compatibility.
-no_implicit_dylibs  When creating a two-level namespace final linked image, normally the linker will hoist up public dylibs that are implicitly linked to make the two-level namespace encoding more efficient for dyld. For example, Cocoa re-exports AppKit and AppKit re-exports Foundation. If you link with -framework Cocoa and use a symbol from Foundation, the linker will implicitly add a load command to load Foundation and encode the symbol as coming from Foundation. If you use this option, the linker will not add a load command for Foundation and will encode the symbol as coming from Cocoa. Then at runtime dyld will have to search Cocoa and AppKit before finding the symbol in Foundation.

-exported_symbols_order file  When targeting Mac OS X 10.6 or later, the format of the exported symbol information can be optimized to make lookups of popular symbols faster. This option is used to pass a file containing a list of the symbols most frequently used by clients of the dynamic library being built. Not all exported symbols need to be listed.

-no_zero_fill_sections  By default the linker moves all zero fill sections to the end of the __DATA segment and configures them to use no space on disk. This option suppresses that optimization, so zero-filled data occupies space on disk in a final linked image.

-merge_zero_fill_sections  Causes all zero-fill sections in the __DATA segment to be merged into one __zerofill section.

Options when creating a dynamic library (dylib)

-install_name name  Sets an internal "install path" (LC_ID_DYLIB) in a dynamic library. Any clients linked against the library will record that path as the way dyld should locate this library. If this option is not specified, then the -o path will be used. This option is also called -dylib_install_name for compatibility.

-mark_dead_strippable_dylib  Specifies that the dylib being built can be dead stripped by any client. That is, the dylib has no initialization side effects.
So if a client links against the dylib but never uses any symbol from it, the linker can optimize away the use of the dylib.

-compatibility_version number  Specifies the compatibility version number of the library. When a library is loaded by dyld, the compatibility version is checked, and if the program's version is greater than the library's version, it is an error. The format of number is X[.Y[.Z]] where X must be a positive non-zero number less than or equal to 65535, and .Y and .Z are optional and if present must be non-negative numbers less than or equal to 255. If the compatibility version number is not specified, it has a value of 0 and no checking is done when the library is used. This option is also called -dylib_compatibility_version for compatibility.

-current_version number  Specifies the current version number of the library. The current version of the library can be obtained programmatically by the user of the library so it can determine exactly which version of the library it is using. The format of number is X[.Y[.Z]] where X must be a positive non-zero number less than or equal to 65535, and .Y and .Z are optional and if present must be non-negative numbers less than or equal to 255. If the version number is not specified, it has a value of 0. This option is also called -dylib_current_version for compatibility.

Options when creating a main executable

-pie  This makes a special kind of main executable that is position independent (PIE). On Mac OS X 10.5 and later, the OS will load a PIE at a random address each time it is executed. You cannot create a PIE from .o files compiled with -mdynamic-no-pic. That means the codegen is less optimal, but the address randomization adds some security. When targeting Mac OS X 10.7 or later, PIE is the default for main executables.

-no_pie  Do not make a position independent executable (PIE). This is the default when targeting 10.6 and earlier.
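A hedged sketch of the dylib options above in use. The file names (foo.c, libfoo.1.dylib) are invented for the example, and this assumes an Apple toolchain; in practice the clang driver would normally pass these flags through to ld for you:

```shell
# Compile foo.c, then link it into a versioned dylib with Apple's ld.
# -install_name is the path clients will record for dyld;
# -compatibility_version is checked by dyld at load time;
# -current_version is queryable by the library's users at runtime.
clang -c foo.c -o foo.o
ld foo.o -dylib -lSystem \
    -install_name /usr/local/lib/libfoo.1.dylib \
    -compatibility_version 1.0 \
    -current_version 1.2.3 \
    -o libfoo.1.dylib
```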
-pagezero_size size  By default the linker creates an unreadable segment starting at address zero named __PAGEZERO. Its existence will cause a bus error if a NULL pointer is dereferenced. The argument size is a hexadecimal number with an optional leading 0x. If size is zero, the linker will not generate a page zero segment. By default on 32-bit architectures the page zero size is 4KB. On 64-bit architectures, the default size is 4GB. The ppc64 architecture has some special cases. Since Mac OS X 10.4 did not support 4GB page zero programs, the default page zero size for ppc64 will be 4KB unless -macosx_version_min is 10.5 or later. Also, the -mdynamic-no-pic codegen model for ppc64 will only work if the code is placed in the lower 2GB of the address space, so if the linker detects any such code, the page zero size is set to 4KB and a new unreadable trailing segment is created after the code, filling up the lower 4GB.

-stack_size size  Specifies the maximum stack size for the main thread in a program. Without this option a program has an 8MB stack. The argument size is a hexadecimal number with an optional leading 0x. The size should be an even multiple of 4KB; that is, the last three hexadecimal digits should be zero.

-allow_stack_execute  Marks the executable so that all stacks in the task will be given stack execution privilege. This includes pthread stacks.

Options when creating a bundle

-bundle_loader executable  This specifies the executable that will be loading the bundle output file being linked. Undefined symbols from the bundle are checked against the specified executable as if it were one of the dynamic libraries the bundle was linked with.

Options when creating an object file

-keep_private_externs  Don't turn private external (aka visibility=hidden) symbols into static symbols, but rather leave them as private external in the resulting object file.

-d  Force definition of common symbols. That is, transform tentative definitions into real definitions.
Options that control symbol resolution

-exported_symbols_list filename  The specified filename contains a list of global symbol names that will remain as global symbols in the output file. All other global symbols will be treated as if they were marked as __private_extern__ (aka visibility=hidden) and will not be global in the output file. The symbol names listed in filename must be one per line. Leading and trailing white space are not part of the symbol name. Lines starting with # are ignored, as are lines with only white space. Some wildcards (similar to shell file matching) are supported. The * matches zero or more characters. The ? matches one character. [abc] matches one character which must be an 'a', 'b', or 'c'. [a-z] matches any single lower case letter from 'a' to 'z'.

-exported_symbol symbol  The specified symbol is added to the list of global symbol names that will remain as global symbols in the output file. This option can be used multiple times. For short lists, this can be more convenient than creating a file and using -exported_symbols_list.

-unexported_symbols_list file  The specified filename contains a list of global symbol names that will not remain as global symbols in the output file. The symbols will be treated as if they were marked as __private_extern__ (aka visibility=hidden) and will not be global in the output file. The symbol names listed in filename must be one per line. Leading and trailing white space are not part of the symbol name. Lines starting with # are ignored, as are lines with only white space. Some wildcards (similar to shell file matching) are supported. The * matches zero or more characters. The ? matches one character. [abc] matches one character which must be an 'a', 'b', or 'c'. [a-z] matches any single lower case letter from 'a' to 'z'.

-unexported_symbol symbol  The specified symbol is added to the list of global symbol names that will not remain as global symbols in the output file.
This option can be used multiple times. For short lists, this can be more convenient than creating a file and using -unexported_symbols_list.

-reexported_symbols_list file  The specified filename contains a list of symbol names that are implemented in a dependent dylib and should be re-exported through the dylib being created.

-alias symbol_name alternate_symbol_name  Create an alias named alternate_symbol_name for the symbol symbol_name. By default the alias symbol has global visibility. This option was previously the -idef:indir option.

-alias_list filename  The specified filename contains a list of aliases. The symbol name and its alias are on one line, separated by whitespace. Lines starting with # are ignored.

-flat_namespace  Alters how symbols are resolved at build time and runtime. With -two_levelnamespace (the default), the linker only searches dylibs on the command line for symbols, and records in which dylib they were found. With -flat_namespace, the linker searches all dylibs on the command line and all dylibs those original dylibs depend on. The linker does not record which dylib an external symbol came from, so at runtime dyld again searches all images and uses the first definition it finds. In addition, any undefines in loaded flat_namespace dylibs must be resolvable at build time.

-u symbol_name  Specifies that symbol symbol_name must be defined for the link to succeed. This is useful to force selected functions to be loaded from a static library.

-U symbol_name  Specifies that it is ok for symbol_name to have no definition. With -two_levelnamespace, the resulting symbol will be marked dynamic_lookup, which means dyld will search all loaded images.

-undefined treatment  Specifies how undefined symbols are to be treated. Options are: error, warning, suppress, or dynamic_lookup. The default is error.

-rpath path  Add path to the runpath search path list for the image being created.
At runtime, dyld uses the runpath when searching for dylibs whose load path begins with @rpath/.

-commons treatment  Specifies how commons (aka tentative definitions) are resolved with respect to dylibs. Options are: ignore_dylibs, use_dylibs, error. The default is ignore_dylibs, which means the linker will turn a tentative definition in an object file into a real definition and not even check dylibs for conflicts. The use_dylibs option means the linker should check linked dylibs for definitions and use them to replace tentative definitions from object files. The error option means the linker should issue an error whenever a tentative definition in an object file conflicts with an external symbol in a linked dylib. See also -warn_commons.

Options for introspecting the linker

-why_load  Log why each object file in a static library is loaded. That is, what symbol was needed. Also called -whyload for compatibility.

-why_live symbol_name  Logs a chain of references to symbol_name. Only applicable with -dead_strip. It can help debug why something that you think should be dead-strip removed is not removed.

-print_statistics  Logs information about the amount of memory and time the linker used.

-t  Logs each file (object, archive, or dylib) the linker loads. Useful for debugging problems with search paths where the wrong library is loaded.

-whatsloaded  Logs just the object files the linker loads.

-order_file_statistics  Logs information about the processing of a -order_file.

-map map_file_path  Writes a map file to the specified path which details all symbols and their addresses in the output image.

Options for controlling symbol table optimizations

-S  Do not put debug information (STABS or DWARF) in the output file.

-x  Do not put non-global symbols in the output file's symbol table. Non-global symbols are useful when debugging and getting symbol names in back traces, but are not used at runtime.
If -x is used with -r, non-global symbol names are not removed, but instead replaced with a unique, dummy name that will be automatically removed when linked into a final linked image. This allows dead code stripping, which uses symbols to break up code and data, to work properly, and provides the security of having source symbol names removed.

-non_global_symbols_strip_list filename  The specified filename contains a list of non-global symbol names that should be removed from the output file's symbol table. All other non-global symbol names will remain in the output file's symbol table. See -exported_symbols_list for syntax and use of wildcards.

-non_global_symbols_no_strip_list filename  The specified filename contains a list of non-global symbol names that should remain in the output file's symbol table. All other symbol names will be removed from the output file's symbol table. See -exported_symbols_list for syntax and use of wildcards.

Rarely used Options

-v  Prints the version of the linker.

-allow_heap_execute  Normally i386 main executables will be marked so that the Mac OS X 10.7 and later kernel will only allow pages with the x-bit to execute instructions. This option overrides that behavior and allows instructions on any page to be executed.

-fatal_warnings  Causes the linker to exit with a non-zero value if any warnings were emitted.

-no_eh_labels  Normally in -r mode, the linker produces .eh labels on all FDEs in the __eh_frame section. This option suppresses those labels. Those labels are not needed by the Mac OS X 10.6 linker but are needed by earlier linker tools.

-warn_compact_unwind  When producing a final linked image, the linker processes the __eh_frame section and produces an __unwind_info section. Most FDE entries in the __eh_frame can be represented by a 32-bit value in the __unwind_info section. This option issues a warning for any function whose FDE cannot be expressed in the compact unwind format.
-warn_weak_exports  Issue a warning if the resulting final linked image contains weak external symbols. Such symbols require dyld to do extra work at launch time to coalesce those symbols.

-objc_gc_compaction  Marks the Objective-C image info in the final linked image with the bit that says that the code was built to work with compacting garbage collection.

-objc_gc  Verifies all code was compiled with -fobjc-gc or -fobjc-gc-only.

-objc_gc_only  Verifies all code was compiled with -fobjc-gc-only.

-dead_strip_dylibs  Remove dylibs that are unreachable by the entry point or exported symbols. That is, suppresses the generation of load commands for dylibs which supplied no symbols during the link. This option should not be used when linking against a dylib which is required at runtime for some indirect reason, such as the dylib having an important initializer.

-allow_sub_type_mismatches  Normally the linker considers different cpu-subtypes for ARM (e.g. armv4t and armv6) to be different architectures that cannot be mixed at build time. This option relaxes that requirement, allowing you to mix object files compiled for different ARM subtypes.

-no_uuid  Do not generate an LC_UUID load command in the output file.

-root_safe  Sets the MH_ROOT_SAFE bit in the mach header of the output file.

-setuid_safe  Sets the MH_SETUID_SAFE bit in the mach header of the output file.

-interposable  Indirects access to all exported symbols when creating a dynamic library.

-init symbol_name  The specified symbol_name will be run as the first initializer. Only used when creating a dynamic library.

-sub_library library_name  The specified dylib will be re-exported. For example, the library_name for /usr/lib/libobjc_profile.A.dylib would be libobjc. Only used when creating a dynamic library.

-sub_umbrella framework_name  The specified framework will be re-exported. Only used when creating a dynamic library.
-allowable_client name  Restricts what can link against the dynamic library being created. By default any code can link against any dylib. But if a dylib is supposed to be private to a small set of clients, you can formalize that by adding a -allowable_client for each client. If a client is libfoo.1.dylib, its -allowable_client name would be "foo". If a client is Foo.framework, its -allowable_client name would be "Foo". For the degenerate case where you want no one to ever link against a dylib, you can set the -allowable_client to "!".

-client_name name  Enables a bundle to link against a dylib that was built with -allowable_client. The name specified must match one of the -allowable_client names specified when the dylib was created.

-umbrella framework_name  Specifies that the dylib being linked is re-exported through an umbrella framework of the specified name.

-headerpad size  Specifies the minimum space for future expansion of the load commands. Only useful if you intend to run install_name_tool to alter the load commands later. Size is a hexadecimal number.

-headerpad_max_install_names  Automatically adds space for future expansion of load commands such that all paths could expand to MAXPATHLEN. Only useful if you intend to run install_name_tool to alter the load commands later.

-bind_at_load  Sets a bit in the mach header of the resulting binary which tells dyld to bind all symbols when the binary is loaded, rather than lazily.

-force_flat_namespace  Sets a bit in the mach header of the resulting binary which tells dyld to not only use flat namespace for the binary, but force flat namespace binding on all dylibs and bundles loaded in the process. Can only be used when linking main executables.

-sectalign segname sectname value  The section named sectname in the segment segname will have its alignment set to value, where value is a hexadecimal number that must be an integral power of 2.
-stack_addr address
        Specifies the initial address of the stack pointer, where address
        is a hexadecimal number rounded to a page boundary.

-segprot segname max_prot init_prot
        Specifies the maximum and initial virtual memory protection of the
        named segment, segname, to be max_prot and init_prot, respectively.
        The values for max_prot and init_prot are any combination of the
        characters `r' (for read), `w' (for write), `x' (for execute) and
        `-' (no access).

-seg_addr_table filename
        Specifies a file containing base addresses for dynamic libraries.
        Each line of the file is a hexadecimal base address followed by
        whitespace, then the install name of the corresponding dylib. The
        # character denotes a comment.

-segs_read_write_addr address
        Allows a dynamic library to be built where the read-only and
        read-write segments are not contiguous. The address specified is a
        hexadecimal number that indicates the base address for the
        read-write segments.

-segs_read_only_addr address
        Allows a dynamic library to be built where the read-only and
        read-write segments are not contiguous. The address specified is a
        hexadecimal number that indicates the base address for the
        read-only segments.

-segaddr name address
        Specifies the starting address of the segment named name to be
        address. The address must be a hexadecimal number that is a
        multiple of the 4K page size.

-seg_page_size name size
        Specifies the page size used by the specified segment. By default
        the page size is 4096 for all segments. The linker will lay out
        segments such that the size of a segment is always an even
        multiple of its page size.

-dylib_file install_name:file_name
        Specifies that a dynamic shared library is in a different location
        than its standard location. install_name specifies the path where
        the library normally resides. file_name specifies the path of the
        library you want to use instead. For example, if you link to a
        library that depends upon the dynamic library libsys and you have
        libsys installed in a nondefault location, you would use this
        option: -dylib_file /lib/libsys_s.A.dylib:/me/lib/libsys_s.A.dylib.
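As a hedged illustration of the segment-layout and library-substitution options above (file and section names are hypothetical; macOS-only, so treat it as a sketch rather than a tested invocation):

```shell
# Hypothetical sketch: align a custom section to a 0x1000 boundary and
# point the link at a libsys copy installed in a nondefault location.
clang -o tool main.o \
      -Wl,-sectalign,__DATA,__mydata,0x1000 \
      -Wl,-dylib_file,/lib/libsys_s.A.dylib:/me/lib/libsys_s.A.dylib
```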
-prebind
        The created output file will be in the prebound format. This was
        used in Mac OS X 10.3 and earlier to improve launch performance.

-weak_reference_mismatches treatment
        Specifies what to do if a symbol is weak-imported in one object
        file but not weak-imported in another. The valid treatments are:
        error, weak, or non-weak. The default is non-weak.

-read_only_relocs treatment
        Enables the use of relocations which will cause dyld to modify
        (copy-on-write) read-only pages. The compiler will normally never
        generate such code.

-force_cpusubtype_ALL
        This is only applicable with -arch ppc. It tells the linker to
        ignore the PowerPC cpu requirements (e.g. G3, G4 or G5) encoded in
        the object files and mark the resulting binary as runnable on any
        PowerPC cpu.

-dylinker_install_name path
        Only used when building dyld.

-no_arch_warnings
        Suppresses warning messages about files that have the wrong
        architecture for the -arch flag.

-arch_errors_fatal
        Turns warnings about files that have the wrong architecture for
        the -arch flag into errors.

-e symbol_name
        Specifies the entry point of a main executable. By default the
        entry name is "start", which is found in crt1.o, which contains
        the glue code needed to set up and call main().

-w      Suppresses all warning messages.

-final_output name
        Specifies the install name of a dylib if -install_name is not
        used. This option is used by the gcc driver when it is invoked
        with multiple -arch arguments.

-arch_multiple
        Specifies that the linker should augment error and warning
        messages with the architecture name. This option is used by the
        gcc driver when it is invoked with multiple -arch arguments.

-twolevel_namespace_hints
        Specifies that hints should be added to the resulting binary that
        can help speed up runtime binding by dyld, as long as the
        libraries being linked against have not changed.

-dot path
        Create a file at the specified path containing a graph of symbol
        dependencies. The .dot file can be viewed in GraphViz.
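For the entry-point option, a minimal hedged sketch (the symbol and file names are hypothetical, and whether crt startup glue is required depends on the target, so treat this as illustrative only):

```shell
# Hypothetical sketch: link a main executable whose entry point is
# _my_entry instead of the default "start" from crt1.o.
ld -o tool main.o -lSystem -e _my_entry
```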
-keep_relocs
        Add section-based relocation records to a final linked image.
        These relocations are ignored at runtime by dyld.

-warn_stabs
        Print a warning when the linker cannot do a BINCL/EINCL
        optimization because the compiler put a bad stab symbol inside a
        BINCL/EINCL range.

-warn_commons
        Print a warning whenever a tentative definition in an object file
        is found and an external symbol by the same name is also found in
        a linked dylib. This often means that the extern keyword is
        missing from a variable declaration in a header file.

-read_only_stubs
        [i386 only] Makes the __IMPORT segment of a final linked image
        read-only. This option makes a program slightly more secure in
        that the JMP instructions in the i386 fast stubs cannot be easily
        overwritten by malicious code. The downside is that dyld must use
        mprotect() to temporarily make the segment writable while it is
        binding the stubs.

-slow_stubs
        [i386 only] Instead of using single JMP instruction stubs, the
        linker creates code in the __TEXT segment which calls through a
        lazy pointer in the __DATA segment.

-no_function_starts
        By default the linker creates a compressed table of function start
        addresses in the LINKEDIT of the final linked image. This option
        disables that behavior.

-no_version_load_command
        By default the linker creates a load command in final linked
        images that contains the -macosx_version_min. This option disables
        that behavior.

-no_objc_category_merging
        By default, when producing a final linked image, the linker will
        optimize Objective-C classes by merging any categories on a class
        into the class. Both the class and its categories must be defined
        in the image being linked for the optimization to occur. Using
        this option disables that behavior.
-object_path_lto filename
        When performing Link Time Optimization (LTO) and a temporary
        mach-o object file is needed, if this option is used, the
        temporary file will be stored at the specified path and remain
        after the link is complete. Without the option, the linker picks a
        path and deletes the object file before the linker tool completes,
        thus tools such as the debugger or dsymutil will not be able to
        access the DWARF debug info in the temporary object file.

-page_align_data_atoms
        During development, this option can be used to space out all
        global variables so each is on a separate page. This is useful
        when analyzing dirty and resident pages. The information can then
        be used to create an order file to cluster commonly used/dirty
        globals onto the same page(s).

Obsolete Options

-segalign value
        All segments must be page aligned. This option is obsolete.

-seglinkedit
        Object files (MH_OBJECT) with a LINKEDIT segment are no longer
        supported. This option is obsolete.

-noseglinkedit
        This is the default. This option is obsolete.

-fvmlib
        Fixed VM shared libraries (MH_FVMLIB) are no longer supported.
        This option is obsolete.

-preload
        Preload executables (MH_PRELOAD) are no longer supported. This
        option is obsolete.

-sectobjectsymbols segname sectname
        Adding a local label at a section start is no longer supported.
        This option is obsolete.

-nofixprebinding
        The MH_NOFIXPREBINDING bit of mach_headers has been ignored since
        Mac OS X 10.3.9. This option is obsolete.

-noprebind_all_twolevel_modules
        Multi-modules in dynamic libraries have been ignored at runtime
        since Mac OS X 10.4.0. This option is obsolete.

-prebind_all_twolevel_modules
        Multi-modules in dynamic libraries have been ignored at runtime
        since Mac OS X 10.4.0. This option is obsolete.

-prebind_allow_overlap
        When using -prebind, the linker allows overlapping by default, so
        this option is obsolete.
-noprebind
        LD_PREBIND is no longer supported as a way to force on prebinding,
        so there no longer needs to be a command line way to override
        LD_PREBIND. This option is obsolete.

-sect_diff_relocs treatment
        This option was an attempt to warn about linking .o files compiled
        without -mdynamic-no-pic into a main executable, but the false
        positive rate generated too much noise to make the option useful.
        This option is obsolete.

-run_init_lazily
        This option was removed in Mac OS X 10.2.

-single_module
        This is now the default, so it does not need to be specified.

-multi_module
        Multi-modules in dynamic libraries have been ignored at runtime
        since Mac OS X 10.4.0. This option is obsolete.

-no_dead_strip_inits_and_terms
        The linker never dead strips initialization and termination
        routines. They are considered "roots" of the dead strip graph.

-A basefile
        Obsolete incremental load format. This option is obsolete.

-b      Used with the -A option to strip the base file's symbols. This
        option is obsolete.

-M      Obsolete option to produce a load map. Use the -map option
        instead.

-Sn     Don't strip any symbols. This is the default. This option is
        obsolete.

-Si     Optimize stabs debug symbols to remove duplicates. This is the
        default. This option is obsolete.

-Sp     Write minimal stabs which cause the debugger to open and read the
        original .o file for full stabs. This style of debugging is
        obsolete in Mac OS X 10.5. This option is obsolete.

-X      Strip local symbols that begin with 'L'. This is the default.
        This option is obsolete.

-s      Completely strip the output, including removing the symbol table.
        This file format variant is no longer supported. This option is
        obsolete.

-m      Don't treat multiple definitions as an error. This is no longer
        supported. This option is obsolete.

-ysymbol
        Display each file in which symbol is used. This was previously
        used to debug where an undefined symbol was used, but the linker
        now automatically prints out all usages.
        The -why_live option can also be used to display what kept a
        symbol from being dead stripped. This option is obsolete.

-Y number
        Used to control how many occurrences of each symbol specified
        with -y would be shown. This option is obsolete.

-nomultidefs
        Only used when linking an umbrella framework. Sets the
        MH_NOMULTIDEFS bit in the mach_header. The MH_NOMULTIDEFS bit has
        been obsolete since Mac OS X 10.4. This option is obsolete.

-multiply_defined_unused treatment
        Previously provided a way to warn or error if any of the symbol
        definitions in the output file matched any definitions in a
        dynamic library being linked. This option is obsolete.

-multiply_defined treatment
        Previously provided a way to warn or error if any of the symbols
        used from a dynamic library were also available in another linked
        dynamic library. This option is obsolete.

-private_bundle
        Previously prevented errors when -flat_namespace, -bundle, and
        -bundle_loader were used and the bundle contained a definition
        that conflicted with a symbol in the main executable. The linker
        no longer errors on such conflicts. This option is obsolete.

-noall_load
        This is the default. This option is obsolete.

-seg_addr_table_filename path
        Use path instead of the install name of the library for matching
        an entry in the seg_addr_table. This option is obsolete.

-sectorder segname sectname orderfile
        Replaced by the more general -order_file option.

-sectorder_detail
        Produced extra logging about which sectorder entries were used.
        Replaced by -order_file_statistics. This option is obsolete.

SEE ALSO
        as(1), ar(1), cc(1), nm(1), otool(1), lipo(1), arch(3), dyld(3),
        Mach-O(5), strip(1), rebase(1)

Darwin                        March 7, 2011                        Darwin
Mac OS X 10.8 - Generated Mon Aug 20 07:23:16 CDT 2012
http://www.manpagez.com/man/1/ld/
Join devRant Search - "lonely" - - Working in the midnight at home, feeling lonely... Hearing commit sound from the world. Telling you that you are not alone8 - Ladies and Gentlemen I've upgraded from being lonely to In a relationship 'Cause I finally found the girl of my dreams. 💑 HaHa April Fool's Day 😂😂 Ouch!6 - devRant should make their own version of Tinder. All this lonely devs could really use a partner to cheer up their life 😄12 - - - And when I was busy wasting my time on my girlfriend who is my ex now, my friends were busy coding an AI chat-bot. Now, I use their chat-bot to talk to when lonely. Moral : Girlfriends ditch you.... code doesn't. Love code.15 - - Assembly: He’s the nerd. He speaks very quickly and uses short sentences. Very few people talk to him. He’s considered to be an autist asperger by a majority of the class because he finishes the exams so quickly it’s insane and he faces a lot of difficulties in speaking with others. He’s at school but already dressed like an engineer. Ada: She’s a foureyes nerd. When she gets the answer she’s doesn’t make any mistake. Ada often corrects the teacher when she writes a line a little ambiguous. She’s building a rocketship in her backyard and she’s always speaking about this weird hobby. Python: He’s Mr Popular. He likes skate, brags about all the parties he’s invited to. He’s good in all the subjects taught in class but he’ll do them a bit slower than the others. Everyone loves him because he explainsthings so well, sometimes the teacher herself asks Python to explain some part of the course. He’s dressed with a hoodie, a baggy and glasses on the top of the head ;) Java: She is one of the toppers of the class and very popular. She’s very good in all the topics. The teacher loves her but she’s a very talkative person. Scala/Kotlin: They are twin sisters and the best friends of Java. Unfortunately, they are not as popular and it’s often Java who takes the lead in the group. 
It’s very difficult to distinguish one from another. Both are far less talkative than Java but Scala speaks a bit differently than Kotlin and Java. C: He’s the topper of the class. He’s so fast in completing the exams that the teacher really thinks he’s copying Assembly’s work. He has a little brother C++ and they share a lot in common together. He’s the chess major and often plays chess with Assembly and his big brother. Go: He’s the new kid on the bloc. He doesn’t like C++ and his friends and he wants to prove he can do better than them. Of course, he prefers playing Go over Chess. APL: He’s a lonely guy. No one understands him when he speaks. Even the teacher is surprised when APL shows a correct answer after several lines of incomprehensible pictograms. People think that he was born in a foreign country… or a foreign planet ? HTML/CSS: These twin brothers are very different. One is dressed in black and white and the other is dressed with everything except black and white. HTML is very talkative and annoying and the CSS is very artistic. CSS is the best student in Art lessons and HTML performs well in written expression. LaTeX: She’s friend of HTML. The teacher likes her because she has a gift of writing. LaTeX likes the mathematical courses because she can draw fancy greek letters. The teacher knows this well and she is often asked to write a formula on the black board. VBA: He’s in the back, looking through the windows. Not really interested in the courses taught in class. In the exams, he answers always with a table. C#: He’s in the back playing yet another game on his smartphone. He likes being next to the windows also. JavaScript: People often mix up Java and JavaScript because they have a similar name. But they are definitly not the same. Javascript spends a lot of time with HTMLand CSS. He’s as artistic as CSS but he prefers things that move. He likes actions and movies. CSS dreams to be a painter wheras JavaScript wants to be a film-maker. 
Haskell: He’s a goth. Dressed up in dark. Doesn’t talk to anyone. He doesn’t understand why others write pages when he can write a couple of lines to answer the same question. Julia: She’s the newest student here. She doesn’t have any friends yet but her secret aim is to be as popular as Python and as fast as C. Credit: Thomas jalabert5 -.29 - -)?12 - The worst part of being a Dev? The lonely feeling when you are the only one who likes to develop in your group of friends5 - - A fanfic based on devRant-chan. The character was created by @caramelCase and a drawing by @ichijou. This is freestyle. I'll think of an image of a scene and go with the flow. I won't remove my fingers from the keyboard and I won't edit or change anything. That's how I come up with my best ideas. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Notes: B/N = Boss' name (I was too lazy to think of one.) Anything in between astericks is in italics. Ex.) *this is in italics.* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ It was an early January morning when devRant-chan was situated in her desk, typing away on her laptop. She was working on a Python script for her barbaric client when she could've been out with friends. Oddly enough, her Sunday was surged with tranquility. Normally, Sunday is when her irksome boss barks orders at her on the phone. "This is wrong!" "What is this?" "Change it!" devRant-chan resented her boss but loved her job. After all, "you can't force yourself to like everyone," was something her elder brother would tell her. She released a slight chuckle, the one she would only display at the thought of her brother. Her musings were interrupted when a concerning thought crawled into her mind like an undesirable intruder. Why hasn't her boss called to complain yet? It's not that she enjoyed his complaining, which she didn't. She simply found it odd, since he's done this every Sunday morning, since she was a junior developer. Unless he found someone else to complain to? In that case, good riddance! 
But still, it wasn't a euphoric feeling to be replaced. She was already accustomed to his Sunday morning calls that it feels almost lonely not to receive them. She should call him... Just in case some situation—or—problem—has emerged. She dialed his number, waiting patiently for a reply. "Hello," said her boss. "Ah, hello," said devRant-chan. "I called, wondering—" "You've reached the voicemail of B/N, please leave a message after the beep." "Damn..." mumbled devRant-chan with a sharp exhale. "I always fall for that." Why didn't her boss answer the phone? It was odd of him, considering he's always answered her calls. She was about to dial her coworker when she received an email, which stimulated her attention. The subject of the email read: *Important. Please read.* She opened the email. It was her boss. The email read: *Hello.* *In case you aren't aware, I had quit my job, due to the stress. I've left the manager in charge. Starting tomorrow, he will be your new boss.* *-B/N* Before she could rejoice in excitement, she detected a strange change of voice, emitting from the email. Did her boss really write this? That's when she spotted something. The word "tomorrow." Her boss didn't write this. He would never use words such as "tomorrow," or "today." He would use time instead. If this was her boss, he would say "in 24 hours." She checked the IP of the email. Oddly enough, it was her boss' IP. Still, the pieces didn't fit the puzzle. Her boss didn't complain, answer her call, or use his style of speaking in the email. Something happened to him and she knows it. Whatever it is, has something to do with the manager, and she was determined to figure it out. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This was just a quick random fanfic, and I'm not sure if I'll continue it. As I said, I didn't plan anything, since it's freestyle. 
I might or might not continue it, so I'll think it over.8 - - >.7 - - - - You are lonely because -> You don't have a partner because -> You are a nocturnal person because -> Night might be the best time for development because -> You are too much obsessed with programming because -> You are lonely because -> ...8 - - - -!9 - - Left my laptop at work... :/ how can I sleep knowing my dear laptop is alone and cold.. feeling sad :(1 - - - It's lonely when you work as lone developer. No one value ur browswerify, webpack, es6 or react whatsoever..5 - !9 - I am getting tired of people being so ignorant. Apparently if you study on your own free time and talk about science and Philosophy, you are a very boring person (aka Me). I always like to talk about new Scientific discoveries, how philosophies apply to our everyday lives, diffrent math equations and algorithms and the global news. In my school, everyone seems to get bored by me. I guess if I do not know what team won the last game, I cannot be accepted as an individual.40 - - - - I'mma be waay to real with you all here, I'm sad, lonely, and scared that I don't take as many oppertunities to "viva la vida" as I should, and that ultimately I will live an unfufilling life and or die alone.29 - PSA. I have established "Depressed Lonely Maggot" Club. Our symbol will be an ugly crying maggot. You are invited :310 - - - My favorite thing on my desk would be my laptop. If only it hadn't DIED three months ago after being impossibly slow for three months before that. RIP you useless machine, RI bloody P ._.11 - I always hear devs talking about the many ideas they have in their minds to start projects. Am I the only one that can't imagine a project that is really usefull/profitable?6 - - - !dev Following... I get kicked out of my fucking house because my house”mates” just pissed off without paying rent or finding someone to replace them in time. Great.. really fucking awesome. 
There’s nothing else available in my town and I have time till the end of the month. Thanks for that asshole. Never, ever sharing a house / apartment again. I’m better off doing my own thing. Fucking lonely as I’m used to be but at least there’s no one to stab me in the back.1 -.3 - Cause day and night The lonely coder seems to free his mind at night He's all alone through the day and night The lonely coder seems to free his mind at night (at, at, at night) - I am feeling lonely and depressed. Don't feel like to code. I am introvert, don't have friends. Idk what to do. 😫15 - I'm feeling so lonely at work. No one is interested in talking to me. They talk only when they need something done. I've been and I did my best being nice to everyone ☹️22 - - - - Sitting here with my friends. They are drinking and having a blast... I am on devRant and thinking to start a new project.10 - - Customer: I have a problem with the new BI program. Me: ok, show me what's wrong. Customer: when I double-click the icon the program start but does not do anything else... Me: because you didn't ask anything to the program, you have to launch the utilities by clicking menus like, for example, to make a New report or to Open another report pre-saved.. the program doesn't know what you think. Customer: ah.......ok, maybe It's time for me to put a paper bag (like for bread) on my head and do 5 minutes of embarassment lonely - - - It's 11 pm. I'm almost drunk and I have realized I have spent too much of my life learning. I have spent too much time working. I may only be 25 but I still am dedicating up to 100+ hours a week to my job and it needs to stop. It has only left me sad alone and drunk. I hope others on here try to have some social life because sitting here drunk and lonely sucks. Maybe I shouldn't be so picky. Who knows. Enjoy life when you can.10 -! - Being non existant. But at least he won't do any mistakes that way 😀. 
(I'm so lonely in this city!)1 - I was so lonely that I made a Live-Stream of my 3D Printer Printing... Gosh im lonely... - - Once again, answered my own question in stack overflow. That my problems are so unique, it makes me lonely even in a dev forum.2 - - - When/how do nocturnal people interact with other people and get into a relationship? It feels so lonely here.8 - - (!= - My last job sucked because of ridiculous deadlines, never-ending demos with hacked together fake sites, etc...but I still get nostalgic and miss it because I worked with some really cool people.1 - ! Rant To any games programmers here (I feel so lonely surrounded by these Web devs) If I tree falls in a scene with no audio listeners does it still make a sound...4 - - That one group project in collage, where everyone was "useful"... and several lonely all-nighters later I managed to get through the exam... a truly career defining experience... That's why I'm now a cactus farmer. - - Ice King. Lonely, weird, has a penguin for a debugger and doesn't do as he commands... Yup, that's a Linux guy alright - Just met a startup that has a programmer intern but no IT supervisor. I felt so sorry for her that I decided to show her a few cool tools that she can use in her work. She was still using Xampp, Google Chrome, command prompt and paper trails (for all of the passwords she had to manage to different accounts) Shown her how to use Docker, Git Bash and WSL, FireFox Developer Edition, VS Code (if she decides to not use that unregistered Sublime Text editor) and LastPass (personal preference). Best of luck!2 - - -.8 - What do you guys do to get away from conputers? I, for one started practicing card magic and i say it's really helped me. Card magic and programming are 2 polar opposites. For card magic, social inter action is necessary while programming gets pretty lonely. Anyways, i'm really curious about your other hobbyes39 - - - ... 
- The client in my previous two rants officially hired me as their Head of Tech Support. I moved into the tech support office today, with a super comfy ergonomic chair and a huge table. If only there's someone else here...... - - - - - I live coding but I feel lonely. None of my friends code and I don't have a girlfriend to spent time with .8 - - Every data communication example has Alice and Bob communicating with each other. I wonder after all these years how are they able to maintain a stable relationship. I wish my girlfriend understood this. - - - - - - Google cloud servers got knocked out and now snapchat died too because they depend on goovle cloud servers I hVe never felt so empty and lonely without snapchat ☠️☠️☠️14 - - - - - - My mum said that i should get some fresh air outside... Now I'm a true dev... Feels lonely But i can write programs which is nice - Being the only Android dev in the company, I felt very lonely. Google is the only friend I have during work.2 - yellow lemon tree sound starts: "i'm sittin' here in a boring room, just another rainy sunday afternoon, i'm wastin' my time, i got nothin' to, i'm feelin' so lonely i'm waiting for my fucking graph coloring program to finally finishing this fucking piece of graph coloring in which i spent the last four days figuring out what the goddamn problem is and for some reason my arraylists and my hashmaps didn't get along that well and now i hope that i have finally found the solution to my problem and let this fucking piece of shit of program run otherwise i'll get crazy, but nothing ever happens, ... , and i wondeeeer ... *dum dum dum* *ding* - - Am I the only one around here that nobody is asking me to make any app/system for them? Not even by friends or - C'mon, I'm making all the logic and only ask you to make the site look better than my plain html structure. 
Please do your thing, because the lonely "Slightly improved layout" commit is not helping very much - - I've been watching my points and I'm wondering where the 7k people are at? Does it feel lonely at the top?4 - - I feel a little bit lonely when reading all the rants about Windows 😞 Are there more people like me who really can't work with macs but love Windows pc's? I mean, they are fast too, and customizable, and fancy, and work almost always.16 - - Half of my time coding is spent coming up with descriptive, unique var names that won't clash with damn namespace.2 - If you're feeling lonely and want a lot of responses in any dev forum, just talk shit about a widely used programming language.2 - I need to use the ugly Windows to create a dammit dynamic pdf file because the Adobesoftware isnt avaiable for Macintosh... #FirstWorldProblems18 - - - a nice feeling being complimented for the effort and frontend. but it is still a pity i am the only one appreciating my tidy, efficient and scalable backend too! - > wants to build small "for fun" company (if you could even call it a company) with friends > nobody interested because they only want to see money well... crap... Guess the first few years will be lonely T_T3 - You know it realy hits you in the face when you are still alone with your computer when you are suddenly invited for a wedding of some of your friends - - There's nothing like having your boss elbow twist you into 1 hour of not-work-related zoom time past your working hours just because he's feeling lonely from the lockdown =/5 - Git push is now followed with a tab of GitHub.audio and waiting to see it show up and ping. I'm lonely. - !!! - - !rant. Just got my first on site job! Fucking tired of working remotely. Time to work without it getting boring/lonely.5 - "Nobody can tell you if what you’re doing is good, meaningful or worthwhile. The more compelling the path, the more lonely it is. 
" - Hugh Macleod - pls rember happy dey wen u fel sad and lonely pls rember happy day Recite this when your code doesn't compile1 - - If you're coding, thinking and manually/auto debugging way too often several time a day, then you're likely to be suffering from "Geekonomous Schizophrenia", the Symptoms of that are: . 1. You grow a habit to cut the B$ in real-life conversations. . 2. You get instantaneously angry and disturbed when your mom/siblings/friends are interrupting you during your work. . 3. Not to mention you cannot tolerate irrational words from Socially Accepted Normal Chaps (SANC) . 4. You have nothing to speak unless a SANC starts the conversation themself. . 5. You tend to correct these SANCs mid-semi-technical-talk whenever these do factual errors. . 6. You get overwhelmingly excited and ecstatic to talk to someone of your expertise or at least a person who can intellectually handle your tech-blabbers and dev-rants! . 7. You start doing minor-to-major experiments regarding different things in real life as you do virtually with your codes and try to predict the outcome the next time. . 8. Best of all - whenever you are "loned-out" you don't feel lonely since you have many people and string of thoughts to talk to and inside your head there's a grand meeting going on. . Relatable? We're on same lines then! 😊 - - - Ah, the joys of using a bleeding edge web framework! After updating a bunch of NuGet packages, I get the TypeInitializationException from hell. Googling the error message turns up void, because seems to be me and about a dozen other devs using this framework. 2-3 new threads per week in the support forum and mentioned in a total of 288 StackOverflow questions. It feels lonely using this framework, but the design is so darn promising...5 - I would love to have an ability to make my rubber duck be my companion :D Does this count as a superpower? 
And no, I'm not this lonely xD3 - :) - I tried to convince the actual bug that landed in the middle of the code on my screen a few minutes ago to pose with my (AWESOME!!) stickers. He was lonely among my compiling code and took off so you get me instead :) - typical dev offer they look for a dev that should migrate their existing system to a new one the old dev wrote a system that is archaic now and he wants to quit developing and if you "want" to do more than just coding they would like you to support them in - managing social media - layouting / photoshop - creating videos they search ONE developer to do this and are not really planing on expanding - I got only very vague respones regarding this topic typical We search an "allrounder / one man show"... what do you guys think? they invited me for a meeting next week. I think i will go for the impression and see afterwards how I should proceed. But kinda iffy and the fact that I will be the only dev makes me wonder about the fact that I may feel lonely fast, stressed aaaand no real option to educate myself because I will have no free time and if potentially I (the whole dev team) don't work, then no work gets done.8 - - - Feeling really lonely as the only one who cares about ethical tech. Everyone around me just wants to build money making products and it doesn't matter if it adds value, only if it makes money. I wanna do good things with tech but it's getting harder. And my company just put a new CEO in charge who has a business plan but no vision. No added value. Just taking money from customers, making them addicted to the product. That's all that matters.9 - easy: i give up on my attempts to have social life as they're unsuccessful anyway, and the time i save on not attempting i can use for wallowing in lonely depress... i mean coding! 
- I feel so dumb, lonely and thoughtless right now, as if I am still stuck in my memories of being 13, with no experience, knowledge or friendship gained from the past 10 years (or maybe 23 years; 13 wasn't fun either) :/ - Engineering life is not easy 😐😐😐😐 all those assignments, practicals, vivas...... And above that.... ..no GF 😔😔😔10 - Man, it really sucks to be a stranger among hundreds of people. You're not alone but you're lonely, and that sucks more. Currently attending a wedding function of the daughter or son of a co-worker of my mom, because I had to drive her to this place. How can I make this situation good? - So for someone who wants to get their foot in the door and learn a lot about computer science, where do I start?12 - Are there any Belgian/Flemish people here? I just feel very lonely when I look around in my neighbourhood thinking nobody has even heard of devRant... :. - - I hate those people that comment on threads saying shit like “just google it” or “oh fuck another [insert popular topic] thread we don’t need another one.” Fuck off. It’s an online discussion forum. If it’s shit content the mods will remove it; otherwise no one is forcing your sorry, lonely, pathetic ass to stop scrolling, click the thread, and read the god damn post. Just fucking turn off your computer and read a book.2 - - Long shot, but I've not long moved to Manchester (UK) and I don't know anyone. Are there any dev related events around here that you guys know about?5 - Staying late, like always. EXCEPT MY GOAT FUCKING, PILLOW IMPREGNATING PIECE OF SHIT PHONE DOESN'T PLAY AUDIO ANYMORE! FUCK! Can't use headphones, can't blast it from the speakers. It is so quiet in the office, I actually hear the field cricket WALKING across the ground! - - Do any of you guys live in London? If so, what is life like there? Is it easy for someone to live alone there or is it too lonely?2 - Foreveralone developers, sit relaxed.
With all the events cancelled and people locked in their homes, at least now you can be certain that there is absolutely nothing left you can do to find a partner. Enjoy working from home. Everyone does! Those who enjoy the company of a romantic partner, enjoy! You got it. Those who are lonely will remain so. - . - - Trying to install Ubuntu and just wondering which is better: "Encrypt for security", "Erase and Install", or "Use LVM"?
https://devrant.com/search?term=lonely
On Mon, Jan 04, 2016 at 10:05:21PM -0800, Benno Rice wrote:
> Hi Konstantin,
>
> I recently updated my dev box to r292962. After doing this I attempted to set
> up PostgreSQL 9.4. When I ran initdb the last phase hung. Using procstat -kk
> I found it appeared to be stuck in a loop inside a posix_fadvise syscall. I
> could not ^C or ^Z the initdb process. I could kill it but a subsequent
> attempt to rm -rf the /usr/local/pgsql/data directory also got stuck and was
> unkillable by any means. Rebooting allowed me to remove the directory but the
> initdb process still hung when I re-ran it.
>
> I tried PostgreSQL 9.3 with similar results.
>
> Looking at the source code for initdb I found that it calls posix_fadvise
> like so[1]:
>
> /*
>  * We do what pg_flush_data() would do in the backend: prefer to use
>  * sync_file_range, but fall back to posix_fadvise. We ignore errors
>  * because this is only a hint.
>  */
> #if defined(HAVE_SYNC_FILE_RANGE)
> (void) sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE);
> #elif defined(USE_POSIX_FADVISE) && defined(POSIX_FADV_DONTNEED)
> (void) posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
> #else
> #error PG_FLUSH_DATA_WORKS should not have been defined
> #endif
>
> Looking for recent commits involving POSIX_FADV_DONTNEED I found r292326:
>
> <>
>
> Backing this revision out allowed the initdb process to complete.
>
> My current theory is that somehow we're getting ENOLCK or EAGAIN from the
> BUF_TIMELOCK call in bnoreuselist:
>
> <>
>
> Leading to an infinite loop in vop_stdadvise:
>
> <>
>
> I haven't managed to dig any deeper than that yet.
>
> Is there any other information I could give you to help narrow this down?

I do not see this issue locally. When the hang in initdb occurs, what is the state of the initdb thread which performs advise()? Is it "brlsfl" sleep, or is the thread running?
If buffer lock is not available, and this is the cause of the ENOLCK/EAGAIN, then the question is who is the owner of the corresponding buffer lock. You could overview the state of the system with 'ps' command in ddb, and 'show alllocks' would list owner, unless buffer was async. Also, I do not quite understand the behaviour of SIGINT/SIGKILL. Could it be that the process was not killed by SIGKILL as well ? It would be consistent with the vnode lock still owned and preventing the accesses.
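For readers who want to poke at this from userland without writing C: the fadvise call initdb issues can be reproduced from Python via os.posix_fadvise (Python 3.3+, POSIX systems only). The helper name below is made up for illustration; the offsets and the DONTNEED advice mirror the quoted pg_flush_data fallback. On a kernel with the reported regression, the marked call is the one that would never return.

```python
import os
import tempfile

def drop_cache_hint(path):
    """Mimic initdb's pg_flush_data fallback: hint to the kernel that
    cached pages for this file are no longer needed.  Errors are
    ignored, as in the quoted C snippet, because the advice is only a
    hint."""
    fd = os.open(path, os.O_RDONLY)
    try:
        if hasattr(os, "posix_fadvise"):  # POSIX systems only
            try:
                # offset=0, len=0 means "the whole file", as in the C call
                os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
            except OSError:
                pass  # ignored: it is only a hint
        return True
    finally:
        os.close(fd)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 4096)
    path = f.name

completed = drop_cache_hint(path)  # on the affected kernel, this never returned
os.unlink(path)
```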
https://www.mail-archive.com/freebsd-current@freebsd.org/msg163836.html
1 The Time Value of Money The Effects of Compound Interest Concentration of Wealth Present Value Robert M. Hayes 2005 2 Overview §Time Value of Money §Effects of Compound Interest §Concentration of Wealth §Present Value 3 Why is there a time value for money? §A dollar in hand today is worth more than a dollar in hand tomorrow. Why is that? l I could buy something today and thus get the use today of what I buy. l I could invest today and gain the return from that investment. l I could avoid the loss of value due to inflation in costs. l I could lend the money today and gain the interest on that loan. §Why is there interest on a loan? l There needs to be a return, given the value today vs. tomorrow. l The loss of value from the other potential uses must be recognized. l There are risks that the loan may not be repaid. 4 The Relevant Variables §There are therefore four relevant variables in dealing with the time value of money: l The initial amount lent, called the principal amount l The time period of the loan l The interest rate l The time period to which the interest rate applies §Note that there are two separate and potentially different (in fact, usually different) time periods involved: (1) the time period of the loan, and (2) the time period to which the interest rate applies. 5 Methods of Calculating Interest Simple vs. Compound Interest §Simple interest is applied to the initial amount, called the principal, for a given time period for interest. If the period of the loan is greater than the time period for interest, the simple interest will be repeated, at the same amount, and accumulate during successive time periods for interest until the end of the time period of loan.
§Compound interest is applied to the initial sum, plus any previous accumulated interest that has not been paid, for each successive time period for interest. §The rationale for compound interest is that the interest is in fact money that should be in hand at the end of the time period for interest, i.e., at the time it is due. Therefore, if that interest is not received, it is, in effect, also lent and therefore should also bear interest. 6 The Relevant Formulas §Let the four relevant variables be represented as follows: l Principal amount, P l Time period of loan, L l Interest rate, I l Time period for interest, T §Let C be the total amount due at the end of L, and let N be the ratio of the two time periods, L/T §For simple interest, the formula for total amount due, C, at the end of the time period of loan, L, is: l C = P*(1 + (L/T)*I) = P*(1 + N*I) §For compound interest, the formula is: l C = P*(1 + I)^(L/T) = P*(1 + I)^N §Note that compound interest is exponential. 7 The Power of Compound Interest §Albert Einstein is reputed to have said, "Compound interest, not E = mc^2, is the greatest mathematical discovery of all time" §He also is credited with discovering what is called the compound interest Rule of 72. The Rule of 72 says that the principal amount will double in 72/I years, where I is the rate of interest. For example, if the interest rate is 6%, the principal amount will double in 12 years. §To illustrate, suppose that in 1955, a person invested $5,000 in a mutual fund and, during the ensuing 50 years, all dividends were reinvested. Today, that fund is worth $160,000. §Let's apply the Rule of 72: $160,000/$5,000 = 32. That is 2^5, so the original investment doubled five times. That means it doubled every 10 years, so the average interest rate was 7.2%. §Of course, during that 50 years, the inflation rate averaged about 4%, so the net gain was probably not that much.
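The two formulas on slide 6 and the Rule of 72 on slide 7 are easy to sanity-check numerically. The sketch below uses the slide's own figures; the function names are mine, not the deck's.

```python
def simple_interest(P, I, N):
    # Slide 6: C = P * (1 + N*I), linear in the number of periods N
    return P * (1 + N * I)

def compound_interest(P, I, N):
    # Slide 6: C = P * (1 + I)**N, exponential in N
    return P * (1 + I) ** N

# Rule of 72: at 6% per period, money doubles in about 72/6 = 12 periods.
doubled = compound_interest(1.0, 0.06, 12)

# The mutual-fund example: $5,000 at an average 7.2% for 50 years
# should come out close to the quoted $160,000 (a 32x gain).
fund = compound_interest(5_000, 0.072, 50)
```

Note that simple interest over the same 12 periods yields only 1.72x, which is why the deck calls compounding "the power of compound interest".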
8 The Effect on Concentration of Wealth §Interest, especially compound interest, plays a significant role in the concentration of wealth. §In a society, considered from an economic standpoint, there are two primary means for production: Labor and Capital. The former represents the results of the contributions of individuals as the agents of production; the latter, the results from investment of accumulated past savings in the tools for production. §The relationship between production, on the one hand, and labor and capital, as the means for production, on the other, is usually represented by a production function, a relatively simple example of which is the Cobb-Douglas production function. §In a very real sense, interest represents the societal income from the investment of the capital. §The question at hand here is the relationship between interest and the concentration of wealth. 9 §To examine that relationship, let's let the Capital owned by person k be C(k), the income from Capital be i*C(k) (so the interest rate, or return on capital, is i), the production generated by person k from Labor be L(k), and the total expenditures related to person k be E(k). §It is relevant and even important to note that the total expenditures, E(k), related to a person consist of three components: (1) personal expenditures (for self and dependents), (2) production expenditures (represented in part by overhead, which includes management, space, etc., and in part by materials), and (3) societal expenditures (represented primarily by government and, thus, taxes). §(The difference between E(k) and the personal expenditures of person k is what Karl Marx refers to as surplus value. That is, it is the excess of a person's production over what is directly received for it.)
§Normally, one would expect the total expenditures, over all persons, to be less than or at most equal to the total production over all persons (otherwise the accumulated social wealth of the past will be dissipated). §If L(k) – E(k) + i*C(k) > 0, there will be a net addition to societal capital (and, of course, if L(k) – E(k) + i*C(k) < 0, a net reduction to societal capital) from person k. Let's suppose that person k is permitted to keep the increase (or lose the decrease) and add it to (or subtract it from) C(k). 10 §Now, consider two persons, P1 and P2. Let C(k), k = 1, 2 be their respective ownership of the societal wealth. So their income from Capital will be i*C(k), respectively. Let their respective production from their Labor be the same, L, and let their respective consumption also be the same, E. Thus, in this context, they differ only in their relative wealth. §Then, their respective net savings will be S(k) = i*C(k) + L – E. The total societal net savings will be S(1) + S(2) = i*(C(1) + C(2)) + 2*(L – E). §The individual net savings result in a new distribution of capital wealth: C'(k) = C(k) + S(k) = C(k) + i*C(k) + L – E = (1 + i)*C(k) + L – E §Let C(1) = C(2) + X, so that, if X > 0, P1 has more wealth than P2. §Then, C'(1) = C(2) + X + i*(C(2) + X) + L – E = (1 + i)*(C(2) + X) + L – E and C'(2) = (1 + i)*C(2) + L – E §C'(1)/C'(2) = 1 + (1 + i)*X/C'(2) = 1 + (1 + i)*X/((1 + i)*C(2) + L – E) §If L – E < 0, then (1 + i)*C(2) > (1 + i)*C(2) + L – E and therefore (1 + i)/((1 + i)*C(2) + L – E) > 1/C(2) §Therefore, C'(1)/C'(2) > 1 + X/C(2) = C(1)/C(2) §Thus, if the expenditures related to a person are greater than the production related to that person, that person's relative share of the wealth will be reduced, even though his amount of wealth may increase. 11 Stages in Wealth Concentration §I think there is value in understanding the stages in wealth concentration, especially as represented by the effects of interest.
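The algebra above can be checked numerically. The sketch below (illustrative numbers of my choosing, not from the slides) iterates the update rule C'(k) = (1 + i)*C(k) + L − E for two agents and confirms that when E > L the richer agent's share of the wealth grows, while equal labor and expenditure leaves the ratio unchanged.

```python
def next_wealth(C, i, L, E):
    # One period of the slide's update rule: C'(k) = (1 + i)*C(k) + L - E
    return (1 + i) * C + L - E

i = 0.05
C1, C2 = 1000.0, 600.0          # P1 starts richer: X = C1 - C2 = 400 > 0

# Case E > L: expenditures exceed labor production (L - E < 0).
L_, E_ = 100.0, 110.0
ratio_before = C1 / C2
ratio_after = next_wealth(C1, i, L_, E_) / next_wealth(C2, i, L_, E_)

# Control case L == E: the wealth ratio should stay exactly the same,
# since both holdings are simply scaled by (1 + i).
ratio_neutral = next_wealth(C1, i, 100.0, 100.0) / next_wealth(C2, i, 100.0, 100.0)
```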
§At the simplest level, such as a primitive agricultural society, every person is effectively at the level of subsistence, making just enough to meet the needs of themselves and their dependents. §At the next level, there is societal capital, as an investment in tools for production, that permits a more complex society, with greater production than mere subsistence. For a variety of reasons, there is almost certain to be some degree of concentration of wealth, with some persons in the society having more than others. And, as just shown, the degree of concentration is almost certain to increase over time. 12 §At the next level, the degree of concentration reaches the point where those with the most wealth do not need to subsist on the results of their labors, but can do so solely on the income from their wealth. §Let's suppose that subsistence requires an income of at least Z. In current economic terms, that might be the federal poverty level, which for a family of 4 is about $20,000. If the interest rate is, just for illustration, at 5%, a level of wealth of $400,000 would generate that level of income, without the need to work. Working would then, of course, provide the resources for life beyond the poverty level, or for increasing one's wealth, or for some mix of the two. 13 §At the next level, the degree of concentration reaches the point where those with the most wealth do not need to subsist on the results of their labors, but can do so solely on the income from the income from their wealth. §Continuing with the example of Z = $20,000 as the subsistence level, that means that the income from the wealth generates $400,000 per annum so, at 5%, the wealth must be $8,000,000. §The important point here is that the growth in wealth no longer depends at all upon labor, but can be generated solely from the interest. Indeed, if the person could subsist on the $20,000 per annum, the capital wealth would increase by nearly 5% per annum and thus would double in 15 years!
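The wealth tiers in slides 12-14 are just repeated division by the interest rate; a quick check, using the deck's Z = $20,000 and i = 5%:

```python
Z = 20_000   # subsistence income (about the federal poverty level for a family of 4)
i = 0.05     # illustrative interest rate from the slides

live_on_income = Z / i                             # income covers subsistence: $400,000
live_on_income_of_income = live_on_income / i      # second tier: $8,000,000
three_levels_removed = live_on_income_of_income / i  # third tier: $160,000,000
```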
§At this level or perhaps at the next one, the capital ceases to be money. It becomes power and control. 14 §I want to examine one final level simply to show what happens. Let the wealth accumulated by a person be such that subsistence can be obtained from the income of the income on the income (three levels removed from the need for labor). §Continuing with Z = $20,000 as the subsistence level, the wealth would need to generate $8,000,000 in income so, at 5%, that implies wealth of $160,000,000. Clearly, this is at the level where money represents power. §And we have not gotten even close to Bill Gates! 15 Application to U.S. National Economy §Simply to illustrate some of the relationships among the things I have just discussed, let's look at the U.S. national economy. §From the 1997 Input/Output Tables, we have the following data: l Total Intermediate Input = $6.7 Trillion l Product from Capital, Labor (Value Added) = $8.8 Trillion l Government taxation = $2.7 Trillion l Additions to capital = $1.3 Trillion l Net for Capital and Labor = $4.8 Trillion §From the Cobb-Douglas production model: l 8.8 = a*L^b * C^(1-b) l Let C = K*L. Then 8.8 = a*L*K^(1-b) l Net for Capital and Labor = L + i*C = 4.8 l If i = 5%, then L + .05*C = 4.8 l If a*K^(1-b) = 10, then L = .88 and i*C = 3.92 16 Present Value §The Role of Present Value of Money §Calculating Present and Future Value of Money §Using Net Present Value Analysis §Selecting a Discount Rate §Identifying Cash Flows to Consider §Determining Cash Flow Timing §Selecting the Best Alternative §Identifying Issues and Concerns 17 The Role of Present Value of Money §Why is a dollar today worth more than a dollar a year from now?
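Slide 15's arithmetic can be verified directly. The sketch below takes the slide's assumption a*K^(1-b) = 10 as given (figures in trillions of dollars, per the 1997 Input/Output Tables discussion); it yields i*C = 3.92, i.e. an implied capital stock of about 78.4.

```python
aK = 10.0             # slide's assumption: a * K**(1 - b) = 10
value_added = 8.8     # $ trillion, 1997 Input/Output Tables

# With C = K*L, Cobb-Douglas collapses to 8.8 = (a*K**(1-b)) * L, so:
L = value_added / aK  # labor's product, 0.88

i = 0.05
iC = 4.8 - L          # from L + i*C = 4.8, the income from capital: 3.92
C = iC / i            # implied capital stock, about 78.4 ($ trillion)
```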
l Investment l Inflation l Use and Enjoyment §The Role of the Discount Rate l The bases for choice of the discount rate l The role of risk assessment l The role of capitalization rates l The effect of the time period 18 Present and Future Value of Money §Present Value and Future Value §Effects of inflation §Present value of a cash stream. §Present value of a cash stream in perpetuity 19 Calculating Present Value P = Σ_{y=1}^{t} F_y/(1 + i)^y, where P = present dollar value, F_y = future dollar value in year y, i = annual rate of return (e.g., 0.05 is 5% per annum), y = the succession of years, t = number of years in the future 20 If the future dollars are the same for each year, say F_y = F, let S = Σ_{y=1}^{t} 1/(1 + r)^y, so that (1 + r)*S = Σ_{y=0}^{t-1} 1/(1 + r)^y. Then (1 + r)*S – S = 1/(1 + r)^0 – 1/(1 + r)^t = 1 – 1/(1 + r)^t. Hence: P = F*S = F*((1 + r)^t – 1)/(r*(1 + r)^t) 21 §There are times when the present value analysis needs to consider a cash stream in perpetuity, for an infinite period of time. Consider the formula shown above, P = F*S = F*((1 + r)^t – 1)/(r*(1 + r)^t), but let t be infinity. Note that the second term in the expression on the right becomes zero and the first term, 1/r. The result is that P = F/r 22 Using Net Present Value Analysis §Illustrative Contexts for use of Present Value l Lease-purchase l Different lease alternatives l Life-cycle cost l Trade-off of acquisition costs and costs of operation §Factors Affecting Net Present Value l The timing of the cash flow l The discount rate 23 Steps in Net Present Value Analysis §Step 1. Select the discount rate. §Step 2. Identify the costs/benefits to be considered §Step 3. Establish the timing of the costs/benefits. §Step 4. Calculate net present value of alternatives §Step 5. Select the option with best net present value. 24 Selecting A Discount Rate §Nominal Discount Rates §Real Discount Rates §Selecting the Rate for Analysis.
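The summation on slide 19, the closed-form annuity factor derived on slide 20, and the perpetuity limit P = F/r on slide 21 can be cross-checked in a few lines of Python (function names are mine):

```python
def present_value(cashflows, r):
    # Slide 19: P = sum over y = 1..t of F_y / (1 + r)**y  (end-of-year flows)
    return sum(F / (1 + r) ** y for y, F in enumerate(cashflows, start=1))

def annuity_pv(F, r, t):
    # Slide 20 closed form: P = F * ((1 + r)**t - 1) / (r * (1 + r)**t)
    return F * ((1 + r) ** t - 1) / (r * (1 + r) ** t)

# Direct summation and the closed form should agree exactly.
pv_sum = present_value([100.0] * 10, 0.05)
pv_closed = annuity_pv(100.0, 0.05, 10)

# Slide 21: as t grows, the annuity value approaches the perpetuity F/r.
perpetuity = 100.0 / 0.05
long_run = annuity_pv(100.0, 0.05, 1000)
```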
25 Nominal Discount Rates §Most benefit-cost analyses should use nominal discount rates (i.e., discount rates that include the effect of actual or expected inflation or deflation). 26 Real Discount Rates §For some projects, it may be more reasonable to assess in terms of constant dollars. The real discount rate is the nominal discount rate adjusted to eliminate the effect of anticipated inflation/deflation. 27 Determining the Discount Rate §Once the type of discount rate has been selected (whether nominal or real), the values to be used are then determined from the appropriate table (using linear interpolation to determine values for years between those in the table). 28 Identifying Cash Flows To Consider §Cash Flow l Identify all relevant cash flows, both costs and benefits l Alternatives should clearly identify the cash flows that are specifically significant §Points to Consider in Identifying Costs and Benefits l Include the same cash flows in all alternatives l Include cash flows in which alternatives will differ l Do not include cash flows that are identical for alternatives l Do not include sunk costs or benefits §Analysis Period l For leasing contexts, use the leasing period plus renewal l For acquisition contexts, use life cycle period l For equipment context, use amortization period 29 Representative Costs & Benefits § Net Purchase Price §Costs for Transportation, Installation, Site preparation §Costs for Design, Training, and Management. §Repair and improvement costs, including: l Estimated unplanned service calls l Improvements required to assure continued operation. §Operation and maintenance, including: l Operating labor and supply requirements; and l Routine maintenance. 
§Disposal costs and salvage value, including: l Cost of modifications to return equipment to original configuration l Cost of modifications to return facilities to original configuration l Salvage value at the end of the period for analysis 30 Determining Cash Flow Timing §Bases for determining cash flow timing l Offer-Identified Cash Flows. l Government-Identified Cash Flows. §End-of-year payment l When to use End-of-Year Discount Factors l End-of-Year Discount Factor Calculation. l Repetitive End-of-Year Cash Flows. §Mid-Year Payment l When to Use Mid-Year Discount Factors. l Mid-Year Discount Factor Calculation. l Repetitive Mid-Year Cash Flows. 31 Calculating Net Present Value to Select The Best Alternative §Lease-Purchase Decision, Example 1 §Lease-Purchase Decision, Example 2 32 Lease-Purchase Decision, Example 1 §Which of the following will result in the lowest total cost of acquisition? §A: Proposal to lease the asset for 3 years. The annual lease payments are $10,000 per year, the first payment due at the beginning of the lease and the remaining two payments due at the beginning of Years 2 and 3. §B: Proposal to purchase the asset for $29,000. It has a 3-year useful life. Salvage value at the end of the 3-year period will be $2,000. 33 [slide content (discount factors and cash-flow table) not recoverable] 34 §Step 4. Calculate net present value. The table below summarizes for each alternative. §Step 5. Select the offer with the best net present value. In this example, it is Offer B, the offer with the smallest negative net present value. 35 Lease-Purchase Decision, Example 2 §Which of the following will result in the lowest total cost of acquisition? §A: Proposal to lease the asset for 3 years. The monthly lease payments are $1,500; that is, the total amount for each year is $18,000. These payments are spaced evenly over the year, so the use of a MYDF would be appropriate. §B: Proposal to purchase the asset for $56,000. It has a 3-year useful life. At the end of the 3-year period it will have a $3,000 salvage value. 36 [slide content (discount factors and cash-flow table) not recoverable]
37 §Step 4. Calculate net present value. The table below summarizes for each alternative. §Step 5. Select the offer with the best net present value. In this example, it is Offer A, the offer with the smallest negative net present value. 38 Identifying Issues and Concerns §Is net present value analysis used when appropriate? §Are the dollar estimates for expenditures and receipts reasonable? §Are the times projected for expenditures and receipts reasonable? §Are the proper discount rates used in the net present value calculations? §Are the proper discount factors used in analysis? §Are discount factors properly calculated from the discount rate? §Have all cash flows been considered? 39 THE END
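The discount-factor tables for these examples did not survive extraction, but Example 1 can be reworked under an assumed 5% discount rate (the deck's actual rate is unknown). At that rate the purchase alternative has the lower discounted cost, matching the slide's choice of Offer B:

```python
r = 0.05   # assumed discount rate; the slide's actual rate did not survive extraction

# Offer A: three $10,000 payments at the beginning of years 1, 2 and 3,
# i.e. at times 0, 1 and 2 years from now.
lease_cost = sum(10_000 / (1 + r) ** y for y in range(3))

# Offer B: pay $29,000 now, recover a $2,000 salvage value at the end of year 3.
purchase_cost = 29_000 - 2_000 / (1 + r) ** 3
```

With these inputs the lease costs about $28,594 in present-value terms and the purchase about $27,272, so Offer B wins, consistent with Step 5 on slide 34.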
http://slideplayer.com/slide/1506895/
Calculations¶ AiiDA calculations can be of two kinds: - JobCalculation: those that need to be run on a scheduler - InlineCalculation: rapid executions that are executed by the daemon itself, on your local machine. In the following, we will refer to a JobCalculation as a Calculation for the sake of simplicity, unless we explicitly say otherwise. In the same way, the command verdi calculation also refers to JobCalculations. Check the state of calculations¶ Once a calculation has been submitted to AiiDA, everything else will be managed by AiiDA: the inputs will be checked to verify that they are consistent. If the inputs are complete, the input files will be prepared, sent to the cluster, and a job will be submitted. The AiiDA daemon will then monitor the scheduler, and after execution the outputs are automatically retrieved and parsed. During these phases, it is useful to be able to check and verify the state of a calculation. There are different ways to perform such an operation, described below. The verdi calculation command¶ The simplest way to check the state of submitted calculations is to use the verdi calculation list command from the command line. To get help on its use and command line options, run it with the -h or --help option: verdi calculation list --help Possible calculation states¶ The calculation could be in several states. The most common you should see: NEW: the calculation node has been created, but has not been submitted yet. WITHSCHEDULER: the job is in some queue on the remote computer. Note that this does not mean that the job is waiting in a queue: it may be running or finishing, but it has not finished yet, so AiiDA has to wait. FINISHED: the job on the cluster was finished, AiiDA already retrieved it and stored the results in the database. In most cases, this also means that the parser managed to parse the output file. FAILED: something went wrong, and AiiDA raised an exception.
The error could be of various kinds: the inputs were not enough or were not correct, the execution on the cluster failed, or (depending on the output plugin) the code ended without completing successfully or producing a valid output file. Other possible, more specific "failed" states include SUBMISSIONFAILED, RETRIEVALFAILED and PARSINGFAILED. - For very short times, when the job completes on the remote computer and AiiDA retrieves and parses it, you may happen to see a calculation in the COMPUTED, RETRIEVING and PARSING states. Eventually, when the calculation has finished, you will find the computed quantities in the database, and you will be able to query the database for the results that were parsed! Directly in python¶ If you prefer to have more flexibility or to check the state of a calculation programmatically, you can execute a script like the following, where you just need to specify the ID of the calculation you are interested in:

from aiida import load_dbenv
load_dbenv()
from aiida.orm import JobCalculation

## pk must be a valid integer pk
calc = load_node(pk)
## Alternatively, with the UUID (uuid must be a valid UUID string)
# calc = JobCalculation.get_subclass_from_uuid(uuid)

print "AiiDA state:", calc.get_state()
print "Last scheduler state seen by the AiiDA daemon:", calc.get_scheduler_state()

Note that, as specified in the comments, you can also load a calculation by knowing its UUID; the advantage is that, while the numeric ID will typically change after a sync of two databases, the UUID is a unique identifier and will be preserved across different AiiDA instances. Note calc.get_scheduler_state() returns the state on the scheduler (queued, held, running, ...) as seen the last time that the daemon connected to the remote computer. The time at which the last check was performed is returned by the calc.get_scheduler_lastchecktime() method (that returns None if no check has been performed yet).
The verdi calculation gotocomputer command¶ Sometimes, it may be useful to go directly to the folder in which the calculation is running, for instance to check if the output file has been created. In this case, it is possible to run: verdi calculation gotocomputer CALCULATIONPK where CALCULATIONPK is the PK of the calculation. This will open a new connection to the computer (either simply a bash shell or a ssh connection, depending on the transport) and directly change directory to the appropriate folder where the code is running. Note Be careful not to change any file that AiiDA created, nor to modify the output files or resubmit the calculation, unless you really know what you are doing, otherwise AiiDA may get very confused! Set calculation properties¶ There are various methods which specify the calculation properties. Here follows a brief documentation of their action. c.set_max_memory_kb: explicitly request the memory to be allocated to the scheduler job. c.set_append_text: write a set of bash commands to be executed after the call to the executable. These commands are executed only for this instance of the calculation. Look also at the computer and code append_text to write bash commands for any job run on that computer or with that code. c.set_max_wallclock_seconds: set (as integer) the scheduler-job wall-time in seconds. c.set_computer: set the computer on which the calculation is run. Unnecessary if the calculation has been created from a code. c.set_mpirun_extra_params: set as a list of strings the parameters to be passed to the mpirun command. Example: mpirun -np 8 extra_params[0] extra_params[1] ... exec.x. Note: the process number is set by the resources. c.set_custom_scheduler_commands: set a string (even multiline) which contains personalized job-scheduling commands. These commands are set at the beginning of the job-scheduling script, before any non-scheduler command. (prepend_texts instead are set after all job-scheduling commands).
c.set_parser_name: set the name of the parser to be used on the output. Typically, a plugin will already have a default parser set; use this command to change it. c.set_environment_variables: set a dictionary, whose keys and values will be used to set new environment variables in the job-scheduling script before the execution of the calculation. The dictionary is translated to: export 'keys'='values'. c.set_prepend_text: set a string that contains bash commands, to be written in the job-scheduling script for this calculation, right before the call to the executable. (it is used for example to load modules). Note that there are also prepend texts for the computer (that are used for any job-scheduling script on the given computer) and for the code (that are used for any scheduling script using the given code); the prepend_text here is used only for this instance of the calculation: be careful in avoiding duplication of bash commands. c.set_extra: pass a key and a value, to be stored in the Extra attribute table in the database. c.set_extras: like set_extra, but you can pass a dictionary with multiple keys and values. c.set_priority: set the job-scheduler priority of the calculation (AiiDA does not have internal priorities). The function accepts a value that depends on the scheduler plugin (but is typically an integer). c.set_queue_name: pass in a string the name of the queue to use on the job-scheduler. c.set_import_sys_environment: default=True. If True, the job-scheduling script will load the environment variables. c.set_resources: set the resources to be used by the calculation like the number of nodes, wall-time, ..., by passing a dictionary to this method. The keys of this dictionary, i.e. the resources, depend on the specific scheduler plugin that has to run them. Look at the documentation of the scheduler (type is given by: calc.computer.get_scheduler_type()). c.set_withmpi: True or False, if True (the default) it will call the executable as a parallel run.
https://aiida.readthedocs.io/projects/aiida-core/en/v0.4.1/state/calculation_state.html
// (by Ariel Badichi)
#include <boost/static_assert.hpp>
#include <boost/type_traits/is_same.hpp>
#include <boost/type_traits/add_pointer.hpp>
#include <boost/mpl/apply.hpp>
#include <boost/mpl/placeholders.hpp>
#include <boost/mpl/lambda.hpp>

namespace mpl = boost::mpl;
using namespace mpl::placeholders;

template<typename F, typename T>
struct twice : mpl::apply<F, typename mpl::apply<F, T>::type> {};

int main()
{
    typedef mpl::lambda<boost::add_pointer<_> >::type add_pointer_lambda;
    typedef twice<twice<add_pointer_lambda, _>, int>::type p;
    BOOST_STATIC_ASSERT((boost::is_same<int ****, p>::value));
    return 0;
}

Can anyone explain why the explicit lambda call is necessary? I had thought that the apply embedded in twice would deal with the add_pointer placeholder expression, but GCC 3.3.3 gives a compile error with the following:

BOOST_STATIC_ASSERT((
    boost::is_same<
        twice<
            twice<
                boost::add_pointer<_1>, // needs to be wrapped in mpl::lambda<...>::type ???
                _1
            >,
            int
        >::type,
        int****
    >::value
));

Any ideas? -- Matt Brecknell Section 3.3 has a nice explanation for why this won't work. - Ariel Matt: Section 3.3 does explain why versions of the twice metafunction from earlier sections of the book don't work directly with placeholder expressions. But then section 3.3.2 says the first argument to mpl::apply can be any lambda expression (including those built with placeholders). Since your definition of twice uses mpl::apply to invoke the argument metafunction, I expected to be able to use placeholder expressions without needing to use mpl::lambda.
For example, I believe the following should work with your definition of twice (though I'm not in front of an MPL-capable compiler at the moment, so I can't check):

BOOST_STATIC_ASSERT((
    boost::is_same<
        twice<
            boost::add_pointer<_>, // no mpl::lambda required here
            int
        >::type,
        int**
    >::value
));

I'm currently working on the theory that the problem with my version of the nested invocation of twice might be something to do with the way the outer twice is evaluating placeholders in the inner twice, but I haven't figured it out yet. -Matt

In your previous snippet, consider the inner twice first. Consider the second argument. It's just the type _1. It's nothing special in the context of the inner twice. Therefore, the result of the inner twice is _1 **. The inner twice is a metafunction, not a lambda expression. So you can't mpl::apply it. If you use mpl::lambda on boost::add_pointer, then the inner twice will use it to generate a lambda expression, which can be passed to mpl::apply, and therefore to the outer twice. At least I think that's how it goes (I wonder if Dave could clear this up?) - Ariel

I agree that the inner twice<> would evaluate to _1** if I had immediately evaluated the inner twice<> with ::type. But since I didn't, I thought that the outer twice<> would regard the inner twice<> as a placeholder expression (and not just a metafunction, as you then suggest). I therefore expected that the mpl::apply in the inner twice<> would convert the boost::add_pointer<_1> to a metafunction class, while leaving the second argument (bare _1) for the outer twice<> to substitute. That's why I used _1 in both places, even though they are meant to refer to different things in different contexts. In any case, I don't see why your reasoning would apply to my formulation without also applying to yours. After all, you have a "_" placeholder in the same place in your version. The only material difference is that you have wrapped boost::add_pointer<_> in mpl::lambda<>.
I don't follow your reasoning about mpl::lambda causing the inner twice<> to "generate a lambda expression" for the outer twice<>: if the inner twice<> is a placeholder expression, then it's already a lambda expression (book section 3.6). Obviously, my reasoning has gone wrong somewhere, but I'm not yet convinced it's for any of the reasons you have suggested. -Matt

I accept that the inner twice is a placeholder expression. The outer twice then substitutes BOTH placeholders for a type (for each mpl::apply). Using mpl::lambda protects the placeholder for boost::add_pointer from substitution. How this works exactly, I'm not sure. - Ariel

The reason twice< twice< add_pointer<_1>, _1 >, int > doesn't work is that, without an explicit scope specification, mpl::apply< twice< add_pointer<_1>, _1 >, int >::type is equivalent to

template< typename T >
struct twice_add_pointer : twice< typename add_pointer<T>::type, T > {};

mpl::apply< twice_add_pointer<_1>, int >::type

which, I hope, is clearly erroneous. With explicit scoping along the lines of, a working one-liner would be

twice< twice< scope< add_pointer<_1> >, _1 >, int >

Hope this clarifies things, Aleksey Gurtovoy.
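As a loose runtime analogy of what the metaprogram in this thread computes (Python values standing in for C++ types; this is only an illustration of the higher-order "apply twice" idea, not of MPL's compile-time semantics):

```python
def twice(f, x):
    # Apply f two times, like the twice<> metafunction applies its
    # lambda expression twice at compile time.
    return f(f(x))

def add_pointer(t):
    # Runtime stand-in for boost::add_pointer: "int" -> "int*".
    return t + "*"

# twice(twice(add_pointer, _), int) from the thread yields int****:
result = twice(lambda t: twice(add_pointer, t), "int")
print(result)  # int****
```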
http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?action=browse&diff=1&id=CPPTM_Answers_-_Exercise_3-4
CORS handling as a cherrypy tool.

Project description

CORS support for CherryPy.

License

License is indicated in the project metadata (typically one or more of the Trove classifiers). For more details, see this explanation.

In a nutshell

In your application, either install the tool globally:

import cherrypy_cors
cherrypy_cors.install()

Or add it to your application explicitly:

import cherrypy_cors
app = cherrypy.tree.mount(...)
app.toolboxes['cors'] = cherrypy_cors.tools

Then, enable it in your cherrypy config. For example, to enable it for all static resources:

config = {
    '/static': {
        'tools.staticdir.on': True,
        'cors.expose.on': True,
    }
}

See simple-example for a runnable example.
https://pypi.org/project/cherrypy-cors/
11 months ago peter.berry5 ▴ 60

Hi

I am trying to use the pathfindR package to do enrichment analysis on my data. According to the vignette, the package is downloaded from CRAN by install.packages("pathfindR"). This gives the following output:

Installing package into ... (as 'lib' is unspecified)
Warning in install.packages : dependencies ‘org.Hs.eg.db’, ‘KEGGREST’, ‘KEGGgraph’ are not available
trying URL ''
Content type 'application/zip' length 2782854 bytes (2.7 MB)
downloaded 2.7 MB
package ‘pathfindR’ successfully unpacked and MD5 sums checked

so I believe the package has been installed successfully. However, when I call the package by library(pathfindR) I get:

Error in library("pathfindR") : there is no package called ‘pathfindR’

Any thoughts/suggestions?

So the install was not complete. You should install those packages first.

@GenoMax, thanks, I did wonder about that bit. Any suggestions on how to fix that problem? I understood the dependencies installed automatically and that no action from me was required.

Just install the required packages. Those dependencies are not automatically installed!

@GenoMax and Bruno. Have done that and it's working. Just for info, the other issue I had was that I needed to install a JRE, which I apparently hadn't done before. Thanks for the help. It's appreciated.

I have the same problem today!

Error: package or namespace load failed for ‘pathfindR’: .onAttach failed in attachNamespace() for 'pathfindR', details:
call: fetch_java_version()
error: Java version not detected. Please download and install Java from “”
Error: loading failed
Execution halted
ERROR: loading failed

Could someone tell me where I am going wrong? I updated my Java and installed KEGGgraph, etc. separately as well.
https://www.biostars.org/p/479609/#9493757
MVC vs Web Forms

Many programmers feel they have to choose one technology over another. But there is really no reason you can't program in both MVC and Web Forms and use each for what it is good at.

Many MVC programmers give reasons why you should use MVC as opposed to Web Forms. The reasons I hear most often are:

1. Fine control over the HTML generated
2. Ability to unit test
3. Can control the "id" attribute generated
4. Easier to use jQuery and JavaScript because no generated "id" attribute
5. Ability to use Friendly URLs
6. No ViewState sent to client

Of course, programmers that like Web Forms have many reasons why they want to stick with it:

1. Rapid Application Development
2. Less code to write because of rich server controls
3. Lots of server controls to choose from
4. Great third-party support
5. Easy to preserve state
6. They like the event driven model
7. Hides the complexity of the web
8. More mature, so there is more information with respect to questions, problems, etc.

In this article let's explore how Web Forms will let you get very close to the MVC model. There are many things you can take advantage of to get Web Forms to work almost exactly the same as MVC, and you do it using the tools you already know and love.

HTML 5 and CSS 3

In any Web Form page you can use HTML 5 elements and attributes. Even if you are using a server control you can still add any attribute to the control and it will emit the attribute for you. CSS 3 is just a new version of CSS and we have always been able to use CSS with Web Forms, so there is no difference here. The TextBox control now supports the HTML 5 input types such as date, tel and email through the TextMode property.

ViewState

Most Web Form pages do not need ViewState to work properly. There are just a few cases when you will need to turn on ViewState. I recommend disabling ViewState in the Web.Config and then just turning it on when you get to a page that doesn't seem to work right.
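For reference, disabling ViewState site-wide uses the standard ASP.NET pages element in Web.config; a minimal fragment looks roughly like this (adjust to your own configuration):

```xml
<configuration>
  <system.web>
    <!-- Disable ViewState for every page by default -->
    <pages enableViewState="false" />
  </system.web>
</configuration>
```

Individual pages that still need it can then opt back in with EnableViewState="true" in their @Page directive.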
By turning this off you will see an increase in performance because not as much HTML is sent down to the client.

Store ViewState on the Server

If you still need to use ViewState, but you do not want the extra hidden field to be sent to the client, you can easily store ViewState on the server. This functionality has been available since .NET 1.0. You simply override two methods in the base page class: LoadPageStateFromPersistenceMedium and SavePageStateToPersistenceMedium. These two methods allow you to store and restore the view state on the server and not send the hidden field to the client.

Friendly URLs

Using "RESTful" or so-called SEO-friendly URLs is all the rage today. What is nice about these URLs is that they make for a cleaner query string, the user does not know the actual page name, and they are typically easier for users to access and remember.

Instead of these…

Use Friendly URLs instead…

To add friendly URLs to your project simply go to Tools | NuGet Package Manager | Manage NuGet Packages for Solution... Search online for "Microsoft.AspNet.FriendlyUrls" and install "Microsoft.AspNet.FriendlyUrls.Core".

Add a class called RouteConfig and add the following using statements at the top of the file:

using System.Web.Routing;
using Microsoft.AspNet.FriendlyUrls;

Now add a method named RegisterRoutes.

public static void RegisterRoutes(RouteCollection routes)
{
    routes.EnableFriendlyUrls();
    routes.MapPageRoute("", "Default", "~/Default.aspx");
}

You can add as many MapPageRoutes as you want in this method. You just have to keep the first and second parameters unique.

Now in your Global.asax you will add a using statement at the top of this file:

using System.Web.Routing;

In the Application_Start() event you will call the static method you created in the RouteConfig class.

RouteConfig.RegisterRoutes(RouteTable.Routes);

MVVM

I have written a blog post and an article on using the Model-View-View-Model approach to software development.
If you use MVC or Web Forms you should be using a View Model. The View Model is where all of your data access happens, and all the code to control the UI should also be located there. The controller in MVC calls the View Model and the code-behind in Web Forms calls the View Model. This means that all unit testing happens on the View Model and you do not need to test against a controller or the code-behind file. You can read more about this model at the following two links.

Unit Testing

As just mentioned above, if you use the MVVM design pattern you get the benefit of being able to do unit testing, and you can take advantage of TDD if you so desire.

jQuery and JavaScript

Using jQuery and JavaScript is an absolute must when building today's modern web applications. Web Forms used to be criticized because when it generated the HTML controls, it "munged" the id attribute. This means that the id you used in your ASPX page was something different when it ended up on the client, which makes it hard to know what the name is when you want to reference that control in jQuery or JavaScript. Microsoft gave us the ClientID and UniqueID properties to access the id and name attributes respectively. However, starting with .NET 4.0 they added a ClientIDMode at the page level and in the Web.Config file. This allows you to set ClientIDMode="Static", so the id you set in your ASPX page is what will be generated on the client. This makes integrating with jQuery and JavaScript just as easy as it is with MVC.

Bootstrap

Twitter Bootstrap is a very powerful and popular CSS and JavaScript library that allows you to create responsive web sites. We have been using Bootstrap for a few years now and it works just fine in Web Forms. We have successfully implemented templates we purchased from wrapbootstrap.com and themeforest.net. We typically take these templates and integrate the navigation and other elements into our Web Forms master pages.
We then build very nice looking responsive web applications using Web Forms.

GridView

Web pages love tables! However, tables are not always a good thing on smaller devices like a smart phone. Using Bootstrap styles in combination we can make the GridView work much better. A better approach is to use the GridView so you get the built-in paging and all the great features of the GridView, but make it not look so tabular. I wrote a blog post about how to create an alternate view of tabular data. Check out this approach on how to present data that will work better on a mobile device.

Additional Guidance

One thing I like to do is consider what other folks are saying about Web Forms vs MVC. If you look at the following names, you can see what some of the heavyweights in the industry have to say.

Scott Guthrie: "Web Forms and MVC are two approaches for building ASP.NET apps. They are both good choices."

Dino Esposito

Jeffrey Palermo: "It is rarely a good idea to trash your current application and start over"

K. Scott Allen: "…figure out for yourself what framework will be the best for you, your team, and your business."

Microsoft: A good overview of which to use when.

Public Sites Favor MVC

What we have gathered from our own experience, and from reading what others are using MVC for, are the following types of sites:

· Blogs
· Web content
· Marketing
· Membership
· Shopping
· Mobile

Business Apps Favor Web Forms

We have found that for building business applications that do a lot of CRUD operations, Web Forms lends itself really well to these types of applications:

· Line-of-business
· SaaS
· Extranet

Summary

In the end it is up to you which approach you are going to use when developing your web applications. But you should not let folks that are clearly biased toward MVC sway your choice. Web Forms is still a great choice for developing web applications.
Just because Microsoft releases a new technology does not mean that you should immediately trash everything you know and jump on it. As you have seen in this article, Web Forms and MVC are based on the same underlying technology and both can generate fast, small, responsive web applications. At the same time both can be unit tested, and both can take advantage of MVVM, HTML 5, CSS 3 and jQuery libraries. So don't throw away all your hard-earned skills; just take advantage of the tricks in this article and develop modern web applications with Web Forms.
https://weblogs.asp.net/psheriff/web-forms-is-not-dead?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+PaulSheriffsOuterCircleBlog+%28Paul+Sheriff%27s+Outer+Circle+Blog%29
2009/10/14 kiorky <kiorky at cryptelium.net>:
> In the context of migration, for example. Take the plone "collective" namespace: all the modules won't be updated at the same time, we will have a painful cohabitation time.

Well, making coordinated releases there is tricky, since there are different people having the rights on PyPI. But that is one of the very few namespaces where this would be a problem. And the current solutions do not have any problems in being mixed, so I still don't see why a new solution should have that problem.

>> I see no reason that they should not continue to work.
> Choose to go on that new stuff that is not backward compatible.

That new stuff doesn't exist yet. How do you know it will not work?

> I've heard that before. Hay, Tarek, go out that body ! Yep, I know, but these are points that we must keep in mind before the new implementation prevents us from providing bits of backward compatibility.

You have gotten the idea that just because the namespace PEP will not be backwards compatible with setuptools 0.6, that means that you can't mix all of them in the same system. If this is correct, I would like to hear *why* this is so. Because then we need to change that. But unless you can explain why it will be impossible to mix them, then I will trust my gut feeling that it is possible.

-- Lennart Regebro: Python, Zope, Plone, Grok +33 661 58 14 64
https://mail.python.org/pipermail/distutils-sig/2009-October/013937.html
Using XPath Queries The Microsoft® SQL Server™ 2000 support for annotated XDR schemas allows you to create XML views of the relational data stored in the database. You can use a subset of the XPath language to query the XML views created by an annotated XDR schema. The XPath query can be specified as part of a URL or within a template. The mapping schema determines the structure of this resulting fragment, and the values are retrieved from the database. This process is conceptually similar to creating views using the CREATE VIEW statement and writing SQL queries against them. Note To understand XPath queries, you must be familiar with the concepts of templates (for more information, see Using XML Templates), HTTP access to SQL Server (for more information, see Accessing SQL Server Using HTTP), mapping schema (for more information, see Creating XML Views Using Annotated XDR Schemas), and the XPath standard defined by the World Wide Web Consortium (W3C). An XML document consists of nodes such as an element node, attribute node, text node, and so on. For example, consider this XML document: <root> <Customer cid= "C1" name="Janine" city="Issaquah"> <Order oid="O1" date="1/20/1996" amount="3.5" /> <Order oid="O2" date="4/30/1997" amount="13.4">Customer was very satisfied</Order> </Customer> <Customer cid="C2" name="Ursula" city="Oelde" > <Order oid="O3" date="7/14/1999" amount="100" note="Wrap it blue white red"> <Urgency>Important</Urgency> </Order> <Order oid="O4" date="1/20/1996" amount="10000"/> </Customer> </root> In this document, Customer is an element node, cid is an attribute node, and Important is a text node. XPath (XML Path Language) is a graph navigation language. XPath is used to select a set of nodes from an XML document. Each XPath operator selects a node-set based on a node-set selected by a previous XPath operator. For example, given a set of <Customer> nodes, XPath can select all <Order> nodes with the date attribute value 7/14/1999. 
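That node-selection idea can be tried outside SQL Server as well. For illustration only — this uses Python's standard-library ElementTree, not the SQL Server 2000 XPath implementation — the following selects the <Order> nodes with date 7/14/1999 from the sample document above:

```python
import xml.etree.ElementTree as ET

# The sample customer/order document from above.
doc = ET.fromstring("""
<root>
  <Customer cid="C1" name="Janine" city="Issaquah">
    <Order oid="O1" date="1/20/1996" amount="3.5" />
    <Order oid="O2" date="4/30/1997" amount="13.4">Customer was very satisfied</Order>
  </Customer>
  <Customer cid="C2" name="Ursula" city="Oelde">
    <Order oid="O3" date="7/14/1999" amount="100" note="Wrap it blue white red">
      <Urgency>Important</Urgency>
    </Order>
    <Order oid="O4" date="1/20/1996" amount="10000"/>
  </Customer>
</root>
""")

# XPath: every Order element, anywhere, whose date attribute is 7/14/1999.
orders = doc.findall(".//Order[@date='7/14/1999']")
print([o.get("oid") for o in orders])  # ['O3']
```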
The resulting node-set contains all the orders with order date 7/14/1999.

Note The XPath language is defined by the W3C as a standard navigation language. The XPath language specification, XML Path Language (XPath) version 1.0 W3C Proposed Recommendation 8 October 1999, can be found at the W3C Web site. A subset of this specification is implemented in SQL Server 2000. For more information, see XPath Guidelines and Limitations.

Supported Functionality

The table shows the features of the XPath language that are implemented in SQL Server 2000.

Unsupported Functionality

The table shows the features of the XPath language that are not implemented in SQL Server 2000.

Specifying an XPath Query

XPath queries can be specified directly in the URL or in a template that is specified in the URL. Parameters can be passed to the XPath queries specified directly in the URL or in the template using XPath variables.

XPath Queries in a URL

XPath queries can be directly specified in the URL, for example:

[?root=ROOT]

The root parameter is specified to provide a single top-level element. Any value can be specified for this parameter. If the query returns only one element (or if you want to receive a collection of top-level nodes), you do not have to specify this parameter. The SchemaVirtualName in the URL is a virtual name of schema type created using the IIS Virtual Directory Management for SQL Server utility. For more information, see IIS Virtual Directory Management for SQL Server.

When you specify XPath queries in the URL, note the following URL-specific behavior:

- XPath may contain characters such as # or + that have special meanings in URLs. Escape these characters using URL percent encoding, or specify the XPath in a template. For example, the URL[@CustomerID="#"] is truncated at the # symbol, resulting in an invalid XPath.
- XPath expressions such as ..
or // that resemble special file paths are interpreted by some browsers and modified before passing the URL to the server. Consequently, XPaths containing these expressions may not work as expected from the URL. For example:
  - The URL.. may be transformed by the browser to, which is invalid XPath.
  - The URL may be transformed by the browser to, which is different XPath.

XPath Queries in a Template

You can write XPath queries in a template and specify the template in the URL. For example, this is a template with an XPath query:

<ROOT xmlns:
  <sql:xpath-query
    Specify the XPath query
  </sql:xpath-query>
</ROOT>

This template file is stored in the directory specified at the time a virtual name of template type is created. For more information about creating virtual names, see Using IIS Virtual Directory Management for SQL Server Utility. This URL executes the template: The VirtualName specified in the URL is of template type.

Note There is no namespace support for XPath queries specified directly in the URL. If you want to use a namespace in an XPath query, a template should be used. For more information about templates, see Executing Template Files Using a URL.

When you specify XPath queries in a template, note the following behavior:

- XPath may contain characters such as < or & that have special meanings in XML (and a template is an XML document). You must escape these characters using XML &-encoding, or specify the XPath in the URL.

See Also

Retrieving XML Documents Using FOR XML
Accessing SQL Server Using HTTP
IIS Virtual Directory Management for SQL Server
https://msdn.microsoft.com/en-us/library/aa226440(v=sql.80).aspx
Code Metrics: Measuring LoC in .NET applications

My previous posting gave a quick overview of the code metric called Lines of Code (LoC). In this posting I will show you how to measure LoC in Visual Studio projects using Visual Studio Code Analysis and NDepend.

LoC difference in Visual Studio and NDepend

LoC in .NET projects can be measured by Visual Studio and NDepend. Visual Studio and NDepend give different results. How do they define their LoC-s?

- Visual Studio - Indicates the approximate number of lines in the code. The count is based on the IL (Intermediate Language) code and is therefore not the exact number of lines in the source code file. The calculation excludes white space, comments, braces and the declarations of members, types and namespaces (taken from the MSDN library article Code Metrics Overview).
- NDepend - NDepend computes this metric directly from the info provided in PDB files. The LOC for a method equals the number of sequence points found for this method in the PDB file. A sequence point is used to mark a spot in the IL code that corresponds to a specific location in the original source (taken from the NDepend metrics definition page). To get a better idea about sequence points, read Rick Byers' blog posting DebuggingModes.IgnoreSymbolStoreSequencePoints.

As Visual Studio and NDepend measure LoC differently, they also get different results. Visual Studio measures the count of IL instructions while NDepend measures sequence points. So basically Visual Studio gives you a greater result than NDepend, because at the level of IL instructions it is not possible to know whether something was an executable unit of code or was generated automatically.

Example project

I have a simple project to test how Visual Studio and NDepend calculate LoC. My project contains one class. There are only a couple of lines of code, but it is enough to demonstrate the differences.
public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}

Before digging deeper I will tell you the results. Visual Studio LoC is 9 and NDepend LoC is 1. This difference looks huge, but you will see that it depends directly on the different measuring methods.

Visual Studio LoC

When I run Visual Studio code analysis I get the following table. Why are automatic properties counted as lines of code? Well… take a look at my blog posting Is Automatic Property Same as Property? This posting shows you clearly how automatic properties work. Behind the curtains they are compiled as usual properties that use a backing field to hold the property value. FullName uses string concatenation and return – this is why it is handled as two lines of code.

NDepend LoC

The results table generated by NDepend for the Person class is shown on the right. You can see that the number of LoC is 1. This is because there is one sequence point generated, and it is for the FullName property. NDepend also tells us a little bit more. If you look at the results window you can see that the methods count for this class is 8 and the fields count is 3. It is possible to get results similar to Visual Studio, but you have to keep in mind that Visual Studio gives you an approximate number, not an exact one. If you have a lot of automatic properties in your classes then the number of methods may be interesting when making first estimates (read my previous entry and its warnings about using LoC as an estimation method).

NDepend also lets you query analysis results. It has a special query language called CQL that is similar to SQL. For example, this query returns all methods that have LoC greater than 15:

SELECT METHODS WHERE NbLinesOfCode > 15

We can also write a query for über-bloat methods.
SELECT METHODS WHERE NbLinesOfCode > 100

As you can see, the syntax is very simple, and after playing with queries for a while you are able to write them on the fly. You just have to know what the code metrics are and how they are measured.

Conclusion

Measuring LoC with Visual Studio Code Analysis and NDepend are both very simple tasks. Visual Studio gives you different results than NDepend due to a different measuring method. If you need a complete analysis of your code, then NDepend is the better way to go because it gives you a lot more information than Visual Studio does. NDepend also lets you write queries based on LoC, which makes it much easier to analyze your code.
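To make the difference between counting methods concrete, here is a purely textual line counter — the naive definition of LoC, which is not what either Visual Studio (IL instructions) or NDepend (sequence points) does. The helper below is a hypothetical illustration only:

```python
def count_loc(source):
    """Naive textual LoC: count lines that are neither blank nor
    pure single-line comments. Deliberately simplistic -- it ignores
    block comments and strings, unlike IL- or sequence-point-based counts."""
    total = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("//"):
            total += 1
    return total

sample = """public int Id { get; set; }

// comment only
public string FirstName { get; set; }"""
print(count_loc(sample))  # counts the two property lines: 2
```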
http://weblogs.asp.net/gunnarpeipman/code-metrics-measuring-loc-in-net-applications
If you pay attention to the name, you will understand that it is named SynchronousQueue for a reason: it passes data synchronously to another thread; it waits for the other party to take the data instead of just putting the data and returning (an asynchronous operation). If you are familiar with CSP and Ada, then you know that synchronous queues are similar to rendezvous channels. They are well suited for hand-off designs, in which an object running in one thread must sync up with an object running in another thread in order to hand it some information, event, or task.

In earlier multi-threading tutorials we learned how to solve the producer consumer problem using wait and notify, and using BlockingQueue; in this tutorial we will learn how to implement the producer consumer design pattern using a synchronous queue. This class also supports an optional fairness policy for ordering waiting producer and consumer threads. By default, this ordering is not guaranteed. However, a queue constructed with the fairness property set to true grants threads access in FIFO order.

Producer Consumer Solution using SynchronousQueue in Java

As I have said before, nothing is better than a producer consumer problem for understanding inter-thread communication in any programming language. In the producer consumer problem, one thread acts as a producer, which produces events or tasks, and the other thread acts as a consumer. A shared buffer is used to transfer data from producer to consumer. The difficulty in solving the producer consumer problem comes with the edge cases, e.g. the producer must wait if the buffer is full, and the consumer must wait if the buffer is empty. The latter part was quite easy, as a blocking queue provides not only a buffer to store data but also flow control, blocking the thread calling the put() method (PRODUCER) if the buffer is full, and blocking the thread calling the take() method (CONSUMER) if the buffer is empty.
In this tutorial, we will solve the same problem using SynchronousQueue, a special kind of concurrent collection which has zero capacity. In the following example, we have two threads named PRODUCER and CONSUMER (you should always name your threads; this is one of the best practices of writing concurrent applications). The first thread publishes cricket scores, and the second thread consumes them. Cricket scores are nothing but String objects here. If you run the program as it is, you won't notice anything different. In order to understand how SynchronousQueue works, and how it solves the producer consumer problem, you either need to debug this program in Eclipse or just start the producer thread alone by commenting out consumer.start(). If the consumer thread is not running, then the producer will block at the queue.put(event) call, and you won't see [PRODUCER] published event : FOUR. This happens because of the special behaviour of SynchronousQueue, which guarantees that the thread inserting data will block until there is a thread to remove that data, or vice-versa. You can test the other part of the code by commenting out producer.start() and only starting the consumer thread.

import java.util.concurrent.SynchronousQueue;

/**
 * Java Program to solve Producer Consumer problem using SynchronousQueue. A
 * call to put() will block until there is a corresponding thread to take() that
 * element.
 * @author Javin Paul
 */
public class SynchronousQueueDemo {

    public static void main(String args[]) {
        final SynchronousQueue<String> queue = new SynchronousQueue<String>();

        Thread producer = new Thread("PRODUCER") {
            public void run() {
                String event = "FOUR";
                try {
                    queue.put(event); // thread will block here
                    System.out.printf("[%s] published event : %s %n", Thread
                            .currentThread().getName(), event);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };

        producer.start(); // starting publisher thread

        Thread consumer = new Thread("CONSUMER") {
            public void run() {
                try {
                    String event = queue.take(); // thread will block here
                    System.out.printf("[%s] consumed event : %s %n", Thread
                            .currentThread().getName(), event);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };

        consumer.start(); // starting consumer thread
    }
}

Output:
[CONSUMER] consumed event : FOUR
[PRODUCER] published event : FOUR

If you look at the output carefully, you will notice that the order of events is reversed. It seems the [CONSUMER] thread is consuming data even before the [PRODUCER] thread has produced it. This happens because by default SynchronousQueue doesn't guarantee any order, but it has a fairness policy which, if set to true, allows access to threads in FIFO order. You can enable this fairness policy by passing true to the overloaded constructor of SynchronousQueue, i.e. new SynchronousQueue(boolean fair).

Things to remember about SynchronousQueue in Java

Here are some of the important properties of this special blocking queue in Java. It's very useful for transferring data from one thread to another thread synchronously. It doesn't have any capacity and blocks until there is a thread on the other end.

- SynchronousQueue blocks until another thread is ready to take the element one thread is trying to put.
- SynchronousQueue has zero capacity.
- SynchronousQueue is used to implement the direct hand-off queuing strategy, where a thread hands off to a waiting thread, else creates a new one if allowed, else the task is rejected.
- This queue does not permit null elements; adding null elements will result in a NullPointerException.
- For the purposes of other Collection methods (for example contains), a SynchronousQueue acts as an empty collection.
- You cannot peek at a synchronous queue because an element is only present when you try to remove it; similarly, you cannot insert an element (using any method) unless another thread is trying to remove it.
- You cannot iterate over a SynchronousQueue, as there is nothing to iterate.
- A SynchronousQueue constructed with the fairness policy set to true grants threads access in FIFO order.

That's all about SynchronousQueue in Java. We have seen some special properties of this special concurrent collection, and learned how to solve the classical producer consumer problem using SynchronousQueue in Java. By the way, calling it a Queue is a bit confusing because it doesn't have any capacity to hold your elements. A call to the put() operation will not complete until there is a thread calling the take() operation. It is better seen as a rendezvous point between threads to share objects. In other words, it's a utility to synchronously share data between two threads in Java, probably a safer alternative to the wait and notify methods.
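The same rendezvous behaviour can be sketched in other languages too. Python's standard library has no direct SynchronousQueue equivalent, so the hand-off below is built from a Condition; the SyncChannel class is a hypothetical illustration, not part of any library:

```python
import threading

class SyncChannel:
    """Rendezvous channel sketch: put() blocks until another thread
    calls take(), mimicking SynchronousQueue's hand-off semantics.
    Hypothetical illustration only."""

    def __init__(self):
        self._cond = threading.Condition()
        self._item = None
        self._has_item = False

    def put(self, item):
        with self._cond:
            # Wait for any previous hand-off to finish.
            while self._has_item:
                self._cond.wait()
            self._item = item
            self._has_item = True
            self._cond.notify_all()
            # Block until a taker has removed the item (the synchronous part).
            while self._has_item:
                self._cond.wait()

    def take(self):
        with self._cond:
            while not self._has_item:
                self._cond.wait()
            item = self._item
            self._item = None
            self._has_item = False
            self._cond.notify_all()
            return item

# Demo: a consumer thread takes, the main thread hands off.
ch = SyncChannel()
consumer = threading.Thread(
    target=lambda: print("consumed:", ch.take()), name="CONSUMER")
consumer.start()
ch.put("FOUR")  # blocks here until CONSUMER takes the event
consumer.join()
```

Just like the Java version, commenting out the consumer thread would leave put("FOUR") blocked forever, because there is no other party to rendezvous with.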
https://www.javacodegeeks.com/2014/06/synchronousqueue-example-in-java-producer-consumer-solution.html
We are about to set up SBS 2011 at my small company (< 10 users). My collaborator wants to name the SBS domain "example.local". I'm of the opinion we should name the SBS domain "corp.example.com" and set up DNS so the "corp" record is an NS record pointing to the SBS server's private IP.

FYI: "example.com" isn't the real domain name, and while the website is hosted outside our office, email will be stored on the SBS server in our office after passing through a spam-filtering smart host that is also hosted elsewhere.

Personally, I'd go with example.local, which according to Wikipedia is Microsoft's recommended naming convention. I can't find any info to back this up; however, Microsoft do have an article regarding naming a domain, which should be useful to you. I've done many installs using this naming convention, and it gives you the flexibility to be able to assign internal and external DNS names to any system on the network as required (e.g. test.example.local would point to the internal IP address, and test.example.com would point to its external IP address, assuming it is accessible to the outside world of course). Note that you might find if you try to use test.example.com internally it won't work by default, but you can add another internal DNS zone (test.example.com) with a default record that points to the IP used by test.example.local; this way, test.example.com works internally and externally. I also maintain systems for a customer that shares a .com internally and externally, and I'm constantly struggling with the duplicate DNS namespaces.

Edit: also note that .local is not a valid TLD according to RFC 2606.

We use the corp.example.com convention for our small domains. Don't use the .local TLD for your domain, as it is the TLD for Zeroconf/Bonjour, so any Mac OS X clients would not be able to connect easily. Also, using a real domain that is registered to you will make life easier for you and your users, especially when they are outside of your office.
I'd recommend using example.corp. Typically, administrators set up their internal domain name the same as their public domain name, and then simple things like getting to example.com can be problematic when it is also the internal domain. In a small business that may be preferred if the server is also hosting the company's website, etc. With example.corp you have internal resolution, and you can easily set up records for example.com as needed, which gives you added flexibility between internal and external resolution.
http://serverfault.com/questions/233698/sbs-domain-name-choice
From the IBM WebSphere Developer Technical Journal. We began the original WebSphere Application Server ESB article series with an example showing how the System Integration Bus (SIBus) acts as the default JMS provider for the application server. That first article showed how to connect a J2EE application client, sending a plain JMS text message, to a JMS service provider running on the application server implemented as a message-driven bean (MDB). In this article, we will use this same example as our starting point -- but instead of sending the message from the client to the SIBus queue, and then the MDB service getting the message from the queue, we will route the message through a mediation running in WebSphere ESB. To place this example in a business context, we will use our shipping company scenario introduced in Part 1. We will assume that whenever a package has been delivered to a client, a message must be sent to the main system to confirm the delivery. This confirmation message is sent asynchronously; that is, no response to the message is required and the message is simply queued to the main system for processing. The enhanced architecture An ESB enables building decoupled systems by offering virtual service interfaces, meaning a client does not access the actual service provider directly; rather, it exchanges messages with the ESB instead. The ESB then routes messages to and from the actual service provider. This is true not just for Web services (that is, services exposed via a SOAP/HTTP binding), but for any service. In our example, we have a J2EE application receiving plain text messages from a JMS queue. In the context of the ESB, we view this application as a service provider. Similarly, we offer a service interface to the JMS client application. 
We won't spend a lot of time describing the service requester and provider code that we use for this example (the code is really no different from any typical JMS code; see Resources for a JMS tutorial), but let's go over the basics: The J2EE client application simply sends a message containing the number of the package that was delivered. Since this sender code is running in a J2EE client application, we don't hardcode the names of the JMS resources that are used; we use the java:comp/env namespace instead. Both names (one for the connection factory and one for the actual queue) are bound using resource references in the client application deployment descriptor. The client was originally written to send a simple text message but will be updated to send an XML message. For the MDB, things are even simpler: each MDB has a method called onMessage(), which is invoked as soon as a message arrives on the queue that the MDB listens to and it prints the message to System.out. One aspect that makes this interesting is that by routing through WebSphere ESB, we have now decoupled the client JMS configuration and the MDB configuration -- they will both be configured to talk to WebSphere ESB. If the MDB is deployed on a different server, the client application does not need to change. Since the message flows through WebSphere ESB, the message can be mediated in the ESB. Figure 1 shows the setup when the JMS requester and MDB are directly connected through a SIBus queue (described in Part 3 of our previous series), compared to what we are going to build in this article. Figure 1. Sample scenario setup In the updated architecture, note that we are using two queues: one between the client and the WebSphere ESB mediation, and one between the mediation and the actual receiver (that is, the service provider). Figure 1 also shows how WebSphere ESB artifacts still run within WebSphere Application Server, which provides the JMS runtime and hosts the message-driven bean. 
For simplicity, we are running the MDB in the same server as WebSphere ESB; in a production environment, the MDB would typically be run on a separate server requiring additional configuration steps. A mediation flow component in WebSphere ESB is just another type of service component defined in the Service Component Architecture (SCA). SCA requires a service interface definition that describes the service endpoint on the bus (the interface the client application will call). Having a formal (WSDL) description enables us to view the JMS message exchange as a service invocation. (See Resources for an introduction to SCA.) Overview of ESB creation and configuration Before beginning, here is an overview of the steps you will perform in this article. (If you prefer to minimize the work you need to do, the EARs for the service requester and service provider, and a project exchange file, with the completed WebSphere ESB mediation, are included in the download materials with this article. If you choose to use the project exchange file, you will still need to configure the deployment descriptors of the client and MDB applications, as well as the server itself.) Create the WebSphere ESB server. Create the service interface. You will use WebSphere Integration Developer to create a WSDL description of the service interface for the service requester to send messages onto the bus, and by the MDB as the service provider. Create the mediation. You will create the mediation module and construct the mediation flow component. The mediation export and import will both be configured with the service interface you create in Step 2, above. JMS bindings will be created for the export and import. The mediation will simply log the message as it flows through. Set up the service requester. You will modify the original test client that sent a simple text string to send an XML message. 
You will set up the JMS configuration of the client application and WebSphere Application Server, in which WebSphere ESB runs and to which the client connects. SIBus queues generated by WebSphere ESB will be used as part of the configuration. Set up the service provider. You will modify the MDB EAR configuration using SIBus queues generated by WebSphere ESB. - Use the latest version of WebSphere Integration Developer and WebSphere Enterprise Service Bus that are available. To write this article, we used WebSphere Integration Developer V6.0.1.2 and WebSphere ESB V6.0.1.2. A fixpack to bring both products to version 6.0.2 will be available in late December 2006 with significant functional and performance enhancements, so use these versions when they become available. A. Create the WebSphere ESB server Create a test server in WebSphere Integration Developer that you will use for testing. On the Servers panel in WebSphere Integration Developer, select New => Server. Select WebSphere ESB Server v6.0, accept the defaults, and then Finish. If you have already created a test server, you can reuse it, but be aware that there may be preexisting artifacts defined on it that could cause conflict. B. Create the service interface A service component interacts with external partners via imports and exports. In our case, the export interacts with the JMS client application, and the import interacts with the MDB. Figure 2. The SCA assembly of the mediation Both the export and the import need an interface definition that describes the exact format of the data that is exchanged. Both also have a binding that describes the specifics of the underlying protocol over which the data is sent. In this example, the import and export interface will be the same. Let's start with the interface. In the original example, we simply sent a string (the "package received" notifier). 
In our updated version, we want to formalize that by defining an XML schema that contains the definition of the structure of the message. This will make it easier to process the message in the mediation, as well as in the final receiver. We will use the business object editor and the interface editor in WebSphere Integration Developer to create the interface. The business object editor is used to describe the content of the message, and the interface editor describes how the message is wrapped into an operation envelope. These graphical tools generate schema and WSDL that describe the interface, and that can be given to service requesters and providers. Open WebSphere Integration Developer in the Business Integration perspective. Create a new mediation module, called PackageReceivedModule. A mediation module serves as the container for a mediation component and its included logic, and is mapped into a deployable EAR project under the covers. Open the business object editor by selecting the Data Types node in the navigation tree. Right-click on it to create a new business object. In the New Business Object wizard, name the business object PackageReceivedBO and leave the default values for all other fields. Click Finish. When the business object editor opens, add an attribute called message to the business object and make it type string. The final definition looks like Figure 3. Figure 3. Business object definition This message attribute will store the payload of the message as it flows through WebSphere ESB. By the way, the business object definition is stored as an XML schema in an .xsd file. You can further modify it in any XML schema editor. For our purposes, however, there is no need to do that. Save your changes. You are now ready to create the actual service interface. Right-click on the Interfaces node in the navigation tree and select Create a new Interface. Name the new interface PackageReceivedIF and keep all remaining default values. Click Finish.
The interface editor opens. In the interface editor, select Add One Way Operation and call it package_received. Add an input parameter, and name it packageReceivedBO of type business object PackageReceivedBO (browse for this type, which you created above). The interface should now look like Figure 4. Figure 4. Interface definition Save your changes. To view the WSDL that is generated, select PackageReceivedIF and choose Open with => XML Source Page Editor. Similarly, you can open the PackageReceivedBO schema. You are now ready to build the actual mediation component. In WebSphere Integration Developer, double-click on the PackageReceivedModule assembly icon to open the assembly editor. A default mediation flow component called Mediation1 is created. Rename this component to PackageReceivedMediation. Next, you need to create an import that connects the mediation to the actual service provider (in our case, the MDB). Drag an import from the palette and drop it to the right of the mediation flow component. Rename the import to MDBImport. Right-click on the import and select Add Interface. In the next dialog, choose the PackageReceivedIF interface that you created earlier. What basically happens here is that you assign the interface that you created to be used to send a message to the MDB; the message being sent from the ESB must conform to this interface. Connect the import to the mediation flow component using the wire tool. (Click OK on any pop-up window that asks if it is okay to generate a reference on the mediation flow component.) The service provider is implemented as an MDB, so we are going to communicate with it over JMS; thus we are required to define a JMS binding on the import. Right-click the import and select the Generate Binding... => JMS Binding menu option. In the resulting dialog window, we define a set of critical values, as shown in Figure 5. Figure 5.
Configure JMS import service

- JMS messaging domain -- Leave the default value, Point-to-Point, because we are using a specific queue to which messages are sent, as opposed to a pub-sub topic.
- Configure new messaging provider resources -- Selecting this means that all the relevant JMS resources, including queues and their respective SIBus artifacts, will be generated automatically for you once the mediation module is deployed to a server.
- Serialization type -- Messages that flow through the mediation are turned into a business object when they are received via an export, and they are transformed into the appropriate output format when going out via an import. In the case of JMS, you need to tell the system which class to use to do this transformation. WebSphere ESB provides a couple of pre-built classes that take specifically formatted XML messages and transform them to and from the business object format. In our case, you will use a class that takes a JMS TextMessage and converts it into a business object. The text message must follow a specific format so that the converter class can determine the right business object to use. We will revisit this later. (WebSphere ESB V6.0.2 will introduce support for JMS messages that do not contain XML, which can be mapped to a business object but can have arbitrary content instead. For a more detailed description of how to create your own serialization logic for JMS content, see Building a powerful, reliable SOA with JMS and WebSphere ESB.)
- JMS Function Selector -- Remember that we explained how we view the application receiving the JMS message as a service? This means that it supports one or more operations. In the case of a JMS-based service, one queue can be reused for multiple operations. Hence, you need to tell the system which type of message to map to which operation on the service.
WebSphere ESB does this by setting a JMS header field, called TargetFunctionName, that contains the name of the invoked operation on the outgoing message. The receiving application, in our case the PackageReceivedMDB, can now distinguish between operations by inspecting this header. In our scenario, however, we are not using this, since the MDB only supports one operation anyway. Therefore, uncheck this option. After setting all values as described above, click OK to generate the binding. Select the import in the assembly editor and go to the Properties tab at the bottom of the tool. Within the Properties view, select the Summary tab. Your view should now look like Figure 6. Figure 6. Import properties Note how the name of the send queue is set to PackageReceivedModule/MDBImport_SEND_D. This name was generated when you created the binding, and the respective artifacts will be generated at deployment time. You will need to use this queue name later when you configure the MDB. There is one thing left to do, namely define the appropriate JMSType on the outgoing message. The MDB only handles messages that have the JMSType header field set to package_received (this is defined in the MDB's deployment descriptor). Define the JMSType in the Method bindings tab of the Properties view, as shown in Figure 7. Figure 7. Method bindings Go back to the assembly editor and drag and drop an export onto the canvas, to the left of the mediation component, and name it JMSClientExport. This export will expose the mediation to the JMS test client. Add the PackageReceivedIF interface to the export and connect it to the mediation flow component. This will also create an interface on the mediation flow component. Connect this export to the PackageRecievedMediation using the wire tool. Similar to the import, we need to generate JMS bindings to the export, since that is how the client will communicate with the mediation. Right-click the export and select the Generate Binding... 
=> JMS Binding menu option. The resulting dialog window looks almost exactly like the one for the import. You need the same definitions here, with one exception, as you can see in Figure 8. Figure 8. JMS bindings for export For Serialization type, be sure to select Business Object XML using JMS TextMessage. The export will use the default JMS function selector class. As described above, the incoming JMS message must be mapped to a specific operation on the service interface, and the default selector class expects the TargetFunctionSelector JMS header field to be set to the name of the invoked operation. You will update the client application below to set this JMS header field. Click OK to generate the binding and look at its Properties view. Select the Binding - Endpoint configuration tab, and within that tab, the JMS Destinations tab. Expand Receive Destination Properties. The view should look like Figure 9. Figure 9. Receive Destination Properties Note how the name of the queue that receives messages for the mediation is set to "PackageReceivedModule.JMSClientExport_RECEIVE_D". We will be configuring the client application to use this queue name. After you have saved all the changes, you are ready for the final step, which is to create the flow component's implementation. Right-click PackageReceivedMediation in the assembly editor and select Generate Implementation. This will open the mediation flow editor. For our example, we will limit this to very simple mediation logic: you will log each message that goes through the mediation. This will help you visualize the messages flowing through the ESB. To do so, connect the package_received operation on PackageReceivedIF on the left (this represents the interface -- connected to the export -- of the mediation flow component) with the package_received operation on PackageReceivedIFPartner (this represents the reference -- connected to the import -- of the mediation flow component). 
In the bottom half of the editor, drop the Message Logger mediation primitive onto the canvas and wire the message flow so that each message goes through the logger, as shown in Figure 10. Figure 10. Wire each message through the logger Save all changes, both in the mediation flow editor and the assembly editor. The mediation module is complete and ready to be deployed on the server. Before you do that, however, you still have to adjust the requester code and provider configurations. D. Set up the service requester To complete this task, you need to perform these general steps: - Import the J2EE test client application into WebSphere Integration Developer. - Configure the queue name in the deployment descriptor to be the queue generated by the export of the WebSphere ESB mediation module. - Update the application code to send an XML message and to set the JMSHeader TargetFunctionName, as needed by WebSphere ESB. - Create a JMS connection factory on the WebSphere ESB server for the client to use to get a connection. Now, the details: Import the test client application, which is located in the PackageReceivedClient.ear file. This is an updated EAR file based on the one we have used in our previous article. The updates are explained below. - When you select Import => EAR, you may be asked to enable Base J2EE Support if you have not already done so. If so, answer OK and switch to the J2EE perspective. - During the import, make sure that you name the EAR project PackageReceivedClientEARand select WebSphere ESB Server v6.0 as the target server. The new Application Client project, named PackageReceivedClient, has a Main.java file in it that contains the code for the test client. Next, change the name of the queue that the test message will be sent to since, instead of sending the message to the queue the MDB listens on, you want the message sent to the WebSphere ESB export. 
This name is referenced in the deployment descriptor for the client project, so open it by double-clicking it. The deployment descriptor contains a reference called jms/PackageReceivedQueue. Change its WebSphere Bindings => JNDI name to PackageReceivedModule/JMSClientExport_RECEIVE_D, which is the name of the queue WebSphere ESB generated for its export. In the original test client, we sent a plain string to the queue. In our updated scenario, you will send an XML-formatted string that represents the business object that we defined in the mediation module. This enables the default serializer to transform this XML message into a business object we can manipulate in the mediation. Following the interface and business object definition schema you created earlier, the XML string should look like this: Therefore, the updated Main.java file sends this new message: Moreover, remember that in WebSphere ESB, you are using the default JMS function selector that picks the appropriate operation on the service interface to call. You need to add a JMS header field (TargetFunctionName) to indicate the operation you are invoking (package_received): You will note that we have already made those changes to the code, so there is nothing for you to do here. You also need to do some JMS configuration to enable the J2EE client application (the service requester) to connect to the WebSphere ESB server using its JMS resources. Open the admin console for the test server (start the server if it isn't already started) by going to the Servers view. Right-click on the WebSphere ESB v6.0 server and select Run administrative console. The J2EE client application uses a JMS connection factory to connect to the JMS queue. You will create this connection factory on the server and then let the client application retrieve it from there. The client then opens a connection to the JMS queue on that server.
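The article's original XML listing is not reproduced above. Based on the interface defined earlier (a package_received operation wrapping a PackageReceivedBO business object with a single string attribute called message), the payload plausibly takes a shape like the one built below. This is only an illustrative sketch: the element nesting and any namespace are my assumptions, not taken from the source.

```java
public class PayloadSketch {
    public static void main(String[] args) {
        String packageNumber = "1234567"; // hypothetical package number

        // Hypothetical shape of the Business Object XML payload; the wrapper
        // element names mirror the operation and parameter names defined in
        // the mediation module, but are illustrative only.
        String xml =
              "<package_received>"
            +   "<packageReceivedBO>"
            +     "<message>Package " + packageNumber + " was delivered</message>"
            +   "</packageReceivedBO>"
            + "</package_received>";

        System.out.println(xml);
    }
}
```

In the real client, a string of this form would be placed into a JMS TextMessage so that the default serializer can convert it into the PackageReceivedBO business object inside the mediation.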
Note that the client uses a specific port to communicate with the messaging engine on the server, which by default is 7276. However, the test server running within the tool will most likely use a different port, so you must configure the connection factory with the correct port number. To find out which port number your server uses, go to the administrative console and click on the Servers => Application Servers... node. Select the server named server1. The screen will now show information about that server, with a link called Ports, as shown in the Figure 11. Figure 11. Find ports used by server Select the Ports link to retrieve a list of the ports that your server is using. The port number that is used for JMS communication is called SIB_ENDPOINT_ADDRESS. In Figure 12, the port number is 7278. Figure 12. Ports used by servers Remember this port number. Now, in the administrative console, navigate to Resources => JMS Providers => Default Messaging. Click the JMS connection factory link, and then New. Enter these values: - Name: TheConnectionFactory - JNDI name: jms/TheConnectionFactory - Bus name: SCA.APPLICATION.esbCell.bus - Provider endpoints: localhost:7278:BootstrapBasicMessaging The Provider endpoints definition should contain the port number for your messaging engine (which you looked up earlier); in our example, it was 7278. (The specific cell name varies based on individual configuration, so the name of the default SCA application bus may vary between installations.) You can find more details about connecting to a non-default JMS provider port in the WebSphere Application Server Information Center. Leave all other fields with their defaults and click on OK. Save your changes and close the administrative console. E. Set up the service provider As we have mentioned earlier, the service provider that receives the JMS message is implemented using a message-driven bean. 
We will use the same application that we used in the previous article series; the appropriate enterprise archive (EAR) file is included in the download materials for this article. To complete this task, you need to perform these general steps: Import the EAR with the MDB into WebSphere Integration Developer. Configure the deployment descriptor to listen on the queue generated by the WebSphere ESB mediation module's import. Create an ActivationSpec for the MDB on the WebSphere ESB server (since it is the same server being used to run the MDB EAR). Now, the details: In WebSphere Integration Developer, switch to the J2EE perspective. Before you import the PackageReceived.ear file, start up the WebSphere ESB test server in WebSphere Integration Developer, if it is not already started. - When importing the EAR file, make sure you name the EAR project PackageReceivedEAR. - Select WebSphere ESB Server v6.0 as the target server for the project. Leave all other fields as their defaults. - After the import has completed, you should see a new EJB project called PackageReceived, with one MDB in it called PackageReceived. Change the MDB's deployment descriptor to configure it to get messages from the queue on which WebSphere ESB puts them. Double-click the deployment descriptor to open it in the editor. On the Bean tab for the PackageReceived MDB, remove the Message destination entry. You also need to change the WebSphere Bindings => Destination JNDI name of the queue that the message is received from to PackageReceivedModule/MDBImport_SEND_D, which was generated by WebSphere ESB for the mediation module's import. (If you look at the value of the messageSelector, you will see it is set to the JMSType that you set in the import binding of the mediation.) The deployment descriptor should now appear as shown in Figure 13. Figure 13. Updated MDB deployment descriptor If you see any additional compile errors after the import, select Project => Clean...
to rebuild the project and solve those errors. You also need to do some JMS configuration on the server the MDB runs on, which for simplicity is our WebSphere ESB server. Open the admin console for the test server, by going to the Servers view. Right-click on the WebSphere ESB v6.0 server and select Run administrative console (assume the server is started; if it isn't started, do so now). The service provider application uses a message-driven bean to receive messages from the ESB. This MDB is configured to use an activation specification that contains the definition of the queue, among other things. This is all standard J2EE business, and so we need to configure the activation specification before deploying the application. In the administrative console, expand Resources => JMS Providers => Default messaging. - In the lower right section of the screen, click the link named JMS activation specification. - On the following screen, click on New and enter the following values for the new activation specification: - Name: PackageReceivedActivationSpec - JNDI name: eis/PackageReceivedActivationSpec - Destination JNDI name: PackageReceivedModule/MDBImport_SEND_D - Bus name: SCA.APPLICATION.esbCell.bus (Again, the actual bus name may vary based on individual configuration.) Figure 14. Activation specification Leave all other default values and click OK. Save your changes. This is all you need to do for the service provider. F. Run an end-to-end test You are finally ready to deploy the entire example to the test server and run it. In WebSphere Integration Developer, from the top menu, select Project => Clean... => Clean all projects => OK. This action ensures consistency across the code generated for the project. In the Servers view, right-click on WebSphere ESB Server v6.0 and select Add and remove projects.... In the resulting dialog, select PackageReceivedModuleApp first, then click on Add >. Next, add PackageReceivedEAR to the server, then click Finish. 
By adding them in this order, you ensure that the PackageReceivedModuleApp is loaded first, which is necessary because it sets up the destinations that are used by the MDB EAR. If the projects are loaded in the wrong sequence, you might see errors on loading of the ActivationSpec that the destination is not found. Figure 15. Add available projects to configured projects Note that the client application does not get deployed to the server, since it runs as a client and thus is not deployed. When the publish step is completed, stop and start your server, using the Restart or the Stop and Start buttons in the tool. When restarted, both applications should start without problems. One way to ensure the applications have been started successfully is to launch the administrative console again and select Applications => Enterprise Applications. It should show a list of applications, including the two you just installed -- and all of them should be started (Figure 16). Figure 16. Application status To run the test client application, select Run => Run... from the WebSphere Integration Developer main menu. In the following dialog, select the WebSphere v6.0 Application Client configuration, then click on New. In this new configuration, select WebSphere ESB Server v6.0 as the WebSphere runtime, enter PackageReceivedClientEAR as the Enterprise application, and use PackageReceivedClient as the application client module. Also, check the Enable application client to connect to a server checkbox, and select Use specific server with the WebSphere ESB Server v6.0 option, as shown in Figure 17. Figure 17. Select configuration You can now click Run to start the client. Once the client completes, it will show the following in the console: Figure 18. 
Application status messages in console These console messages indicate that the message has been delivered to the first queue, from where it will be picked up by the mediation flow component, be logged, and then forwarded to the second queue, which delivers it to the MDB. You can switch the console view between different processes, showing output for both the application client, as well as the server process: Figure 19. Alternate console view Switch your console output to the test server process. Both the mediation flow component and the MDB run in the same server, so their output shows up in the console together. Figure 20. Console messages for mediation flow component and message-driven bean The database related printouts indicate that the message logger primitive has executed and the System.out messages come from the MDB. This article described how to turn a simple, point-to-point JMS scenario, with a sender application and a receiver application, into one where you establish an ESB mediation between them. The ESB provides decoupling and separation of concerns, since things like logging can now be handled within the ESB mediation, rather than needing to be done in the provider and/or consumer code itself, thus establishing two core principles of SOA. In our scenario, we used WebSphere ESB and its support for JMS protocol bindings to create the mediation. Even though we used JMS as the protocol on both sides of the ESB in this example, we will show in a future installment how you can run different protocols into and out of the mediation component, leaving it up to the ESB to handle the protocol switch. In the next article, we will focus on a more Web services-oriented scenario, before we continue and tie both of these worlds (JMS/MQ and Web services) together. Stay tuned! Oh, one more thing: As of Part 3 in this series, we will begin leveraging the new 6.0.2 release of both WebSphere ESB and the WebSphere Integration Developer tool! 
- Part 1: An introduction to using WebSphere ESB, or WebSphere ESB vs. SIBus
- Part 3: Adding Web services and promoting properties

Information about download methods

- Building an ESB with WebSphere Application Server V6 Part 3: A simple JMS messaging example
- An introduction to the IBM Enterprise Service Bus
- JMS Tutorial
- WebSphere Enterprise Service Bus product page
- Getting started with WebSphere Enterprise Service Bus and WebSphere Integration Developer
- Developing custom mediations for WebSphere Enterprise Service Bus
- Building a powerful, reliable SOA with JMS and WebSphere ESB
- Dynamic routing at runtime in WebSphere Enterprise Service Bus
- Tutorial: Invoking a Web service with a JMS client
- Service Component Architecture
- Redbook: Enabling SOA Using WebSphere Messaging.
http://www.ibm.com/developerworks/websphere/techjournal/0612_reinitz/0612_reinitz.html
crawl-003
refinedweb
5,135
53.71
No, the "this" keyword cannot be used to refer to the static members of a class. This is because the "this" keyword points to the current object of the class, while a static member does not need any object to be called. A static member of a class can be accessed directly, without creating an object, in Java.

public class StaticTest {
   static int a = 50;
   static int b;

   static void show() {
      System.out.println("Inside the show() method");
      b = a + 5;
   }

   public static void main(String[] args) {
      show();
      System.out.println("The value of a is: " + a);
      System.out.println("The value of b is: " + b);
   }
}

Inside the show() method
The value of a is: 50
The value of b is: 55
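The rule can be seen in a second sketch (the class and field names here are invented for illustration): inside a static method there is no current object, so "this" is unavailable, and static members are reached directly or through the class name.

```java
// Illustrative example: a static counter shared by the whole class.
public class CounterDemo {
    static int count = 0;          // belongs to the class, not to any object

    static int increment() {
        // 'this' cannot be used here: a static method has no current
        // object, so we refer to the static field directly.
        count = count + 1;
        return count;
    }

    public static void main(String[] args) {
        // Access through the class name, with no object created.
        System.out.println(CounterDemo.increment());
        System.out.println(CounterDemo.increment());
    }
}
```

Trying to write `this.count` inside `increment()` would be a compile-time error, which is exactly the point made above.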
https://www.tutorialspoint.com/can-a-this-keyword-be-used-to-refer-to-static-members-in-java
CC-MAIN-2021-25
refinedweb
122
72.16
Keywords in Python

In this part of the Python programming tutorial, we will introduce all keywords in the Python language.

List of keywords

The list of keywords may change in the future.

#!/usr/bin/python
# keywords.py

import sys
import keyword

print "Python version: ", sys.version_info
print "Python keywords: ", keyword.kwlist

This script prints the version of Python and its actual keyword list.

print keyword

The print keyword is used to print numbers and characters to the console. (In Python 3, print is no longer a keyword; it is a built-in function.)

#!/usr/bin/python
# tutorial.py

print "*" * 24
print "*" * 24
print "*" * 24
print
print "\tZetCode"
print
print "*" * 24
print "*" * 24
print "*" * 24

$ ./tutorial.py
************************
************************
************************

	ZetCode

************************
************************
************************

Control flow

The while keyword is a basic keyword for controlling the flow of the program. The statements inside the while loop are executed until the expression evaluates to False.

#!/usr/bin/python
# sum.py

numbers = [22, 34, 12, 32, 4]
sum = 0

i = len(numbers)
while (i != 0):
    i -= 1
    sum = sum + numbers[i]

print "The sum is: ", sum

In our script we want to calculate the sum of all values in the numbers list. We utilize the while loop. We determine the length of the list. The while loop is executed over and over again, until i is equal to zero. In the body of the loop, we decrement the counter and calculate the sum of values.

The break keyword is used to interrupt the cycle if needed.

#!/usr/bin/python
# testbreak.py

import random

while (True):
    val = random.randint(1, 30)
    print val,
    if (val == 22):
        break

In our example, we print random integer numbers. If the number equals 22, the cycle is interrupted with the break keyword.

$ ./testbreak.py
14 14 30 16 16 20 23 15 17 22

The next example shows the continue keyword. It is used to interrupt the current cycle without jumping out of the whole cycle. A new cycle will begin.
#!/usr/bin/python
# testcontinue.py

import random

num = 0
while (num < 1000):
    num = num + 1
    if (num % 2) == 0:
        continue
    print num,

In the example we print all numbers smaller than 1000 that cannot be divided by number 2.

The if keyword is the most commonly used control flow keyword. The if keyword is used to determine which statements are going to be executed.

#!/usr/bin/python
# licence.py

age = 17
if age > 18:
    print "Driving licence issued"
else:
    print "Driving licence not permitted"

The if keyword tests if the person is older than 18. If the condition is met, the driving licence is issued. Otherwise, it is not. The else keyword is optional. The statement after the else keyword is executed, unless the condition is True.

Next we will see how we can combine the statements using the elif keyword. It stands for else if.

#!/usr/bin/python
# hello.py

name = "Luke"
if name == "Jack":
    print "Hello Jack!"
elif name == "John":
    print "Hello John!"
elif name == "Luke":
    print "Hello Luke!"
else:
    print "Hello there!"

If the first test evaluates to False, we continue with the next one. If none of the tests is True, the else statement is executed.

$ ./hello.py
Hello Luke!

The for keyword is used to iterate over items of a collection in the order that they appear in the container.

#!/usr/bin/python
# lyrics.py

lyrics = """\
Are you really here or am I dreaming
I can't tell dreams from truth
for it's been so long since I have seen you
I can hardly remember your face anymore
"""

for i in lyrics:
    print i,

In the example, we have a lyrics variable holding a strophe of a song. We iterate over the text and print the text character by character. The comma in the print statement prevents printing each character on a new line.
$ ./lyrics.py
A r e   y o u   r e a l l y   h e r e   o r   a m   I   d r e a m i n g 
I   c a n ' t   t e l l   d r e a m s   f r o m   t r u t h 
f o r   i t ' s   b e e n   s o   l o n g   s i n c e   I   h a v e   s e e n   y o u 
I   c a n   h a r d l y   r e m e m b e r   y o u r   f a c e   a n y m o r e

This is the output of the script.

Boolean expressions

First we will introduce keywords that work with boolean values and expressions: is, or, and, and not.

#!/usr/bin/python
# objects.py

print None == None
print None is None
print True is True
print [] == []
print [] is []
print "Python" is "Python"

The == operator tests for equality. The is keyword tests for object identity: whether we are talking about the same object. Note that multiple variables may refer to the same object.

$ ./objects.py
True
True
True
True
False
True

Two empty lists are equal, but the [] is [] expression returns False, because they are two distinct objects. On the other hand, "Python" is "Python" returns True. This is because of optimization: if two string literals are equal, they are put in the same memory location. A string is an immutable entity, so no harm can be done.

The not keyword negates a boolean value.

#!/usr/bin/python
# grades.py

grades = ["A", "B", "C", "D", "E", "F"]
grade = "L"

if grade not in grades:
    print "unknown grade"

In our example we test whether the grade value is from the list of possible grades.

$ ./grades.py
unknown grade

The keyword and is used if all conditions in a boolean expression must be met.

#!/usr/bin/python
# youngmale.py

sex = "M"
age = 26

if age < 55 and sex == "M":
    print "a young male"

In our example, we test if two conditions are met. "a young male" is printed to the console if the variable age is less than 55 and the variable sex is equal to M.

$ ./youngmale.py
a young male

The keyword or is used if at least one condition must be met.

#!/usr/bin/python
# name.py

name = "Jack"
if (name == "Robert" or name == "Frank" or name == "Jack"
        or name == "George" or name == "Luke"):
    print "This is a male"

If at least one of the expressions is true, the print statement is executed.
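These boolean keywords often appear together in a single condition. A small combined sketch (the grade values are illustrative):

```python
# Combine the in, is, not, and, and or keywords in one place.
grades = ["A", "B", "C", "D", "E", "F"]
grade = "B"

# 'in' membership test joined with 'and' and 'is not'
valid = grade in grades and grade is not None
print(valid)                        # True

# 'not' and 'or' in one expression
print(not valid or grade == "B")    # True

# 'not in' negated membership
print("L" not in grades)            # True
```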
When we work with the and/or keywords in the Python programming language, short circuit evaluation takes place: the second operand is evaluated only when the first operand does not already determine the result. A typical example follows.

#!/usr/bin/python

x = 10
y = 0

if (y != 0 and x/y < 100):
    print "a small value"

The first part of the expression evaluates to False. The second part of the expression is not evaluated. Otherwise, we would get a ZeroDivisionError.

Modules

The following keywords are used with modules. Modules are files in which we organize our Python code.

The import keyword is used to import other modules into a Python script.

#!/usr/bin/python
# pi.py

import math

print math.pi

We use the import keyword to import the math module into the namespace of our script. We print the PI value.

We use the as keyword if we want to give a module a different alias.

#!/usr/bin/python
# rand.py

import random as rnd

for i in range(10):
    print rnd.randint(1, 10),

In this case, we import the random module. We will print ten random integer numbers. We give the random module a different alias, namely rnd. In the script we reference the module with the new alias. Notice that we cannot name the script random.py or rnd.py; we would get errors.

$ ./rand.py
1 2 5 10 10 8 2 9 7 2

The from keyword is used for importing a specific variable, class or function from a module.

#!/usr/bin/python
# testfrom.py

from sys import version

print version

From the sys module, we import the version variable. If we want to print it, we do not need to use the module name. The version variable was imported directly into our namespace and we can reference it directly.

$ ./testfrom.py
2.5.1 (r251:54863, Mar 7 2008, 03:41:45) [GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)]

Functions

Here we will describe keywords associated with functions. The def keyword is used to create a new user defined function. Functions are objects in which we organize our code.

#!/usr/bin/python
# function.py

def root(x):
    return x * x

a = root(2)
b = root(15)

print a, b

The example demonstrates a simple new function.
The function will calculate the square of a number. The return keyword is closely connected with a function definition; it exits the function and returns a value. The value is then assigned to the a and b variables.

The lambda keyword creates a new anonymous function. An anonymous function is a function which is not bound to a specific name. It is also called an inline function.

#!/usr/bin/python
# lambda.py

for i in (1, 2, 3, 4, 5):
    a = lambda x: x * x
    print a(i),

As you can see in the previous example, we do not create a new function with the def keyword. Instead of that we use an inline function on the fly.

$ ./lambda.py
1 4 9 16 25

If we want to access variables defined outside functions, we use the global keyword.

#!/usr/bin/python
# testglobal.py

x = 15

def function():
    global x
    x = 45

function()
print x

Normally, when assigning to the x variable inside a function, we create a new local variable, which is valid only in that function. But if we use the global keyword, we change a variable outside the function definition.

$ ./testglobal.py
45

Exceptions

Next we will work with keywords that are used with exception handling.

$ cat films
Fargo
Aguirre, der Zorn Gottes
Capote
Grizzly man
Notes on a scandal

This is a file containing some film titles. In the code example, we are going to read it.

#!/usr/bin/python
# files.py

f = None

try:
    f = open('films', 'r')
    for i in f:
        print i,
except IOError:
    print "Error reading file"
finally:
    if f:
        f.close()

We try to read a films file. If no exception occurs, we print the contents of the file to the console. There might be an exception; for example, if we provided an incorrect file name. In such a case an IOError exception is raised. The except keyword catches the exception and executes its code. The finally clause is always executed in the end. We use it to clean up our resources.

In the next example, we show how to create a user defined exception using the raise keyword.
#!/usr/bin/python
# userexception.py

class YesNoException(Exception):
    def __init__(self):
        print 'Invalid value'

answer = 'y'

if (answer != 'yes' and answer != 'no'):
    raise YesNoException
else:
    print 'Correct value'

In the example, we expect only yes/no values. For other possibilities, we raise an exception.

$ ./userexception.py
Invalid value
Traceback (most recent call last):
  File "./userexception.py", line 13, in <module>
    raise YesNoException
__main__.YesNoException

Other keywords

The del keyword deletes objects.

#!/usr/bin/python
# delete.py

a = [1, 2, 3, 4]
print a

del a[:2]
print a

In our example, we have a list of four integer numbers. We delete the first two numbers from the list. The outcome is printed to the console.

$ ./delete.py
[1, 2, 3, 4]
[3, 4]

The pass keyword does nothing. It is a very handy keyword in some situations.

def function():
    pass

We have a function. This function is not implemented yet. It will be later. The body of the function must not be empty, so we can use a pass keyword here, instead of printing something like "function not implemented yet" or similar.

The assert keyword is used for debugging purposes. We can use it for testing conditions that are obvious to us. For example, we have a program that calculates salaries. We know that the salary cannot be less than zero, so we might put such an assertion in the code. If the assertion fails, the interpreter will complain.

#!/usr/bin/python
# salary.py

salary = 3500
salary -= 3560 # a mistake was made

assert salary > 0

During the execution of the program a mistake was made. The salary becomes a negative number.

$ ./salary.py
Traceback (most recent call last):
  File "./salary.py", line 9, in <module>
    assert salary > 0
AssertionError

The execution of the script will fail with an AssertionError.

The class keyword is the most important keyword in object oriented programming. It is used to create new user defined objects.
#!/usr/bin/python
# square.py

class Square:
    def __init__(self, x):
        self.a = x

    def area(self):
        print self.a * self.a

sq = Square(12)
sq.area()

In the code example, we create a new Square class. Then we instantiate the class and create an object. We compute the area of the square object.

The exec keyword executes Python code dynamically.

#!/usr/bin/python
# execution.py

exec("for i in [1, 2, 3, 4, 5]: print i,")

We print five numbers from a list using a for loop. All within the exec keyword.

$ ./execution.py
1 2 3 4 5

Finally, we mention the in keyword.

#!/usr/bin/python
# inkeyword.py

print 4 in (2, 3, 5, 6)

for i in range(25):
    print i,

In this example, the in keyword tests if the number four is in the tuple. The second usage is traversing a tuple in a for loop. The built-in function range() returns integers 0 .. 24.

$ ./inkeyword.py
False
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24

The yield keyword is used with generators.

#!/usr/bin/python
# yieldkeyword.py

def gen():
    x = 11
    yield x

it = gen()
print it.next()

The yield keyword exits the generator and returns a value.

$ ./yieldkeyword.py
11

In this part of the Python tutorial, we have covered Python keywords.
http://zetcode.com/lang/python/keywords/
CC-MAIN-2015-22
refinedweb
2,324
75.1
Delete a dictionary entry by key #include <sys/strm.h> strm_dict_t* strm_dict_key_delete(strm_dict_t *dict, char const *key) This function creates a new dictionary that is an exact replica of the one specified by dict, except that the entry specified by key is deleted. If no such entry is found, the original dictionary is left unmodified and the same handle is returned. If an entry is found, the original dictionary handle is destroyed and a handle to the new dictionary is returned. On success, a handle to the new dictionary (when the entry was deleted) or to the existing dictionary (when the entry wasn't found). On failure, a null pointer.
https://www.qnx.com/developers/docs/7.1/com.qnx.doc.libstrm/topic/strm_dict_key_delete.html
CC-MAIN-2022-27
refinedweb
110
53.51
[SOLVED] Qt Creator not recognising Widgets despite including all necessary includes

Qt Creator doesn't recognise QTreeWidget when typed in. I literally included all header files:

@#include <QDialog>
#include <QtCore>
#include <QtGui>
#include <QWidget>@

And even the tutorial I'm watching had LESS exhaustive header files and it worked for them (they missed the QWidget include). I do understand that:

@#include <QTreeWidgetItem>@

is one way of making sure the compiler includes the TreeWidgetItem, which does work if I include it, but I don't see why that needs to be in there; I included QWidget, which includes ALL widgets. In fact, in the tutorial the only includes were:

@#include <QDialog>
#include <QtCore>
#include <QtGui>@

i.e. it didn't even include QWidget and the tutorial pulled it off! Please can anyone explain why this is so?

- arnolddumas

The thing is, the tutorial you're reading was written for Qt4. In Qt5, some includes must be changed. In Qt4 the <QtGui> include was used to make all the widgets known to the compiler. But in Qt5, the widgets (QLineEdit, QTextEdit ...) have been moved to a brand new module called 'widgets'. Therefore you'll need to replace QtGui by QtWidgets. Notice there's an 's' at the end of the QtWidgets include. So your includes should now look like that:

@#include <QtCore>
#include <QtWidgets>@

Also notice that QDialog is already included with QtWidgets. And don't forget to add the new 'widgets' module in your *.pro file. Now everything should just work fine.

bq. I included QWidget which includes ALL widgets

Including <QWidget> gives the compiler a complete declaration of the QWidget base class. It does not generally include a declaration of sub-classes like QLabel, QTextEdit etc.; they have their own includes. Including <QtGui> (Qt4) or <QtWidgets> (Qt5) does include complete declarations for all widgets.
You should not get into the habit of including all of the GUI includes for non-trivial project unless you are particularly enamoured of longer-than-necessary build times. It's not clear what you mean by, "Qt creator doesnt recognise QTreeWidget when typed in." Qt Creator cannot perform auto-complete functions for classes without a complete declaration of a class. If you only have a forward declaration, perhaps provided by a related include that doesn't itself require a full declaration, then Creator knows only that the class exists but not how it is built so it cannot offer completion for member functions etc. For example, the QWidget include declares the existence but not the content of class QLayout, QVariant, QStyle etc. Thanks guys, I included QtWidgets and it seems to work!
https://forum.qt.io/topic/25545/solved-qt-creator-not-recognising-widgets-despite-inlcuding-all-necessary-includes
CC-MAIN-2018-05
refinedweb
431
62.17
Open LINQPad's query properties dialog (this is available also through the Query main menu). Add references to the OpenAccess assemblies that you need, our data context assembly and the model that will be used to run the queries against. The list should look like:

Where the Model.dll contains the model from our SilverLight integration demo (available here). We used the sample model for practical reasons: it has IQueryable endpoints developed on the data context and it is already tested with ADO.NET Data Services. Our context implementation that currently serves Data Services is inside the Telerik.OpenAccess.40.dll. Actually, we added references to the Data Services as well, because the Telerik.OpenAccess.40.dll is built with references to them. So after we have the references in place, it is time to add a few namespaces that will allow LINQPad to resolve correctly the types and extents used in our sample query:

You may want to click the "Set as default for new queries" button in the lower left so you don't need to set up these assembly and namespace references the next time you start LINQPad. Alternatively, when you save a *.linq query file it will save these references and reload them the next time you open the .linq query file. This can be handy for switching between different databases and data contexts.

All the connection maintenance is done in the model assembly (those are our assumptions). In LINQPad, change the query type to "C# Statement(s)" and paste the following sample code:

Model.ObjectScopeProvider1.AdjustForDynamicLoad();
OADataContext ctx = new OADataContext();
var q = from c in ctx.Orders
        select c;
q.Dump();

Press F5 or Ctrl+E to run this and you should see the result:

One additional thing that remains unsolved is the generated SQL dump, and that is what we will look into very soon.
http://www.telerik.com/blogs/using-linqpad-with-telerik-openaccess-orm
CC-MAIN-2017-17
refinedweb
305
64.41
ERF(3) BSD Programmer's Manual ERF(3) erf, erff, erfc, erfcf - error function operators libm #include <math.h> double erf(double x); float erff(float x); double erfc(double x); float erfcf(float x); These functions calculate the error function of x. The erf() calculates the error function of x; where erf(x) = 2/sqrt(pi)*integral from 0 to x of exp(-t*t) dt. The erfc() function calculates the complementary error function of x; that is erfc() subtracts the result of the error function erf(x) from 1.0. This is useful, since for large x places disappear. math(3) The erf() and erfc() functions appeared in 4.3BSD. MirOS BSD #10-current April 20,.
http://mirbsd.mirsolutions.de/htman/i386/man3/erf.htm
crawl-003
refinedweb
118
57.16
Simplest Pytest Setup

This is the most basic project layout for running pytest, a desirable test runner for your Python code.

Prerequisites

First, get python3 installed. Then a virtual environment tool, like pipenv.

Get dependencies

Then use pipenv to get your pytest dependencies.

cd my_project
pipenv shell
pipenv install pytest pytest-watch

Create project files

Then create your source file, vim module.py:

def placeholder():
    return True

Then create your test file, vim module_test.py:

from module import placeholder

def test_placeholder():
    assert placeholder()

Ensure the import will work by making the directory a package: touch __init__.py.

Run the tests

Now you should be ready to run tests. pytest will run the tests once. ptw will run the tests in watch mode.

That's the simplest pytest setup I could conceive. Can you think of a simpler one? Of a better one?
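Once this runs, the test file can grow without importing pytest itself: pytest collects any function named test_*, and plain assert statements suffice. A slightly richer module_test.py sketch (placeholder() is inlined here so the snippet is self-contained; in the real layout it would be imported from module.py as shown above):

```python
def placeholder():
    # Inlined stand-in for module.placeholder
    return True

def test_placeholder_is_true():
    # pytest reports a failure whenever a bare assert fails
    assert placeholder() is True

def test_placeholder_type():
    assert isinstance(placeholder(), bool)
```

Running `pytest` in the project directory discovers and runs both functions with no extra configuration.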
https://jaketrent.com/post/simplest-pytest-setup/
CC-MAIN-2022-40
refinedweb
141
77.74
this is my first post and I just started with the Raspberry Pi. For my project I need to draw grid lines and decided to use ajstarks lib. Actually it was quite easy to generate the lines but there is still one problem. I would like to draw lines with a minimum width of 1 pixel. Is this possible with openVG? As you can see below I use Line and Stroke functions to draw lines. In that specific case I try to draw a single line at the bottom of the HDMI display but it actually draws two rows at the bottom. i.e. the width of the line is 2 pixels. Changing the StrokeWidth to values below 1 just dims the line(s). How can I create lines with a single pixel width with openVG? Best regards, Kai Code: Select all #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include "VG/openvg.h" #include "VG/vgu.h" #include "fontinfo.h" #include "shapes.h" int main() { int width, height; char s[3]; init(&width, &height); // Graphics initialization Start(width, height); // Start the picture Background(0, 0, 0); // Black background Stroke(255, 255, 0, 1); // Set the stroke color StrokeWidth(1); // Set the stroke width Line(1, 1, 1920, 1); // define line End(); // End the picture fgets(s, 2, stdin); // end with [RETURN] finish(); // Graphics cleanup exit(0); }
https://www.raspberrypi.org/forums/viewtopic.php?f=69&t=186000
CC-MAIN-2018-51
refinedweb
228
85.08
I want to pass a vector to a function by value (not by reference). After executing the code, arr[1], arr[2] and arr[3] are equal to 0. What arguments should the function getAverage take so that it does not modify arr?

#include <stdio.h>
#include <stdlib.h>

double getAverage(int v[]);

int main() {
    int arr[4], i;
    for (i = 0 ; i < 4 ; i++){
        printf("arr[%d]=", i);
        scanf("%d", &arr[i]);
    }
    printf("avrg=%lf", getAverage(arr));
    printf("\n%d %d %d", arr[1], arr[2], arr[3]);
    return 0;
}

double getAverage(int v[]) {
    int i;
    double avg;
    double sum = 0;
    for (i = 0; i < 4; ++i) {
        sum += v[i];
        v[i] = 0;
    }
    avg = sum / 4;
    return avg;
}
https://codedump.io/share/nnV6JV0A8dts/1/how-to-pass-a-vector-to-a-function-by-value-in-c
CC-MAIN-2017-13
refinedweb
263
70.13
This document describes best practices when making extensions, including how to be kind to your users. It assumes that you are already familiar with Building an Extension.

User Interface

Tools menu items

Using the Tools menu option gives the author the maximum amount of choice. Whether the extension's items should go at the top, bottom, or somewhere in between on the Tools menu, the author always has a choice. Ideally, the location would be below the Add-ons item, grouped with the other extension-related commands (menuitem:insertafter="javascriptConsole,devToolsSeparator"). Sub-menus should be used for single extensions needing multiple menu items, and a Tools menu item should not be created for options and preferences (for options and preferences, see the add-on manager). If possible, create a menu item in the menu where it is most applicable; for instance, a bookmark sharing extension should be called from the Bookmarks menu. To maintain the default theme, avoid the use of an icon next to the menu items.

Other UI elements

In general, toolbar items are very useful to end users because they can be removed or added to various toolbars as necessary. Status bar items should only be added for extensions that need constant monitoring, such as ad blocking, page ranking, or cookie management. Likewise, use context menu items sparingly -- only for tasks that are done frequently or on specific elements of a web page.

Focus

Don't steal focus. It's not your extension's job to take focus from the web content. If the user loads a website and they ask for focus, they should get it. Overriding their request isn't very nice.

About dialogs

There is a default popup About dialog that is created from install.rdf data; creating a new XUL About box is usually unnecessary. You can decrease download size by omitting a customized About box. Make one only if you have a special feature that needs to be included -- for example, a custom updater.

Theming

If you have XUL buttons in your extension that perform functions similar to ones that already exist in the browser -- for example, a feed reader that reloads and stops -- use icons from the browser's theme. Reusing these icons makes the extension lighter, while providing more consistency, especially for users using different themes.

Extension icons

Unique icons are usually worth their download weight. They allow for easy identification among other extensions in the Extensions manager.

Coding practices

Namespace conflicts

There are many namespaces which extensions often must share with other consumers, be they other add-ons, web code, or the browser itself. These often include areas such as:

- Object prototypes, such as String.prototype, which are often extended to add methods to native objects.
- Global variables, such as top-level declarations on scripts loaded into shared windows or web pages.
- Expando properties of shared objects, such as document objects, or DOM nodes.
- IDs and class names in HTML and XUL documents, when extensions add elements to web pages or browser windows.
- chrome: or resource: packages, which are often defined in chrome.manifest files.
- about: page URLs.
- XPCOM contract IDs, which are often registered in chrome.manifest files.

While these are among the most common examples of namespaces in which conflicts can occur, there are many others. In general, care must be taken whenever defining a name anywhere that other code might do likewise. Strategies to avoid such conflicts include:

- Avoid shared namespaces where possible: Many naming conflicts are best avoided by simply not sharing namespaces. Scripts can be loaded into their own globals, such as CommonJS modules, JavaScript modules, or Sandboxes, to avoid most global variable and prototype conflicts. Expando properties are best avoided using tools such as WeakMaps.
- Prefix names in shared namespaces When shared namespaces can't be avoided, the simplest solution to prevent conflicts is to use a distinct prefix for all of your names. Class names for HTML elements created by the Cool Beans extension, for instance, might all be prefixed with cool-beans-. Global variables might all be defined as properties of the CoolBeansobject. Some namespaces have specific conflict prevention conventions, which should be followed as appropriate. XPCOM contract IDs, for instance, should always begin with an @, followed by a domain name that the author controls, e.g., "@example.com/foo/bar;1" It is important that the prefix that you use be unlikely to conflict with other code, and that it be indicative of the name of your add-on. Generic prefixes such as myextension-, or short prefixes such as ffx-, are likely to be used elsewhere, and therefore unsuitable to the purpose. - Call .noConflict(true)where applicable Many common libraries which create global variables provide a method called noConflict, or similar, which revert any global variables they've declared, and return the object itself. For instance, calling jQuery.noConflict(true)will remove the window.jQueryand window.$variables, and return the jQueryobject itself, for future use by the caller. When available, these methods should always be used to prevent conflicts with third-party code. Names and Metadata Naming Be creative! Don't be redundant and include "extension," "Mozilla/Firefox/Thunderbird," or the version number in the name. Be original! Descriptions Use something that is descriptive, but that would fit in the default add-on manager width. The Mozilla extensions (Inspector/Reporter/Talkback) believe starting with a verb is the best way. For example, "Does an action in the browser." Documentation Assume that the vast majority of your users don't have inner working knowledge of Mozilla. 
Make sure your extension's homepage states the "obvious," including the purpose of your extension. Users also appreciate when your extension is shipped with a simple how-to document. IDs Firefox/Thunderbird 1.5 or later are much more strict about the IDs of extensions than their 1.0 counterparts. Make sure they're valid. Version numbering Please follow the Mozilla pattern: major version dot current incarnation dot security/bugfix release (like 1.0.7). Internationalization Locales Always use locale DTDs and property files, even if providing your extension in one language. It will make translation of your extension to another language easier. It occurs more often than you would think. Options Firefox users like options. Lots of options. Try to include everything a user could ever want to customize in your extension, remembering more can be added later. For a large number of options for your extension, break the options window into multiple pages (tabs) that are well labeled. Don't hesitate to give long descriptions for each preference, as long as they are easy to understand, even for non-computer-savvy users. However, please make sure the default set of preferences is adequate — don't require people to tweak options in order to get your extension's core functionality. Preferences' internal names Internal Firefox preference names for extensions or to be clear, the name of the preference as it appears in the about:config, should start with " extensions.," then the name of the extension, with a dot, then the name of the preference. For instance, a boolean for the Reporter extension's option for hiding the privacy statement is " extensions.reporter.hidePrivacyStatement". General Dependencies Requiring a user to download another extension in order to use yours isn't nice. Avoid the dependency on other extensions, especially extensions that you didn't develop.
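The prefixing strategy from the Coding practices section above can be sketched in a few lines of JavaScript (CoolBeans is the made-up extension name used in that section; the function is invented for illustration): all of the add-on's globals hang off a single, distinctly named object, and generated class names carry the same prefix.

```javascript
// One uniquely named global object holds everything the add-on defines,
// instead of scattering variables across the shared global scope.
var CoolBeans = CoolBeans || {};

// Illustrative helper: produce a prefixed class name for elements the
// extension inserts into pages, so they cannot collide with page CSS.
CoolBeans.className = function (text) {
  return "cool-beans-" + text;
};

console.log(CoolBeans.className("button")); // cool-beans-button
```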
https://developer.mozilla.org/de/Add-ons/Extension_etiquette
Type: Posts; User: controlsguy

Thanks for your feedback, Paul. If this is the case, how can I replace these operators in my code with the equivalent in C?

Thanks Paul. Will do.

I can't find any topics on the "<<" I am using. How do these normally work?

Paul, sorry for the confusion - The code needs to be in "C". Thank you for your help.

When trying to compile, I am receiving errors which I am assuming are pretty generic and common:
lin_interp.c:21: error: expected '=', ',', ';', 'asm' or '__attribute__' before '{' token...

Truly sorry about that guys. See below. Thank you for your help.
// Lin_Interp.c : Defines the entry point for the console application. //
#include "PACRXPlc.h" /* Include file applicable for...

I am trying to create C++ code that allows a linear equation (interpolator?) to take in a value and produce an output based on a table that is specified. The problem I am having is the GE compiler...
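The table-driven linear interpolation the last post describes can be sketched as follows. This is shown in Python for brevity (the thread ultimately needs C, but the arithmetic ports line for line), and the breakpoint table here is a made-up example, not data from the thread:

```python
# Piecewise-linear interpolation over a breakpoint table.
# xs must be sorted ascending; inputs outside the table are clamped
# to the endpoints.
def lin_interp(xs, ys, x):
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    # find the segment [xs[i], xs[i+1]] containing x, then blend linearly
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])

# example table: an arbitrary input-to-output curve
xs = [0.0, 10.0, 20.0, 40.0]
ys = [0.0, 5.0, 7.0, 8.0]
print(lin_interp(xs, ys, 15.0))  # midpoint of the 10-20 segment -> 6.0
```

The same loop-and-blend structure compiles unchanged as a C function over two arrays and a length, which also sidesteps the C++ `<<` stream operators the compiler was rejecting.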
http://forums.codeguru.com/search.php?s=05c2f8f910c371c24dac5640bd85530a&searchid=6640417
Wiki: SGE
Sun Grid Engine (SGE) Reference
by Oliver; May 7, 2014

Introduction
The Sun (or Oracle) Grid Engine, abbreviated as SGE, is a suite of commands for scheduling programs on a computer cluster. Picture the following scenario: you have 100 people sharing a computing cluster, and each wants to run programs on this resource. Sometimes user Alice wants to run a batch of very time- and memory-intensive programs; sometimes user Bob does; and sometimes they both do at the same time. The SGE framework manages all of this and tries to fairly allocate computing resources. The programs that Alice and Bob want to run go into a queue, and the scheduler decides when they actually do run.

Simple-mindedly, we can think of a computer cluster as a bunch of computers chained together, and can call each such computer a node. Instead of running a program (or a thousand programs) on your local computer, you can request to run these programs on nodes of the cluster. In SGE parlance, requesting a node on which to run your program is called "submitting a job," and the fundamental command to do this is called qsub. Running a program is called "running a job."

What's the advantage of qsub-ing vis-à-vis just running a program on your local computer? Taking advantage of a computing cluster is useful when you want to parallelize heavily, or perhaps run a memory-intensive job that is not suitable for your local machine. A good example from bioinformatics is parallelizing a blast job. Sun Grid Engine for Dummies gives an eloquent introduction to this subject:

Servers tend to be used for one of two purposes: running services or processing workloads. Services tend to be long-running and don't tend to move around much. Workloads, however, such as running calculations, are usually done in a more "on demand" fashion. When a user needs something, he tells the server, and the server does it. When it's done, it's done. For the most part it doesn't matter on which particular machine the calculations are run.
All that matters is that the user can get the results. This kind of work is often called batch, offline, or interactive work. Sometimes batch work is called a job. Typical jobs include processing of accounting files, rendering images or movies, running simulations, processing input data, modeling chemical or mechanical interactions, and data mining. Many organizations have hundreds, thousands, or even tens of thousands of machines devoted to running jobs.

Now, the interesting thing about jobs is that (for the most part) if you can run one job on one machine, you can run 10 jobs on 10 machines or 100 jobs on 100 machines. In fact, with today's multi-core chips, it's often the case that you can run 4, 8, or even 16 jobs on a single machine. Obviously, the more jobs you can run in parallel, the faster you can get your work done. If one job takes 10 minutes on one machine, 100 jobs still only take ten minutes when run on 100 machines. That's much better than 1000 minutes to run those 100 jobs on a single machine. But there's a problem. It's easy for one person to run one job on one machine. It's still pretty easy to run 10 jobs on 10 machines. Running 1600 jobs on 100 machines is a tremendous amount of work. Now imagine that you have 1000 machines and 100 users all trying to run 1600 jobs each. Chaos and unhappiness would ensue.

To solve the problem of organizing a large number of jobs on a set of machines, distributed resource managers (DRMs) were created. (A DRM is also sometimes called a workload manager. I will stick with the term DRM.) The role of a DRM is to take a list of jobs to be executed and distribute them across the available machines. The DRM makes life easier for the users because they don't have to track all their jobs themselves, and it makes life easier for the administrators because they don't have to manage users' use of the machines directly.
It's also better for the organization in general because a DRM will usually do a much better job of keeping the machines busy than users would on their own, resulting in much higher utilization of the machines. Higher utilization effectively means more compute power from the same set of machines, which makes everyone happy.

Here's a bit more terminology, just to make sure we're all on the same page. A cluster is a group of machines cooperating to do some work. A DRM and the machines it manages compose a cluster. A cluster is also often called a grid. There has historically been some debate about what exactly a grid is, but for most purposes grid can be used interchangeably with cluster. Cloud computing is a hot topic that builds on concepts from grid/cluster computing. One of the defining characteristics of a cloud is the ability to "pay as you go." Sun Grid Engine offers an accounting module that can track and report on fine grained usage of the system. Beyond that, Sun Grid Engine now offers deep integration to other technologies commonly being used in the cloud, such as Apache Hadoop.

The Basics
SGE commands usually begin with the letter q for queue.

Get an interactive node with 4 gigabytes for 8 hours (this should be the first thing you do in the morning :D):
$ qrsh -l mem=4G,time=8::

Submit a job to the cluster:
$ qsub myjob.sh

Monitor the status of your jobs:
$ qstat

Delete a particular job:
$ qdel [jobid]

Some Useful SGE Commands and One-Liners
See the SGE commands at your disposal:
$ ls -hl $( dirname $( which qstat ) )

See all the nodes:
$ qhost

Count and tabulate all jobs on the cluster (on our system, this command is only available on the head, or login, nodes):
$ qsum
See every job on the cluster submitted by any user:
$ qstat -u "*" | less

Delete any job whose job name begins with prefix:
$ qdel prefix*

Delete all your jobs (except interactive nodes):
$ qstat | sed '1,2d' | grep -v LOGIN | cut -f1 -d" " | xargs -i qdel {}

Monitor your jobs every 30 seconds (stop with Ctrl-C):
$ while true; do qstat; sleep 30; done
A more elegant way to do this is with the watch command, as we'll see in a second.

See extended options (which will show how much memory your job's consuming, etc.):
$ qstat -ext

The same, but update every 2 seconds:
$ watch qstat -ext

Tips via my co-worker, Albert. To add arguments to jobs that are still in the queue:
$ qalter [additional flag] [jobid]
For example, to set the email notification:
$ qalter -m be -M yourEmail@host.com [jobid]

To suspend a running job:
$ qmod -sj [jobid]
This is useful, for example, to freeze a job when space is almost full.

qsub-ing Flag Style
Let's take the following script, myscript.sh, as an example:

#!/bin/bash
echo "[start]"
echo "[date] "`date`
echo "[cwd] "`pwd`
# my script
sleep 5
echo "[end]"

Let's make a logs directory to store our job's output and error information:
$ mkdir logs # make a dir to store log files
Whatever our script would normally echo to std:out and std:error will go here instead. We can qsub this as follows:

$ qsub -N myscript -e logs -o logs -l mem=1G,time=1:: -S /bin/sh -cwd ./myscript.sh
Your job 3212900 ("myscript") has been submitted

where we used the flags:
- -N myscript : Name is "myscript"
- -e logs -o logs : Error and Output go into the logs/ directory
- -l mem=1G,time=1:: : Time and Mem request: 1 Gigabyte for 1 hr
- -S /bin/sh : Interpret the script with sh (this could just as well be the path to perl, python, etc.)
- -cwd : Run from the current working directory

$ cat logs/myscript.e3212900
Whew! It's empty; that's good.
Now the output:
$ cat logs/myscript.o3212900
[start]
[date] Tue May 6 13:23:22 EDT 2014
[cwd] /path/sge_test
[end]

qsub-ing Header Style
If you like, you can also put the flag information directly into the script's header as follows:

#!/bin/bash
#$ -N myscript
#$ -e logs
#$ -o logs
#$ -l mem=1G,time=1::
#$ -S /bin/sh
#$ -cwd
echo "[start]"
echo "[date] "`date`
echo "[cwd] "`pwd`
# my script
sleep 5
echo "[end]"

Then to qsub, it's simply:
$ qsub ./myscript.sh
However, I would stick to the flag style instead of hard-coding lines into your header. Using flags is a lot more flexible and keeps clutter out of your scripts.

Job States
When we qsub this script and query its status with qstat, we might see the following common job states:
- qw - queued
- Eqw - error
- hqw - holding (waiting on another job)
- r - running

Useful qsub Syntax

Example Syntax
Example job submission syntax:
$ qsub -N myjob -e ./logs -o ./logs -l mem=8G,time=2:: -S /bin/sh -cwd ./myscript.sh

Getting the Job ID
Grab the job id in bash:
$ jobid=$( qsub -S /bin/sh -cwd ./myscript.sh | cut -f3 -d" " )
or save it in a file:
$ qsub -N myjob -l mem=1G,time=1:: -S /bin/sh -cwd ./myscript.sh | cut -f3 -d" " > job_id.txt
This depends on the fact that qsub returns a sentence like: "Your job 3212900 ("myscript") has been submitted". We expect the third word to be the job id.
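The same third-word convention can be wrapped in a small helper when driving qsub from another language. The sketch below is a hypothetical helper (the function name is mine, not part of SGE), with a sanity check so a malformed line fails loudly instead of returning garbage; the sample string is the confirmation sentence qsub printed above:

```python
def parse_job_id(qsub_output):
    """Extract the numeric job id from qsub's confirmation sentence,
    e.g. 'Your job 3212900 ("myscript") has been submitted'."""
    words = qsub_output.split()
    # verify the expected sentence shape before trusting word 3
    if len(words) < 3 or not words[2].isdigit():
        raise ValueError("unexpected qsub output: " + qsub_output)
    return words[2]

print(parse_job_id('Your job 3212900 ("myscript") has been submitted'))  # -> 3212900
```

This mirrors what `cut -f3 -d" "` does in the bash one-liners, but refuses quietly wrong input.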
Submitting Binary Commands Rather Than Scripts
Submit a binary command, such as sleep, echo, or cat (rather than an uncompiled script):
$ qsub -b y -cwd echo joe
$ qsub -b y -N myjob -l mem=1G,time=1:: -S /bin/sh -cwd sleep 60

A real example: find the size of every directory in /my/dir:
$ for i in /my/dir/*; do name=$( basename $i ); echo $name; qsub -N space_${name} -e logs -o logs -V -cwd -b y du -sh $i; done

Importing Shell Variables into Your Jobs
To import shell environmental variables into your job, use the flag -V:
$ qsub -V …
E.g.:
$ qsub -V -N myjob -l mem=1G,time=1:: -S /bin/sh -cwd ./myscript.sh
In practice, it's a good idea to always use this flag. To pass specific shell variables to your job, you can use the -v flag. For instance, to pass the DISPLAY variable to your job:
$ qsub -v DISPLAY ...

Parallelizing over Multiple Cores
Run a job parallelizing over 4 cores:
$ qsub -pe smp 4 -R y ...

Piping into qsub
Pipe into qsub:
$ echo "./myscript.sh" | qsub -N myscript -e logs -o logs -l mem=1G,time=1:: -S /bin/sh -cwd

qsub-ing Scripts in Perl, Python, R, etc.
Submit a script that is not bash (in this case we'll use R):
$ qsub -V -N myscript -e logs -o logs -l mem=1G,time=1:: -S /nfs/apps/R/2.14.0/bin/Rscript -cwd ./test.r
(replace the path to Rscript, of course, with your own)

Array Jobs
For some reason (I don't know why!), if you submit an array job of 1000 jobs, it's supposedly gentler on the scheduler than if you submit 1000 regular jobs. So how do you submit an array job? Here's an example script called sample_array_job.sh:

#!/bin/bash
echo "example array job"
echo "iterator = "${SGE_TASK_ID}
case $SGE_TASK_ID in
1) var="hello";;
2) var="goodbye";;
3) var="world";;
esac
echo $var

What's going on here? This job just echoes some things, but note the special shell variable SGE_TASK_ID.
This variable iterates over the number of elements in your array job, which is submitted with the -t flag, as in:
qsub -t 1-3:1 -V -N job -e logs -o logs -l mem=1G,time=1:: -cwd ./sample_array_job.sh
The syntax:
-t 1-3:1
means the array will range from 1 to 3 in steps of 1. So, what's the result? The output logs are as follows:

==> logs/job.o342892.1 <==
example array job
iterator = 1
hello

==> logs/job.o342892.2 <==
example array job
iterator = 2
goodbye

==> logs/job.o342892.3 <==
example array job
iterator = 3
world

Note that the case statement is a good way to get your script to do different things on each iteration of the array. If you're using perl, for example, you'll find yourself referring to $ENV{'SGE_TASK_ID'}, and so on.

How is Job Priority Assigned?
This depends on the whims of the IT sysadmin gods. For C2B2, you can read about it here.

Common Problems
If you get an Eqw, there are some simple blunders you might have made. The most common is that, on our system, jobs are forbidden to write in the home directory (an exclusive quirk of our system set by the system administrators). Another reason you might get an Eqw is if you are trying to write your logs into a folder that doesn't exist. Finally, one other SGE-specific problem is that if you use a construction like:

# get the directory in which your script itself resides
d=$( dirname $( readlink -m $0 ) )

where you look for "sister scripts" in the directory where your script itself resides, you won't find them. The reason is that when you run a job, your script is actually copied to and run from a different directory altogether (something like /opt/gridengine/default/spool).

How Much Time & Memory Should I Assign my Job?
An often-asked question is: how much time and memory should I assign my job? Let's review how to query how much time and memory a program takes in ordinary unix. How much time did your program take?
Use time:
$ time ./myscript.sh
[start]
[date] Tue May 6 13:21:11 EDT 2014
[cwd] /path/sge_test
[end]

real 0m5.022s
user 0m0.004s
sys 0m0.015s

How much memory did your program take? Again, use time but with the -v flag. For example, to get information about zipping a file test.txt:
$ /usr/bin/time -v gzip test.txt
Command being timed: "gzip test.txt"
User time (seconds): 19.77
System time (seconds): 0.74
Percent of CPU this job got: 24%
Elapsed (wall clock) time (h:mm:ss or m:ss): 1:23.40
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2416
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 185
Voluntary context switches: 94
Involuntary context switches: 551
Swaps: 0
File system inputs: 743616
File system outputs: 140216
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0

In SGE land, you can see information about a job, including how much time and memory it consumed and its exit code, with:
$ qacct -j [jobid]
On our system, this command is only available on the head (or login) nodes. Knowing how much memory and time a job consumed will give you a good idea of how much to allot for similar future jobs. You can get information about a given job, including reasons for error, with:
$ qstat -j [jobid]

Dependencies in SGE and Jobs that Submit Jobs
The simplest dependency in SGE is making one job wait on another job. Suppose job A should wait for job B, whose job id is 2. Then to make job A wait on job B, use the -hold_jid flag:
$ qsub -N job_B -cwd ./job_B.sh
Your job 2 ("job_B") has been submitted
$ qsub -N job_A -cwd -hold_jid 2 ./job_A.sh
Now job A won't run until job B is finished. If you want job A to wait on multiple jobs, use a comma-delimited list of job ids after the hold flag.
The following figure shows how you can run Job-1, Job-2, ..., Job-n in parallel and make Job-final wait on all of them.

You can also hold a job directly (although I rarely have occasion to do this):
$ qhold [jobid]
and release it:
$ qrls [jobid]

Now let's imagine a multiple-step process involving a pipeline you want to parallelize (meaning: submit multiple jobs in parallel) at certain points, but not at others. Suppose Job-2 should wait on Job-1, which submits J1_1, J1_2, .., J1_n and J1_final. This is a common scenario: in the SGE framework, you may have occasion to want a script with jobs that submit jobs. But this leads to a new problem: how do you arrange dependencies such that jobs can depend on jobs submitted by jobs?

Up until now, it seems that we can only make a job depend on another job which has already been submitted to the queue, since this is the point when we get its job id. If we want to make job A depend not on job B but, rather, on a job which job B submits, we have a problem, because at the time we release job A and job B into the queue together, B's "child" job doesn't even exist yet. It's like an unborn child and, as such, we have no way to get its id, which is what we need to feed the -hold_jid flag to make our dependencies work out. What to do?

The answer comes from one of the best qsub flags of all time: -sync y. The sync flag acts as a brake and halts the script's gears until the jobs finish. In so doing, it makes an SGE script behave a lot like a regular unix script: commands are executed in order. To illustrate, let's first consider the example of a script with a hold but without a sync:

qsub 1 # into queue
qsub 2 # into queue
qsub 3 -hold_jid 1,2 # into queue, waiting on 1 and 2
command n
command n+1

Here, jobs 1 and 2 execute in parallel and job 3 waits for them, but command n will execute as soon as all these jobs are in the queue, irrespective of whether they're finished.
Now let's consider the script with both a hold and a sync:

qsub 1 # into queue
qsub 2 # into queue
qsub 3 -hold_jid 1,2 -sync y # into queue, waiting on 1 and 2
# EVERYTHING STOPS HERE UNTIL JOB 3 FINISHES
command n
command n+1

In this case, jobs 1 and 2 execute in parallel and job 3 waits for them, but the script is paused waiting for job 3 to finish, so command n won't execute until all the jobs above it have finished. Do you see the power of this technique? With it we can, say, heavily parallelize our script up until one point, then collect all the output and proceed linearly again, then parallelize again, and so on. Think about running a blast job in parallel, then concatenating all the results and doing something with them.

How to Toggle an SGE Option On & Off Without Re-structuring Your Script: A Python Implementation
If you're writing a script, you don't want its logic to be SGE-dependent. Ideally, you'd like to be able to toggle qsub on and off. After all, not everyone has access to the SGE suite. Let's take a detour into unix and observe that the standard bash command which qsub most resembles is sh. Sometimes, when you run a script verbosely, you want to echo commands before you execute them. A std:out log file of this type is invaluable if you want to retrace your steps later (as often happens in research). One way of doing this is to save a command in a variable, cmd, echo it, and then pipe it into sh. Your script might look like this:

cmd="ls -hl"; # save the command in a variable
echo $cmd; # echo the command
echo $cmd | sh # run the command

Now look at the following parallelism:
$ echo $cmd | sh
$ echo $cmd | qsub

Very cool! E.g.:
$ cmd="./myscript.sh"
$ echo $cmd
./myscript.sh
$ echo $cmd | sh
$ echo $cmd | qsub -N myscript -l mem=1G,time=1:: -S /bin/sh -cwd

You can see how this would be useful in a script with a toggle SGE switch.
You could pass the shell command to a function and use an if-statement to choose whether you want to pipe the command to sh or qsub. (One thing to keep in mind is that any quotation marks ( " ) within your command must be escaped with a slash.) Here's a rough sketch of an implementation of this idea in python. First we start with a function to escape various pesky special characters:

def escape_special_char(mystr):
    """fix string for shell commands by escaping quotes and dollar signs.
    The idea is, we want to be able to use the echo "cmd" | sh construction"""
    return mystr.replace('"','\\"').replace('$','\\$')

Now we need a function that just runs a simple system command (and echoes it if a verbose flag is turned on):

import subprocess

def run_cmd(cmd, bool_verbose, bool_getstdout):
    """Run system cmd"""
    cmd = "echo \"" + cmd + "\" | sh"
    if (bool_verbose):
        print(cmd)
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.wait()
    (stdout, stderr) = proc.communicate()
    # if error, print it
    if stderr:
        print("ERROR: " + stderr),
    # return stdout
    if (bool_getstdout):
        return stdout.rstrip()
    else:
        return "0" # note: this must return a str

If a boolean flag is high, it will return the cmd's std:out.
Now we'll use a function that qsubs a command and returns its job id if a flag is high:

import subprocess

def run_qsub(cmd, bool_verbose, bool_getstdout, qsubstr):
    """Run SGE qsub cmd"""
    cmd = "echo \"" + cmd + "\" | " + qsubstr
    if (bool_verbose):
        print(cmd)
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.wait()
    (stdout, stderr) = proc.communicate()
    # if error, print it
    if stderr:
        print("ERROR: " + stderr),
    # return job ID, assuming it's in the form:
    # Your job 127748 ("testawesome2") has been submitted
    if (bool_getstdout):
        return stdout.split()[2]
    else:
        return "0" # note: this must return a str

Now we'll write a function that toggles between them:

def whichcmd(cmd, args, wantreturn, wantqsub=0, jobname="myjob", holdstr="0", wantsync=0):
    """Run cmd as regular cmd or qsub cmd with SGE"""
    if ( args.sge and wantqsub ):
        qsubstr = "qsub "
        if ( holdstr != "0" ):
            qsubstr = qsubstr + "-hold_jid " + holdstr + " "
        if ( wantsync ):
            qsubstr = qsubstr + "-sync y "
        qsubstr = qsubstr + "-V -N " + jobname + \
            " -e " + args.sgelog + " -o " + args.sgelog + \
            " -l mem=" + args.sgemem + "G,time=" + args.sgetime + ":: -S /bin/sh -cwd "
        return run_qsub(cmd, args.verbose, wantreturn, qsubstr )
    else:
        return run_cmd(cmd, args.verbose, wantreturn )

In practice, we'll only use whichcmd and never run run_qsub or run_cmd directly. Something like this:

# run cmd and store SGE job id
myjobid = whichcmd(cmd, args, 1, args.sge, "myjob")
if ( args.sge ):
    print("Your job " + myjobid + " has been submitted")
    sgejobids = myjobid + "," + sgejobids
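Putting the dependency flags together, the fan-out/fan-in pattern from the dependencies section can be sketched as a command builder. This is only a sketch: the job ids are assumed to be already collected from qsub's output (the third word of its confirmation sentence, as above), the script names are placeholders, and nothing is actually submitted here:

```python
def fanout_fanin_cmds(worker_ids, final_script):
    """Build qsub command strings for N parallel workers plus a collector
    that holds on all of them and syncs, so the calling script blocks
    until the whole fan-in is done."""
    # one command per parallel worker (job names are placeholders)
    cmds = ["qsub -cwd ./job_%d.sh" % i for i in worker_ids]
    # comma-delimited hold list, as -hold_jid expects
    holds = ",".join(str(i) for i in worker_ids)
    # the collector: waits on every worker, -sync y pauses the caller
    cmds.append("qsub -cwd -hold_jid %s -sync y %s" % (holds, final_script))
    return cmds

for c in fanout_fanin_cmds([1, 2, 3], "./collect.sh"):
    print(c)
```

Running the builder for three workers yields three parallel submissions followed by one blocking collector, which is exactly the "parallelize, then proceed linearly" shape described above.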
http://www.oliverelliott.org/article/computing/wik_sge/
Tree not rerendering on drag and drop
Danielle Wheeler, Jul 26, 2007 1:22 PM

Recently I've created drag and drop listeners in my backing bean for a tree structure. As I drag and drop tree nodes on the JSP, I can see in MyEclipse that the nodes are being added to and removed from the appropriate places. The problem is that the 'reRender' attribute of the tree does not appear to be functioning. If I create an a4j:commandButton such as the 'Refresh Tree' one below, the tree will reRender when I click it. The tree will also rerender correctly if I manually refresh the page. I'd prefer that the tree rerender as soon as I finish a drop action, however. Does anyone know or have any ideas on how to do this?

Note: My treeNodes have to have dataTables in them so I can display multiple columns per node. I realize this creates another set of nested tables in XHTML, but I don't know a way around it.

Tree code from the JSP:

<a4j:form>
  <rich:tree
    <rich:treeNode
      <t:dataTable
        <t:columns
          <h:outputText
        </t:columns>
        <t:column>
          <a4j:commandLink
        </t:column>
      </t:dataTable>
    </rich:treeNode>
  </rich:tree>
  <a4j:commandButton
</a4j:form>

Relevant code from the backing bean:

public class CreateTree {
    ...
    private EntityFamilyTree myDropZone;
    private String myDropType;

    public void processDrop(DropEvent dropEvent) {
        myDropZone = (EntityFamilyTree)((TreeNodeImpl)((HtmlTree)((HtmlTreeNode)dropEvent.getSource())
            .getParent()).getTreeNode()).getData();
        myDropType = dropEvent.getDragType();
    }

    public String processDrag(DragEvent dragEvent) {
        if(myDropType.equals(dragEvent.getAcceptedTypes())) {
            EntityFamilyTree draggedNode = (EntityFamilyTree)((TreeNodeImpl)((HtmlTree)((HtmlTreeNode)dragEvent.getSource())
                .getParent()).getTreeNode()).getData();
            EntityFamilyTree draggedParent = (EntityFamilyTree)draggedNode.getParent();
            String[] draggedUuid = dragEvent.getDragValue().toString().split(":");
            // Remove dragged node from its previous parent node
            draggedParent.removeChild(draggedUuid[draggedUuid.length-1]);
            // Add dragged node to its new parent node
            myDropZone.addChild(draggedUuid[draggedUuid.length-1], draggedNode);
            return "success";
        }
        return "failure";
    }

    public String reset() {
        return null;
    }
}

EntityFamilyTree is a simple class that extends TreeNodeImpl and holds extra properties such as a list for the columns. Any help would be greatly appreciated. Thank you in advance,
Danielle Wheeler

1. Re: Tree not rerendering on drag and drop
Danielle Wheeler, Jul 26, 2007 3:30 PM (in response to Danielle Wheeler)
I found a workaround to my problem. If you put the reRender and ajaxSubmit="true" in the a4j:form instead of in the tree, it works fine.
<a4j:form ... </a4j:form>

2. Re: Tree not rerendering on drag and drop
William Mitchell, Sep 21, 2007 11:19 PM (in response to Danielle Wheeler)
I was having a similar problem and found that in addition to using a4j:form instead of h:form, rich:tree switchType has to be either "ajax" or "client". With switchType="server", the tree abruptly closes up when dragging.

3.
Re: Tree not rerendering on drag and drop
Maksim Kaszynski, Sep 22, 2007 1:31 PM (in response to Danielle Wheeler)
"whm" wrote: I was having a similar problem and found that in addition to using a4j:form instead of h:form, rich:tree switchType has to be either "ajax" or "client". With switchType="server", the tree abruptly closes up when dragging.
What do you mean by abruptly closing? It kills the browser? Navigates to an error page?

4. Re: Tree not rerendering on drag and drop
William Mitchell, Sep 23, 2007 3:42 PM (in response to Danielle Wheeler)
It's the tree itself that closes, as if queueCollapseAll() had been called.

5. Re: Tree not rerendering on drag and drop
stefan r, Feb 29, 2008 7:29 AM (in response to Danielle Wheeler)
Hello dwheeler et al., do you have a working example of a tree that rerenders correctly? I simply do not get it working... Thanks. BR
https://developer.jboss.org/thread/6991
#include <sys/pccard.h>

int32_t csx_ConvertSize(convert_size_t *cs);

INTERFACE LEVEL
Solaris DDI Specific (Solaris DDI)

DESCRIPTION
This is a bit-mapped field that identifies the type of size conversion to be performed. The field is defined as follows:

CONVERT_BYTES_TO_DEVSIZE
Converts bytes to devsize format.

CONVERT_DEVSIZE_TO_BYTES
Converts devsize format to bytes.

If CONVERT_BYTES_TO_DEVSIZE is set, the value in the bytes field is converted to a devsize format and returned in the devsize field. If CONVERT_DEVSIZE_TO_BYTES is set, the value in the devsize field is converted to a bytes value and returned in the bytes field.

RETURN VALUES
Successful operation.
Invalid bytes or devsize.
No PCMCIA hardware installed.

CONTEXT
This function may be called from user or kernel context.

SEE ALSO
csx_ModifyWindow(9F), csx_RequestWindow(9F)

PCCard 95 Standard, PCMCIA/JEIDA
http://docs.oracle.com/cd/E36784_01/html/E36886/csx-convertsize-9f.html
Optimize sleep time in script with loop function
By DeeJay7, in AutoIt General Help and Support

Similar Content

- FMS
Hello, I'm having a problem with winwait on Firefox screens with the same title and text. On my quest on this forum and the internet I've found some workarounds and solutions. Unfortunately, these weren't working for me. I've tried searching through winlist or finding some unique text, but wasn't finding any solution. Does anyone know how to get the right handle? I just want to move the browser to the right place in the end with WinMove. Thanks in advance.

#include <Array.au3>
Global $A_URL[4][2] = _
    [["url1" , "same_title"] , _
    ["url2", "same_title"] , _
    ["url3" , "other_title" ] , _
    ["url4" , "other_title" ]]
;~ _ArrayDisplay($A_URL)
;-----kill all firefox.exe
;~ Run("taskkill /IM firefox.exe /F", "", @SW_HIDE)
;Sleep(5000)
For $i = 0 To 1
    ConsoleWrite("running 1 : " & $i & @CRLF)
    Local $ID = ShellExecute("firefox.exe", "-new-window " & $A_URL[$i][0] ,"C:\Program Files\Mozilla Firefox" )
    ConsoleWrite("$ID = " & $ID & @CRLF)
Next
sleep(200)
;~ Local $screen1HWND = WinWait($A_URL[0][1],"")
;~ If Not WinActive($screen1HWND) Then WinActivate($screen1HWND)
;~ ConsoleWrite("$screen1HWND = " & $screen1HWND & @CRLF)
;~ Local $screen2HWND = WinWait($A_URL[1][1],"")
;~ If Not WinActive($screen2HWND) Then WinActivate($screen2HWND)
;~ ConsoleWrite("$screen2HWND = " & $screen2HWND & @CRLF)
$sWinTitle = $A_URL[0][1]
$avWinList = WinList($sWinTitle)
For $n = 1 to $avWinList[0][0]
    ConsoleWrite("Window " & $n & ": Text: " & WinGetText($avWinList[$n][1]) & @LF)
Next
For $i = 0 To 1
    ;~ WinWait("title1", "", 10)
    ;~ WinActive("title1", "")
    ;~ WinMove ("title1", "", $i , $i )
Next
https://www.autoitscript.com/forum/topic/166084-optimize-sleep-time-in-script-with-loop-function/
thewavelength replied to thewavelength's topic in AngelCode

Hello, thanks! I want this to allow people to use #include <path>. TC_VALUE seems only to allow " and ', so I thought of modifying it. The tokenizer does not seem to pass back TC_VALUE when there are no quotes. Of course I can work around this, but I wanted to know first whether there is another possibility. Thanks, and good work btw ;)

By the way, I've got a question about the coding style. Is there a reason for using a non-object-oriented style in some cases? Example: engine->DiscardModule(const char*). I intuitively tried to use module->Discard() and then engine->DiscardModule(module). Is the reason the easier binding to other programming languages, like C? Or are there other reasons? Thanks!

thewavelength posted a topic in AngelCode

thewavelength replied to thewavelength's topic in AngelCode

Wow. So obvious and I was blind, but now I see. Thanks!

thewavelength posted a topic in AngelCode

Hello, I'm using Visual Studio 2012 and MSVC11, and I get linker errors when using the RegisterStdString function from the string addon when I enable namespaces. I don't know if this is also the case for older compiler versions. This fails with two linker errors (RegisterStdString and RegisterStdStringUtils):

[source lang="cpp"]
#define AS_USE_NAMESPACE
#include <angelscript.h>
#include <add_on/scriptstdstring/scriptstdstring.h>

using namespace AngelScript;

void test()
{
    asIScriptEngine *engine = asCreateScriptEngine(ANGELSCRIPT_VERSION);
    RegisterStdString(engine);
    RegisterStdStringUtils(engine);
}
[/source]

This compiles like a charm:

[source lang="cpp"]
#include <angelscript.h>
#include <add_on/scriptstdstring/scriptstdstring.h>

void test()
{
    asIScriptEngine *engine = asCreateScriptEngine(ANGELSCRIPT_VERSION);
    RegisterStdString(engine);
    RegisterStdStringUtils(engine);
}
[/source]

Is this a bug or am I doing something wrong?
Thanks in advance! Edit: The forum removes include paths for some reason. These are the files included: <angelscript.h> <add_on/scriptstdstring/scriptstdstring.h> Edit 2: Of course I'm compiling both scriptstdstring.cpp and scriptstdstring_utils.cpp. - Hey, ok. Thank you Btw, I found a possible optimization. Because I don't know where to put this, I may write it here. In scriptstdstring.cpp, lines 91 and 92 (speaking of 2.25.0) could be replaced with: [source lang="cpp"]it = pool->insert(map<const char *, string>::value_type(s, string(s, length))).first;[/source] Bye. - Hi, Ok, thanks for your fast reply. Since I'm currently rebuilding the whole module before I run it, I shouldn't have problems with race conditions. Will these problems still exist if I use different engines? I don't think so. About my plans: I try to make AngelScript usable as a web scripting language. It works via fastcgi or scgi in a single application. The application implements a process spawner to allow real-per-user script execution. This improves security and performance. Then every spawned process runs under a real user, holding some threads. Currently there are as many threads as cores provided by the CPU. It also contains a load balancer. If a certain amount of requests is longer than 0.2s in the processing queue, a new thread will be started. The thread pool decides by itself when to reduce the number of threads. Currently, a context is created once per thread startup and is reused for every request in the specific thread. To allow much better experience with AS as a scripting language, I'm about to rewrite the whole preprocessor. It contains a bit more flexibility and is more dynamic. I don't know if you know PHP. If you know, you may know its mistakes and problems. I think that a lot of mistakes could be solved using AS, including caching and security improvements. - Oh, okay. 
So the contexts will not block each other when they are running in different threads even though they are using the same engine? Be sure that this is my last question ;) Thanks!

thewavelength replied to thewavelength's topic in AngelCode:

Hello, thanks! My situation is that in one process there are four or more threads (the size is fixed). Now I tell one (randomly selected) thread that it shall execute a script. Each script is only used once at a time; when the script has finished, the context shall be deleted. Everything should be cleared as if there were no running scripts before. I don't need modules. I don't need sharing between different scripts. I only need the #include preprocessor command (which works well). The question for me is now whether it's smarter to use one engine per thread or to use only one engine. I think one engine per thread is a bit faster because I don't need to wait on mutex releases when I create a context. Since I'm not very experienced with AngelScript and threading in general, I can't give a final answer to this question. Thanks in advance :-)

Edit: I've found another issue. What if I decide to start a fifth thread because I've got a lot of requests? How do I deal with asPrepareMultithread then?

Edit 2: There's a mistake on the documentation page -> string str1 = "This is a string with \"escape sequences".";

thewavelength posted a topic in AngelCode:

Hello, I'm about to create a threaded environment of totally separated engines. I found a page about this in the documentation. It says something about threading, but I don't think it matches my situation. I want to have about four totally separated scripting engines which never interact with one another. Another question is: do I need four engines? Couldn't this be solved using four contexts? Also, the scripts are allowed to access only thread-local variables and functions. So am I right not to implement a thread-safety mechanism? Are there other points I have to look at? Thanks!
thewavelength replied to thewavelength's topic in AngelCode:

Wow, thanks. Now I've understood. But it isn't possible to pass e.g. integer variables between modules, is it? Btw: I'd suggest adding exactly the text you've posted here to the documentation. It's a bit clearer in my opinion.

thewavelength posted a topic in AngelCode:

Hello everybody, I'm fairly new to AS and I've read through the whole documentation twice to get a feeling for the design and the patterns. But there are two very important questions that were not answered:

1.) Is there module functionality? E.g. like in scripting languages such as Python? I think I've read somewhere in the documentation that it exists, but I can't find any examples. I've also read somewhere that it is not possible to share variables between modules even when they are in the same context / execution environment. But this is a feature I need. What do you do, for example, if you have such a large script that it is impossible to structure it in one file? This is sometimes the case for my needs.

2.) What exactly is an "import"? I've read somewhere that you can "import" functions. Is this the same as using modules?

I'm thankful for answers!

thewavelength replied to thewavelength's topic in AngelCode:

Hi, thanks for the answers!

@_om_: Thanks!

@WitchLord: Wow, I didn't find the CScriptDictionary add-on in the manual. Did I overlook it, or isn't there a link?

So, you write that array<array<array<int>>> x is the same as int[][][] y. In C, this would mean that y is of a static size. But in AngelScript, it means the same as x? So the size can grow dynamically?

On modules: so I understood correctly that I can expose some modules in my host application, for example "players", "entities" and so on. Now, when the user wants a script to be loaded, I create a new context in which the script runs, load my modules, and the loaded script can use these? That would be great!
Greetings ;)

thewavelength posted a topic in AngelCode:

Hello, completely new to AngelCode, I've read up on it and found it very interesting. It is very well documented, I like that! At this point, a big THANK YOU to WitchLord. I hope you understand that I ask these questions before I start to implement AngelCode, otherwise it would be a lot of work without purpose.

1) Is it possible to create a map (like std::map)? I do hope so, but my hope shrank when I searched the forums and read that multiple template types are not possible. Even if not, would there be a possibility to implement a stringmap or something equivalent? My hopes aren't great here either, since the opIndex operator only takes int as an argument, so myvar["x"] seems to be unavailable.

2) Is there a way to store arrays in arrays, to create a matrix? Firstly, have I got it right that array in AngelCode is nearly the same as std::vector in C++? Example: array<array<array<int>>>

3) Is there any documentation about modules? I don't think I fully understand what they are. Are they equivalent to modules in Python?

Thanks!
https://www.gamedev.net/profile/186329-thewavelength/?tab=issues
Read JSON into Dask DataFrames • May 9, 2022

This blog post explains how to read JSON into Dask DataFrames. It also explains the limitations of the JSON file format for big data analyses and alternatives that provide better performance.

Dask is a great technology for working with JSON data because it can read multiple files in parallel. Other technologies, like pandas, can only read one JSON file at a time, which is comparatively slow.

Dask read JSON: simple examples

Let's look at a few examples of how to read JSON files into Dask DataFrames. There are multiple different ways to format JSON data, so it's good to see how Dask can handle JSON data that's formatted differently.

Suppose you have a people1.json file with the following data.

```json
{
  "data": [{"name":"John","age": 30.0,"car":"honda"},{"name":"Sally","age": 54.0,"car":"kia"}]
}
```

Here's how to read this JSON file into a Dask DataFrame.

```python
import dask.dataframe as dd

ddf = dd.read_json("people1.json", orient="split")
```

Here are the contents of the Dask DataFrame:

```python
ddf.compute()
```

The orient="split" argument tells Dask to split the values of each object into different columns.

JSON files are usually split into multiple lines so they're easier to read. Let's take a look at the same data with line breaks. Here are the contents of the people2.json file.

```json
{
  "data": [{
    "name": "John",
    "age": 30.0,
    "car": "honda"
  }, {
    "name": "Sally",
    "age": 54.0,
    "car": "kia"
  }]
}
```

Here's how to read people2.json into a Dask DataFrame.

```python
ddf = dd.read_json("people2.json", orient="split")
```

We can use the same syntax as earlier when reading a JSON file that uses line breaks.

JSON files are sometimes formatted with one object per row. Take a look at the people3.json file.

```json
{"name":"John","age": 30.0,"car":"honda"}
{"name":"Sally","age": 54.0,"car":"kia"}
```

Here's how to read people3.json into a Dask DataFrame.

```python
ddf = dd.read_json("people3.json", lines=True)
```

When the JSON data contains one object per row, you must set the lines=True argument.
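As a stdlib-only aside (plain `json` module, no Dask involved), the key difference between the two layouts is whether the whole file is a single JSON document. A line-delimited file is not valid JSON as a whole, so each line has to be parsed on its own — which is exactly what `lines=True` asks Dask to do:

```python
import json

# One JSON document wrapping all the records (like people1.json):
wrapped = '{"data": [{"name": "John"}, {"name": "Sally"}]}'
records = json.loads(wrapped)["data"]

# One JSON object per line (like people3.json) -- the file as a whole
# is NOT valid JSON, so each line must be parsed independently:
line_delimited = '{"name": "John"}\n{"name": "Sally"}'
per_line = [json.loads(line) for line in line_delimited.splitlines()]

assert records == per_line == [{"name": "John"}, {"name": "Sally"}]
```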
Now let's look at how to read a JSON file with a more complicated structure into a Dask DataFrame.

Dask read JSON: list

Let's look at a JSON file that contains a list and see how to read it into a Dask DataFrame. Here are the contents of the students.json file.

```json
{
  "data": [{
    "name": "Li",
    "age": 15,
    "scores": [34, 99, 86]
  }, {
    "name": "Qu",
    "age": 18,
    "scores": [99, 100, 87]
  }]
}
```

Here's how to read students.json into a Dask DataFrame.

```python
ddf = dd.read_json("students.json", orient="split")
ddf.compute()
```

You don't need to do anything different when reading JSON data with lists into Dask DataFrames. Dask is intelligent enough to read the scores list into a single column.

Here's how to add an average_score column to the DataFrame.

```python
ddf["average_score"] = ddf["scores"].apply(
    lambda x: sum(x) / len(x), meta=("average_score", "float64")
)
ddf.compute()
```

Dask makes it easy to work with JSON files that contain lists. Let's look at nested JSON data, which is more complex.

Dask read JSON: nested data

Let's look at a more complex JSON file with nested data. Here are the contents of the students2.json file.

```json
{
  "data": [{
    "name": "george",
    "age": 16,
    "exam": {
      "subject": "geometry",
      "score": 56
    }
  }, {
    "name": "nora",
    "age": 7,
    "exam": {
      "subject": "geometry",
      "score": 87
    }
  }]
}
```

Here's how to read this JSON data into a Dask DataFrame and split out exam_subject and exam_score into separate columns.

```python
ddf = dd.read_json("students2.json", orient="split")
ddf["exam_subject"] = ddf.exam.apply(lambda x: x["subject"], meta=("exam_subject", "object"))
ddf["exam_score"] = ddf.exam.apply(lambda x: x["score"], meta=("exam_score", "int64"))
ddf.compute()
```

That code is relatively straightforward, but a bit tedious. Let's look at a more complex example that's handled programmatically.

Dask read JSON: flatten JSON

Let's look at a more complicated JSON file with a list of exams and see how to load them neatly into a Dask DataFrame. Here's the JSON data.
{ "data": [{ "name": "george", "age": 16, "exams": [{ "subject": "geometry", "score": 56 }, { "subject": "poetry", "score": 88 } ] }, { "name": "nora", "age": 7, "exams": [{ "subject": "geometry", "score": 87 }, { "subject": "poetry", "score": 94 } ] }] } Let’s read the JSON data into a Dask DataFrame. ddf = dd.read_json("students3.json", orient="split") Now create a pandas function that’ll explode the exams into different rows and normalize the subject and score data into separate columns. def pandas_fn(df): exploded = df.explode("exams") return pd.concat( [ exploded[["name", "age"]].reset_index(drop=True), pd.json_normalize(exploded["exams"]), ], axis=1, ) You can use map_partitions to apply the pandas function to each partition in the DataFrame. ddf.map_partitions( pandas_fn, meta=( ("name", "object"), ("age", "int64"), ("subject", "object"), ("score", "int64"), ), ).compute() Dask read JSON: multiple files Dask is designed to read multiple JSON files into a DataFrame in parallel. Let’s create a directory with two JSON files and demonstrate how they can both be read into a Dask DataFrame. Suppose you have the following json-data/pets1.json file: { "data": [{ "name": "Triss", "species": "cat", "color": "orange" }, { "name": "Dale", "species": "dog", "color": "brown" }] } And this json-data/pets2.json file: { "data": [{ "name": "Gregg", "species": "bird", "color": "green" }, { "name": "Weston", "species": "wolf", "color": "gray" }] } Read all of this data into a Dask DataFrame and print the results. ddf = dd.read_json("./json-data/pets*.json", orient="split") ddf.compute() name species color 0 Triss cat orange 1 Dale dog brown 0 Gregg bird green 1 Weston wolf gray You can use the wildcard operator (*) to read multiple JSON files into a Dask DataFrame. Dask reads all the JSON files in parallel so the computation executes quickly. Dask will also run subsequent computations quickly because the data is partitioned and workloads are run on the partitions in parallel. 
Dask read_json: Data stored in S3

Let's provision a Dask cluster with Coiled and run a query on a 662 million row dataset that's stored in S3. Start by provisioning the Dask cluster.

```python
import coiled
import dask.dataframe as dd
import dask

cluster = coiled.Cluster(name="powers-demo", n_workers=10)
client = dask.distributed.Client(cluster)
```

Read in some JSON data to a Dask DataFrame.

```python
ddf = dd.read_json(
    "s3://coiled-datasets/timeseries/20-years/json/*.part",
    storage_options={"anon": True, "use_ssl": True},
    lines=True,
)
```

You can run ddf.head() to see the first few rows of data. Now compute the number of unique values in the name column.

```python
ddf["name"].nunique().compute()
```

As you can see, Dask makes it really easy to read lots of JSON data and run analytical queries.

Conclusion

Dask can read and process large datasets. This blog post has a great real-world example that shows how to read 75GB of JSON data and convert it to Parquet.

JSON isn't a great file format for big data analyses, so if you're repeatedly querying the data, it's best to use a file format that's optimized for big data workflows, like Parquet. That said, there are many large JSON datasets, and sometimes you need to work with what you're provided. Dask's parallel processing capabilities make it more than capable of querying large JSON datasets.
https://coiled.io/blog/dask-read-json-dataframe/
New magic and clock behaviour
=============================

Clocks
------

The rule for clocks in Brian 1 was that you would either specify a clock explicitly, or it would be guessed based on the following rule: if there is no clock defined in the execution frame of the object being defined, use the default clock; if there is a single clock defined in that execution frame, use that clock; if there is more than one clock defined, raise an error. This rule is clearly confusing because, for a start, it relies on the notion of an execution frame, which is a fairly hidden part of Python, even if it is similar to the (relatively clearer) notion of the calling function's scope.

The proposed new rule is simply: if the user defines a clock, use it; otherwise use the default clock. This is not quite as flexible as the old rule, but has the enormous virtue that it makes subtle bugs much more difficult to introduce.

Incidentally, you could also change the dt of a clock after it had been defined, which would invalidate any state updaters that were based on a fixed dt. This is no longer a problem in Brian 2, since state updaters are re-built at every run, so they work fine with a changed dt. It is important to note that what matters for the simulation is the dt of the respective clock (in many cases, defaultclock.dt) at the time of the run() call, not its value when, for example, the NeuronGroup was created.

Magic
-----

The old rule for MagicNetwork was to gather all instances of each of the various classes defined in the execution frame that called the run() method (similar to clocks). Like in the case of clocks, this rule was very complicated to explain to users and led to some subtle bugs. The most pervasive bug was that if an object was not deleted, it was still attached to the execution frame and would be gathered by MagicNetwork. This was combined with the fact that there are, unfortunately, quite a lot of circular references in Brian that often cause objects not to be deleted.
So if the user did something like this::

    def dosim(params):
        ...
        run()
        return something

    results = []
    for param in params:
        x = dosim(param)
        results.append(x)

Then they would find that the simulation got slower and slower each time, because the execution frame of the dosim() function is reused for each call, and so the objects created in the previous run were still there. To fix this problem, users had to do::

    def dosim(params):
        clear(True, True)
        ...
        run()
        return something

While this was relatively simple to do, you wouldn't know to do it unless you were told, so it caused many avoidable bugs.

Another tricky behaviour was that the user might want to do something like this::

    def make_neuron_group(params):
        G = NeuronGroup(...)
        return G

    G1 = make_neuron_group(params1)
    G2 = make_neuron_group(params2)
    ...
    run()

Now G1 and G2 wouldn't be picked up by run() because they were created in the execution frame of make_neuron_group, not the one that run() was called from. To fix this, users had to do something like this::

    @magic_return
    def make_neuron_group(params):
        ...

or::

    def make_neuron_group(params):
        G = NeuronGroup(...)
        magic_register(G, level=1)
        return G

Again, reasonably simple, but you can't know about these functions unless you're told.
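The lingering-objects problem can be mimicked with a toy registry in plain Python (illustrative only — the real MagicNetwork inspected execution frames rather than a global list, and ``Group``, ``dosim`` and ``clear_first`` below are invented stand-ins):

```python
registry = []  # stand-in for the objects MagicNetwork would gather

class Group:
    """Toy stand-in for a NeuronGroup: registers itself on creation."""
    def __init__(self):
        registry.append(self)

def dosim(clear_first=False):
    if clear_first:
        registry.clear()   # what Brian 1 users had to do with clear(True, True)
    Group()
    return len(registry)   # how many objects the "run" would simulate

# Without clearing, each call simulates one more group than the last,
# so the simulation gets slower and slower:
assert [dosim(), dosim(), dosim()] == [1, 2, 3]

# Clearing first restores the expected behaviour:
assert dosim(clear_first=True) == 1
```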
http://brian2.readthedocs.io/en/2.0a8/developer/new_magic_and_clocks.html
I have got my Arduino to work with the GSM900 module. I wrote code that prints the message sent from a phone to my serial monitor. The problem I'm facing is that the message received in the serial monitor comes with a string of extra data, like the time stamp, the mode and also the sender's number. I just need only the number that I send from my phone to the module, so that I can save it as a String value. I don't want any other data, because if I save the data as a String it saves everything. I just want the number that I send. I would be grateful if anyone could help me out. Any type of hints or suggestions will do.

> i have got my arduino to work with the gsm900 module.

If you get the data in a cString or a String you can parse it to extract the relevant content.

#include <EEPROM.h>
#include <SoftwareSerial.h>

String Mobile = "";
SoftwareSerial gsmSerial(6, 7);

void writeStringToEEPROM(int addrOffset, const String &strToWrite) {
  byte len = strToWrite.length();
  EEPROM.write(addrOffset, len);
  for (int i = 0; i < len; i++) {
    EEPROM.write(addrOffset + 1 + i, strToWrite[i]);
  }
}

String readStringFromEEPROM(int addrOffset) {
  int newStrLen = EEPROM.read(addrOffset);
  char data[newStrLen + 1];
  for (int i = 0; i < newStrLen; i++) {
    data[i] = EEPROM.read(addrOffset + 1 + i);
  }
  data[newStrLen] = '\0';
  return String(data);
}

void setup() {
  Serial.begin(9600);
  gsmSerial.begin(9600);
  gsmSerial.println("AT+CNMI=2,2,0,0,0");
}

void loop() {
  while (gsmSerial.available()) {
    gsmSerial.read();
  }
  get_gsm();
}

void get_gsm() {
  gsmSerial.listen();
  while (gsmSerial.available() > 0) {
    Serial.println("active");
    if (gsmSerial.find("Change")) delay(30000);
    gsmSerial.read();
    delay(1000);
    while (gsmSerial.read() == 0) {}
    Mobile = gsmSerial.readString();
    writeStringToEEPROM(0, Mobile);
  }
}

This is my code. This is the string I get (it's all numeric):
+CMT: "+senders phone number","","date,time"
received messg

I want the received message to be taken out and saved as a String.

Noticed that? It does not happen if you use code tags… So please edit your post, select the code part and press the </> icon in the tool bar to mark it as code. It's unreadable as it stands. (Also make sure you indent the code in the IDE before copying; that's done by pressing ctrl-T on a PC or cmd-T on a Mac.)

> i want the reciverd messg to be take out and be saved as a string.

Seems to me it's everything after the first new line…

Thanks a lot sir for being so helpful, and I'm sorry about the code part. And yes, everything after the first new line.

So if you keep the answer in a string, search for the position of '\n' using indexOf() and then extract the substring (using substring()) from the next character till the end.

Thanks a lot sir for the help, I got it to work. Now I can differentiate the received message and take out the required data from it. I would like to ask you something: is it possible to input a string in an argument? E.g.:

mySerial.println("AT+CMGS=\"+ZZxxxxxxxxxx\"");

I used the saved string to write to EEPROM, and I want to input that data from the EEPROM in the place of (+ZZxxxxxxxxxx). And again, thanks a lot for the help and time.

Sure, the easiest way is to print in multiple statements:

mySerial.print("AT+CMGS=");
mySerial.println(your_string_here);

void loop() {
  gsmSerial.read();
  if (gsmSerial.find("hello")) {
    Serial.println("hello");
  }
  if (gsmSerial.find("light")) {
    Serial.println("light");
  }
}

Here I have another problem. In this void loop I'm trying to run two if statements in which the input is from the GSM module. The problem I'm facing here is that the code compiles but they don't always run. Mostly if one runs, the other doesn't; it's random. I think I'm not running gsmSerial.read in a loop, or I'm stuck in an if statement when it runs.
The challenge is that you listen for "hello" until timeout, and if you did find it you print it, but you don't listen for "light" whilst you are expecting "hello"… What you need to write is a custom function that will wait for several keywords. One way to do so, if there is no end marker, is to listen and accumulate the data into a rolling buffer and keep checking it for the presence of the keywords. The buffer needs to be at least as long as the longest keyword.

void loop() {
  while (gsmSerial.available()) {
    gsmSerial.read();
    hello();
    light();
  }
}

Can I do this? I have tried it but I think I'm not doing it the right way. I added this in my void loop and put both of the if statements into their own void functions.

I'm not sure what the functions do, but the read() would just eat one byte and not store it anywhere, so the answer is likely no… I think you are missing some understanding about how the serial line works; I would suggest studying Serial Input Basics to handle this.
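The rolling-buffer idea can be prototyped off-device before porting it to the sketch. Here is a plain-Python illustration (the string `stream` stands in for bytes arriving on the serial port, and the keyword set is just the two from this thread):

```python
KEYWORDS = ("hello", "light")
BUF_LEN = max(len(k) for k in KEYWORDS)  # buffer must hold the longest keyword

def scan(stream):
    """Feed characters one at a time into a rolling buffer and report keyword hits."""
    buf = ""
    hits = []
    for ch in stream:                 # one incoming byte at a time
        buf = (buf + ch)[-BUF_LEN:]   # keep only the last BUF_LEN characters
        for kw in KEYWORDS:
            if buf.endswith(kw):
                hits.append(kw)
    return hits

# both keywords are detected, in arrival order, with noise in between
assert scan("xxhelloyylightzzhello") == ["hello", "light", "hello"]
```

On the Arduino side the same loop shape works with a small fixed `char` buffer that is shifted as each byte arrives.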
https://forum.arduino.cc/t/need-help-with-gsm-module-with-arduino/857983
using System;
using System.Collections.Generic;

public class Program
{
    public static void Main()
    {
        uint input;
        bool b = true;
        while (b)
        {
            b = uint.TryParse(Console.ReadLine(), out input);
            if (b)
            {
                Console.WriteLine(FindBestDeal(input));
            }
        }
    }

    static Dictionary<uint, uint> cache = new Dictionary<uint, uint>();

    public static uint FindBestDeal(uint n)
    {
        if (cache.ContainsKey(n))
        {
            return cache[n];
        }
        if (n/2 + n/3 + n/4 > n)
        {
            uint sum = FindBestDeal(n/2) + FindBestDeal(n/3) + FindBestDeal(n/4);
            cache[n] = sum;
            return sum;
        }
        else
        {
            return n;
        }
    }
}

This is my final solution to the Bytelandian Gold Coin problem on CodeChef. This was my first shot at a problem of a medium level of difficulty on this site. And honestly, I thought it was surprisingly simple at first, but that was only because I didn't give the problem proper consideration. I busted out some code, submitted it feeling confident, and then found out my solution was incorrect. I went through a number of iterations and finally came up with this. There's a description of the problem on CodeChef. I won't go through every iteration I went through, but here are a couple of key concepts required for a problem like this.

Recursion

Recursion is a concept in which an algorithm calls itself. In this problem, we need to take a coin, divide it by two, three, and four, and add up the quotients. If the sum of the quotients is greater than the coin, we have to return the highest possible sum. And in order to do that, we have to divide each of those quotients by two, three, and four in turn, and add up the results. For example, 24 becomes 12 + 8 + 6, which is a total of 26. 26, however, is not the right answer, because one of our new quotients is 12. And 12 becomes 6 + 4 + 3, which is 13. So the correct answer for 24 is 27. Basically, you just have to apply the same logic to all of the cascading quotients as you do to the initial input. Have a look at the recursive calls to FindBestDeal inside FindBestDeal itself to see where the recursion happens.
Memoization

It's weawy hawd for me to even think about this technique without giggling. It weawy takes me back to my pwe-speech-class days when I had twouble with my 'r' sounds. Anyway, memoization is a technique used to optimize repeated computations. It works by caching results and then checking that cache to determine whether certain data has already been computed. It is a good practice for recursive algorithms, as it saves tons of CPU cycles. And in this problem on CodeChef, you need to implement memoization in order to stay under the time limit. You can see the memoization parts of this program in the cache dictionary, the ContainsKey check at the top of FindBestDeal, and the cache[n] = sum assignment.

The uint Data Type

There really isn't much to this, but because the problem potentially requires integers larger than a regular int can store, you need something with a larger boundary, like a uint. Though if you test this program with 1000000000 you still get very close to the maximum integer a uint can store.
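For readers who prefer another language, the same recursion-plus-memoization idea can be sketched in Python, with `functools.lru_cache` playing the role of the cache dictionary (structured slightly differently from the C# above — it always compares `n` against the recursive sum instead of pre-checking the quotients):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best(n):
    """Highest number of dollars obtainable from a Bytelandian coin of value n."""
    if n == 0:
        return 0
    # keep the coin as-is unless exchanging it for n//2, n//3, n//4 pays off
    return max(n, best(n // 2) + best(n // 3) + best(n // 4))

assert best(12) == 13   # 6 + 4 + 3, as in the example above
assert best(24) == 27   # 12 becomes 13 after a second exchange
assert best(2) == 2     # small coins are worth more as-is
```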
https://ccgivens.wordpress.com/tag/memoization/
I often come across this question from various customers using the Windows Azure Platform: "What is the IP range for Windows Azure? SQL Azure? AppFabric? etc." The question arises because organizations configure their firewall/proxy to block or allow inbound/outbound traffic depending on the rules they configure.

If you are able to allow outbound traffic to 0.0.0.0-255.255.255.255, that solves most of the connectivity problems which might occur while communicating with the Windows Azure Platform. If for any reason you are unable to configure this IP range, check with your firewall/proxy administrators to see if they can allow outbound traffic to *.cloudapp.net addresses. If you end up needing specific IP ranges, hopefully the information below will be helpful. The Microsoft download link has the IP ranges for Windows Azure datacenters. Please note that the content at that link is revised whenever there are changes to the IP ranges.

Note: Various ports used in the Windows Azure Platform

Port(s)     Description
80, 443     Default HTTP/HTTPS ports used for various web scenarios
9350-9353   Ports used by Windows Azure AppFabric Service Bus bindings (refer to the linked article for more details)
1433        SQL Azure port
3389        Port used for RDP access to VMs

Identifying Connectivity Issues

How do you verify whether your firewall is blocking outbound traffic to Windows Azure Platform services? Detecting network-related issues is tricky and involves networking expertise. However, you can often easily tell whether the problem is with your specific network by using the commands below.

The first command I generally use to detect networking issues is ping. Below is an example of the ping command.

C:\Users\hari>ping 1d6d6f26c0184f6c8ef29e1cf40a87e7.cloudapp.net

Pinging 1d6d6f26c0184f6c8ef29e1cf40a87e7.cloudapp.net [111.221.109.188] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Ping statistics for 111.221.109.188:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

In this example, 1d6d6f26c0184f6c8ef29e1cf40a87e7.cloudapp.net is the staging URL of my application. Replace this with your application URL while testing your application. Many servers block the ping service by default, and hence you may not receive any response to the ping command. It is the same with cloud VMs: ping is disabled by default unless you've enabled it. What you should be interested in is whether the service address resolves to an IP address or not. In the above example, my application URL resolves to 111.221.109.188. This indicates that there are no DNS-related issues with my service. For more details on ping, please refer to the linked article.

The next command I would use is telnet. For example:

C:\Users\hari>telnet 1d6d6f26c0184f6c8ef29e1cf40a87e7.cloudapp.net 80

In this example, 1d6d6f26c0184f6c8ef29e1cf40a87e7.cloudapp.net is my application URL and port 80 is the port my application is listening at. For testing your application, replace these with your own application URL and port. If you see a blank window come up as a result of the telnet command, it means your application is working fine and listening at the specified port. If you see any errors, it means either your application is not listening at the specified port, or there might be an issue with your specific network that is blocking outbound traffic to the specified address. To quickly isolate whether it is an issue specific to your network, try running the same command over a generic internet provider, for example from your home network or by using a USB internet stick. Also, telnet is not installed by default on most machines; please make sure you have installed the Telnet client before running the command.
For more details on the telnet command, please refer to the linked article.

If you are unable to pin down the issue by running the ping and telnet commands, you might need to use the tracert command and capture network traces to analyze the networking issue. Involve your network administrators to help you with the same.

Other useful article(s):

When using Service Bus, regardless of which datacenter your namespace is hosted in, you will have to open up the IP range/ports for the United States (South/Central) address range if you set "ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Auto" (which is the default). The Microsoft.ServiceBus.dll assembly has code which connects to the United States (South/Central) datacenter.
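The telnet test can also be scripted when you need to check many endpoints. Here is a small Python sketch (the commented-out hostname is the article's placeholder URL — substitute your own application URL and port) that performs the same TCP reachability check:

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds, like the telnet test."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hostname, as in the article):
# can_reach("1d6d6f26c0184f6c8ef29e1cf40a87e7.cloudapp.net", 80)
```

A `True` result corresponds to telnet's blank window; `False` corresponds to an error, meaning either nothing is listening on that port or something on the network is blocking the outbound traffic.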
http://blogs.msdn.com/b/narahari/archive/2011/08/01/ip-range-for-windows-azure-platform-identifying-connectivity-issues.aspx
Tech Helproom

Is the following caused by Windows 7 or what?...

Posted September 29, 2011 at 4:46PM
I used FF6, now FF7 + IE9, on Windows 7. I originally opened this in the Windows 7 forum, but they tend to move a little slower.

Posted October 2, 2011 at 1:42PM
Sorry about that; in fact that's something else that is happening. Sometimes when I highlight something and click 'Copy' then 'Paste', the thing that is pasted is not what I was supposed to be copying just then. Instead it's something I copied prior. Although I can 'Cut' and 'Paste', all this is beginning to sound more like my original thought, a PC problem? Still, this should be the link... Click Here

Posted October 2, 2011 at 1:50PM
Sorry about that; in fact that's another thing, I'm not always able to 'Copy' and 'Paste'. But so far I am able to 'Cut' and 'Paste'. Which sounds more like my original theory, that it's a PC thing? Anyway, here's the link... Click Here. If you still want it, now that it's looking increasingly like the PC?

Posted October 2, 2011 at 1:52PM
I didn't realise it had gone over the page, hence the more or less duplicate posting.

Posted October 2, 2011 at 3:57PM
I think it must be your system. The site looks as it should on FF7 for me.

Posted October 2, 2011 at 4:11PM
I've just looked into my Event Viewer and it isn't a pretty sight. It's all red error exclamation marks and yellow warning exclamation marks. Reading a cross-section of them, every one I've read has been asking for driver updates for Windows 7. The errors and warnings date back to 06/07/2011, which was more or less when I got the PC. I assumed driver updates for Windows came with Windows Update? Apparently not! How and where do I get them?

Posted October 2, 2011 at 4:34PM
For a new PC you shouldn't have any warning marks.
All of the drivers should have been correctly installed and would only need updating if there were snags. You will need to check each device and then go to the manufacturer's site and download the drivers. But this should not have happened, and I would contact the supplier.

Posted October 2, 2011 at 4:48PM
No, perhaps I didn't explain it right. They aren't for third-party hardware; the drivers it's asking me to update are for Windows 7 itself?! I.e. boot drivers etc.

Posted October 2, 2011 at 4:55PM
Here's a brief taster...

Event Details:
  Product: Windows Operating System
  ID: 7026
  Source: Service Control Manager
  Version: 6.1
  Symbolic Name: EVENTBOOTSYSTEMDRIVERSFAILED
  Message: The following boot-start or system-start driver(s) failed to load: %1
  Resolve: Update Drivers

Posted October 2, 2011 at 5:08PM
Here's another one...

Event Details:
  Product: Windows Operating System
  ID: 10
  Source: Microsoft-Windows-WMI
  Version: 6.1
  Symbolic Name: WBEMMCCANNOTACTIVATEFILTER
  Message: Event filter with query "%2" could not be activated again in namespace "%1" because of error %3. Events may not be delivered through this filter until the problem is corrected.
  Resolve: Update permanent event subscriptions.

The trouble here is that CIM Studio doesn't mention support for any Windows above WinXP. But that's the least of my problems; even if it does support W7, I don't know how to use it.

Posted October 2, 2011 at 5:33PM
I'm going to give sfc /scannow a go before doing anything else, or maybe CHKDSK.
http://www.pcadvisor.co.uk/forums/1/tech-helproom/4082245/is-the-following-caused-by-windows-7-or-what/?ob=datea&pn=2
Created on 2019-11-04 21:21 by cboltz, last changed 2019-11-09 13:26 by kinow.

The following test script works with Python 3.7 (and older), but triggers an endless loop with Python 3.8:

```
#!/usr/bin/python3

import shutil
import os

os.mkdir('/dev/shm/t')
os.mkdir('/dev/shm/t/pg')

with open('/dev/shm/t/pg/pol', 'w+') as f:
    f.write('pol')

shutil.copytree('/dev/shm/t/pg', '/dev/shm/t/pg/somevendor/1.0')
```

The important point is probably that 'pg' gets copied into a subdirectory of itself. While this worked in Python up to 3.7, doing the same in Python 3.8 runs into an endless loop:

```
# python3 /home/abuild/rpmbuild/SOURCES/test.py
Traceback (most recent call last):
  File "/home/abuild/rpmbuild/SOURCES/test.py", line 15, in <module>
    shutil.copytree('/dev/shm/t/pg', '/dev/shm/t/pg/somevendor/1.0')
  File "/usr/lib/python3.8/shutil.py", line 547, in copytree
    return _copytree(entries=entries, src=src, dst=dst, symlinks=symlinks,
  File "/usr/lib/python3.8/shutil.py", line 486, in _copytree
    copytree(srcobj, dstname, symlinks, ignore, copy_function,
  ...
    copytree(srcobj, dstname, symlinks, ignore, copy_function,
  File "/usr/lib/python3.8/shutil.py", line 547, in copytree
    return _copytree(entries=entries, src=src, dst=dst, symlinks=symlinks,
  File "/usr/lib/python3.8/shutil.py", line 449, in _copytree
    os.makedirs(dst, exist_ok=dirs_exist_ok)
  File "/usr/lib/python3.8/os.py", line 206, in makedirs
    head, tail = path.split(name)
  File "/usr/lib/python3.8/posixpath.py", line 104, in split
    sep = _get_sep(p)
  File "/usr/lib/python3.8/posixpath.py", line 42, in _get_sep
    if isinstance(path, bytes):
RecursionError: maximum recursion depth exceeded while calling a Python object
```

I also reported this at

Hi, I did a quick `git bisect` using the example provided, and it looks like this regression was added in the fix for bpo-33695, commit 19c46a4c96553b2a8390bf8a0e138f2b23e28ed6. It looks to me that the iterator returned by

    with os.scandir(...)
includes the newly created dst directory (see the call to `os.makedirs(dst, exist_ok=dirs_exist_ok)` in `_copytree`). This results in the function finding an extra directory and repeating the steps for this folder and its subfolders recursively. It only happens because, in the example in this issue, dst is a subdirectory of src. The bpo-33695 commit had more changes, so I've reverted just this block of the copytree as a tentative fix, and added a unit test:

--

Here's a simplified version of what's going on:

```python
import os
import shutil

shutil.rmtree('/tmp/test/', True)
os.makedirs('/tmp/test')
with open('/tmp/test/foo', 'w+') as f:
    f.write('foo')

# now we have /tmp/test/foo, let's simulate what happens in copytree on master
with os.scandir('/tmp/test') as entries:
    # up to this point, /tmp/test/foo is the only entry
    os.makedirs('/tmp/test/1/2/3/', exist_ok=True)
    # <---- when the iteration starts below in `for f in entries`, it will find 1 too
    for f in entries:
        print(f)
```
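One way out of the trap described above is to materialize the `scandir()` iterator into a list *before* creating dst, so a dst nested inside src can never appear in the iteration. This is a hedged sketch of that snapshot approach (the helper name and structure are mine, not the actual CPython patch):

```python
import os
import shutil
import tempfile

# Hedged sketch: snapshot the directory listing before creating dst, so
# entries created under src afterwards (including dst itself) are not
# picked up and recursed into.
def copytree_snapshot(src, dst):
    with os.scandir(src) as it:
        entries = list(it)              # snapshot taken before dst exists
    os.makedirs(dst, exist_ok=True)     # safe even if dst is inside src
    for entry in entries:
        target = os.path.join(dst, entry.name)
        if entry.is_dir(follow_symlinks=False):
            copytree_snapshot(entry.path, target)
        else:
            shutil.copy2(entry.path, target)
    return dst
```

With this ordering, the problematic `copytree('/dev/shm/t/pg', '/dev/shm/t/pg/somevendor/1.0')` call from the report terminates, because 'somevendor' is created only after the listing of 'pg' has been taken.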
https://bugs.python.org/issue38688
CC-MAIN-2019-47
refinedweb
514
60.31
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.

It is not necessary for NO_IMPLICIT_EXTERN_C to be handled in two different places. Since I am trying to render cpplib independent of target headers, I chose to leave the handler in c-lex.c instead of the one in cppinit.c. (Technically, *both* of them are the wrong place; it should be done way upstream, in cppdefault.c etc, but this patch needs to happen anyway.)

The form of -E output on NO_IMPLICIT_EXTERN_C targets changes slightly; they will now generate the "4" markers on system header #-lines, as their IMPLICIT_EXTERN_C counterparts do. But those markers will be ignored when the preprocessed source is read back in.

Bootstrapped i686-linux. I also verified that on a NO_IMPLICIT_EXTERN_C target, system headers still don't get implicit extern "C", and that on an IMPLICIT_EXTERN_C target they do. Applied.

zw

	* cppinit.c (append_include_chain): Always pay attention to
	cxx_aware when setting new->sysp.  Remove ATTRIBUTE_UNUSED
	marker on argument.

===================================================================
Index: cppinit.c
--- cppinit.c	29 May 2002 17:15:31 -0000	1.235
+++ cppinit.c	31 May 2002 22:54:02 -0000
@@ -207,7 +207,7 @@ append_include_chain (pfile, dir, path,
      cpp_reader *pfile;
      char *dir;
      int path;
-     int cxx_aware ATTRIBUTE_UNUSED;
+     int cxx_aware;
 {
   struct cpp_pending *pend = CPP_OPTION (pfile, pending);
   struct search_path *new;
@@ -252,11 +252,7 @@ append_include_chain (pfile, dir, path,
      include files since these two lists are really just a
      concatenation of one "system" list.  */
   if (path == SYSTEM || path == AFTER)
-#ifdef NO_IMPLICIT_EXTERN_C
-    new->sysp = 1;
-#else
     new->sysp = cxx_aware ? 1 : 2;
-#endif
   else
     new->sysp = 0;
   new->name_map = NULL;
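The post-patch behavior of the hunk above can be restated as a tiny standalone function. This is a hedged sketch for illustration only; the enum names and the function are mine, not real cpplib code:

```c
#include <assert.h>

/* Hedged sketch of the post-patch logic, not real cpplib code: sysp is 0
 * for user include paths, 1 for C++-aware system dirs, and 2 for system
 * dirs whose headers get implicit extern "C" when the preprocessed
 * output is read back in. */
enum include_path_kind { USER_PATH, SYSTEM_PATH, AFTER_PATH };

static int compute_sysp(enum include_path_kind path, int cxx_aware)
{
    if (path == SYSTEM_PATH || path == AFTER_PATH)
        return cxx_aware ? 1 : 2;
    return 0;
}
```

The point of the patch is visible here: `cxx_aware` is always consulted, instead of being short-circuited to 1 on NO_IMPLICIT_EXTERN_C targets.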
http://gcc.gnu.org/ml/gcc-patches/2002-05/msg02631.html
crawl-001
refinedweb
269
64.61
Interacting with Microsoft's LED art piece at JSConf EU

Bryan Hughes ・ 3 min read

Hi CSSConf/JSConf EU friend! If you're here, then you've probably heard that we brought an LED art piece with us to the conferences. This article will tell you all about this project at a high level, and how you can engage with it.

This art piece is very loosely inspired by bamboo forests. This device features multiple LED "shoots" that are controlled from the cloud, by you. The hardware was built by my amazing teammate and Microsoft Cloud Advocate PM for Europe Jan Schenk from designs I created. Here is what the finished version of the art piece looks like:

Creating animations for this piece may look daunting at first glance, but don't worry. I put a lot of work into this project over the years to make it approachable for a wide variety of folks. This project is based on another project I created called Raver Lights. I started that project back in 2016 to control LEDs on art pieces destined for Burning Man and similar events. Suffice to say, it's pretty well tested at this point, having survived the extremes of the desert multiple times now.

There are two ways to control the animations in this piece. The first is at our booth, using a web interface running on the new Chromium-based version of Edge (ask us about it!). The second is by creating your own Azure Functions-based serverless endpoint, which gives you a lot more control over the animation than the booth interface.

Basic Animations

Don't let the name fool you, there's nothing boring about these animations! I'm just bad at naming things, to be completely honest. At the booth, you can submit an animation from tablets we'll have on hand running the new Chromium-based version of Edge. If you haven't heard yet, we're rewriting our browser to be based on Chromium! We're still hard at work getting it ready for general use, but you can download canary and dev channel builds today for Windows 10 and macOS.
I decided to have a little fun building this app. My design skills may not be the best, but I did get to use some pretty new technologies here: CSS Grid, the new CSS filter property, and WebAssembly. If I'm at the booth, ask me all about how I wrote it (I'm the one with purple hair). You can also check out the code on GitHub.

While these animations are certainly pretty, you don't quite get the same control as custom animations.

Custom

You can also write a custom Azure Function to gain complete control of the animation. For the curious, here is the code to create the default animation you see running at the booth:

```javascript
import {
  createWaveParameters,
  createMovingWave,
  createSolidColorWave,
  createPulsingWave
} from 'rvl-node-animations';

const animation = createWaveParameters(
  // Create the moving purple wave on top
  createMovingWave(215, 255, 8, 2),
  // Creating a pulsing green on top of the blue, but below the purple
  createPulsingWave(85, 255, 2),
  // Create the solid blue on bottom
  createSolidColorWave(170, 255, 255, 255)
);
```

This code uses the same tools you'll use to create a custom animation. Not so bad, right? Trust me, writing the animation engine that takes these parameters was considerably more difficult. To get started writing a custom animation, head over to our starter repo and read the instructions there.

Happy Hacking!
https://dev.to/azure/interacting-with-microsoft-s-led-art-piece-at-jsconf-eu-2cph
CC-MAIN-2019-39
refinedweb
607
70.53
:Do you think it will be like a full DragonFlyBSD install but then spread
:on to the clusters so that it is possible to log on to that virtual OS
:the same way like I do now with a physical machine?
:
:--
:mph

I think if we do it right, anything is possible. If each virtual 'cluster' is considered to be a 'machine', then theoretically one can do anything on that 'cluster' that one could do on a single machine, including running the installer on it :-).

--

I've been looking at the mount structure and it isn't quite suitable as the 'management' structure I want to pass to all the VOP calls as the first argument. Instead, what I think I'll do for the second stage is create a new structure, we'll call it 'fsmanage', for this purpose. This structure will contain a pointer to the mount point and will embed the vop_ops operations vector (I don't want to have too many indirections to get to the vop_ops). The vnode will then have a v_fm pointer to this management structure and we will remove v_op (now v_vops) and v_mount from the vnode structure. (It's a little messy to do things that way because each filesystem actually needs three sets of operations vectors for normal files/dirs, pipes, and device nodes, but I can't think of a cleaner way of doing it at the moment.)

Getting rid of the 'vnode must be the first argument' dependency that the current VOP subsystem has is very important for upcoming stages, in order to be able to do namespace-locked operations where a vnode is not necessarily present or desired.

I am also going to collect the cacheable and system-managed portions of the vnode (the VM object, vattr, range locks, supporting structures for cache coherency, and other things) all into their own structure, but for now I am just going to embed that structure into the vnode so nothing will really have changed (yet). I'll call that one 'fscache'.
But down the line I want all the kernel's high level cache management infrastructure to work on a 'fscache' structure rather than a 'vnode'. This will allow us to use the 'fscache' structure for a number of other purposes including, in the far future, the holding of remotely cached data in the cluster. Other more esoteric possibilities include using such structures in pipes, sockets, block devices, and so forth.

In any case, it sounds complex but I actually think stage 2 will only take the weekend to get done. It's mostly grunt work because all the VOP_*() calls in all the VFS's have to be modified to supply the new argument.

-Matt
Matthew Dillon <dillon@xxxxxxxxxxxxx>
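As a thought experiment, the layout described in this message might look roughly like the following. Every name and field here is an assumption drawn from the message itself, not actual DragonFly source:

```c
#include <assert.h>

/* Hedged sketch of the structures described above; all names and fields
 * are assumptions based on the mailing-list message, not real code. */
struct mount;                          /* the mount point (opaque here) */

struct vop_ops {                       /* per-filesystem VOP vector */
    int (*vop_read)(void *ap);
};

struct fsmanage {                      /* passed as first arg to VOPs */
    struct mount  *fm_mount;           /* pointer back to the mount point */
    struct vop_ops fm_vops;            /* embedded: one indirection, not two */
};

struct fscache {                       /* cacheable, system-managed state */
    int fc_flags;                      /* stand-in for VM object, vattr, ... */
};

struct vnode {
    struct fsmanage *v_fm;             /* replaces v_vops and v_mount */
    struct fscache   v_cache;          /* embedded for now, per the plan */
};
```

The key property the message asks for is visible in the layout: the vop vector is reached through a single pointer hop from the vnode, and the cache state can later be split out without touching VOP callers.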
https://www.dragonflybsd.org/mailarchive/kernel/2004-08/msg00173.html
CC-MAIN-2017-04
refinedweb
460
61.8
System DSN to Access DB

This article was contributed by J. Smits.

Environment: VC6 SP5, Win 2K SP2

The code below shows how you can add a Data Source Name at run-time.

- Add the classes from the download to your project. Alternatively, paste the code below into the spot of your choice.
- Make sure you have the following include-statements in your project:

```cpp
#include <afxdb.h>     // Needed for WriteProfile
#include <odbcinst.h>  // Needed for SQLConfigDataSource etc.
```

- Create an object from the class you have included, or, if you have not done so, declare the function "CreateDSN".
- Compile and run the program.
- Check whether the DSN has been created. Be mindful that if you keep the ODBC panel open, it doesn't refresh, even if you click a different tab. For that reason, consider using the Registry editor (see illustration). Just press F5 and new entries are listed!

Please supply any suggestions to improve my code to this site! Thanx!

```cpp
BOOL CODBC::CreateDSN(CString sDBPath,
                      CString sProjectName,
                      CString sDescription)
{
    // Function creates a System DSN.
    // ProjectName should be one word ("Demo").
    // DBPath should be a full path without filename ("C:\Tests").
    // Description: optional parameter.

    // Check in ODBC.INI (in your Windows dir) if the DSN already exists:
    HKEY hKey;
    if (ERROR_SUCCESS == RegOpenKeyEx(HKEY_LOCAL_MACHINE,
            "Software\\ODBC\\ODBC.INI\\" + sProjectName,
            0L, KEY_QUERY_VALUE, &hKey))
    {
        RegCloseKey(hKey);
        AfxMessageBox("Data Source is already registered");
        return FALSE;
    }

    char MdbFile[_MAX_FNAME];
    char DSNName[_MAX_FNAME];
    char Description[_MAX_PATH];
    char* szDesc;
    char* szAttributes;
    int mlen, i = 0, j = 0;

    lstrcpy(DSNName, sProjectName);
    sProjectName = sDBPath + "\\" + sProjectName + ".mdb";
    lstrcpy(MdbFile, sProjectName);
    lstrcpy(Description, sDescription);

    szDesc = new char[256];
    szAttributes = new char[256];

    // Use hexadecimal 'FF' (= 255) as a temporary placeholder:
    wsprintf(szDesc,
             "DSN=%s \xFF DESCRIPTION=%s \xFF DBQ=%s \xFF FIL=MS Access;\xFF \xFF ",
             DSNName, Description, MdbFile);
    mlen = strlen(szDesc);

    // Loop to replace "\xFF " by "\0" (so as to store multiple
    // strings into one):
    while (i < mlen - 1)
    {
        if ((szDesc[i] == '\xFF') && (szDesc[i + 1] == ' '))
        {
            szAttributes[j] = '\0';
            i++;
        }
        else
        {
            szAttributes[j] = szDesc[i];
        }
        i++;
        j++;
    }

    // Create the DSN:
    if (!SQLConfigDataSource(NULL, ODBC_ADD_SYS_DSN,
                             "Microsoft Access Driver (*.mdb)\0",
                             (LPCSTR)szAttributes))
    {
        AfxMessageBox("Failed to add Data Source\nBe sure "
                      "you are logged on as an Administrator");
        delete [] szDesc;
        delete [] szAttributes;
        return FALSE;
    }
    else
    {
        AfxMessageBox("Data Source Name was added successfully");
    }
    delete [] szDesc;
    delete [] szAttributes;
    return TRUE;
}
```

Downloads

Download demo project - 30 Kb
Download source - 2 Kb

create DSN having datasource on a different computer
Posted by Legacy on 03/14/2002 12:00am
Originally posted by: Komila

My Access database file is on one PC. I want to create a DSN on another PC, specifying my database file on the other computer. How can I do this?
-Komila.

error
Posted by Legacy on 11/22/2001 12:00am
Originally posted by: Anonymous

Timely
Posted by Legacy on 11/21/2001 12:00am
Originally posted by: Neal Horman

How timely....
I was just thinking about how this could be done. I didn't want to take a day to figure this out right now, so I was going to leave it for later. Thank you.
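As an aside, the doubly NUL-terminated attribute block that the article builds with the '\xFF' placeholder trick can be constructed more directly. This is a hedged sketch (the helper name is mine, and it deliberately avoids the ODBC headers so it stands alone); the resulting buffer is what you would hand to SQLConfigDataSource:

```cpp
#include <cassert>
#include <string>

// Hedged alternative sketch (not from the article): build the doubly
// NUL-terminated attribute block for SQLConfigDataSource with std::string,
// which preserves embedded '\0' characters, instead of the '\xFF' trick.
static std::string MakeDsnAttributes(const std::string& dsn,
                                     const std::string& desc,
                                     const std::string& mdbPath)
{
    std::string attrs;
    attrs += "DSN=" + dsn;             attrs += '\0';
    attrs += "DESCRIPTION=" + desc;    attrs += '\0';
    attrs += "DBQ=" + mdbPath;         attrs += '\0';
    attrs += "FIL=MS Access;";         attrs += '\0';
    attrs += '\0';                     // double-NUL terminator
    return attrs;                      // pass attrs.c_str() to the API
}
```

Because std::string tracks its own length, the embedded NULs survive, and there is no fixed 256-byte buffer to overflow.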
http://www.codeguru.com/cpp/data/mfc_database/microsoftaccess/article.php/c4345/System-DSN-to-Access-DB.htm
CC-MAIN-2017-17
refinedweb
505
54.02
Last week I had to write a small WPF application, and I was surprised how hard it is to get WPF right. By "right" I mean not just getting the code to work, but writing easily maintainable code I won't feel ashamed of later. I haven't been programming in WPF for about a year. A year is not long enough to lose touch, but long enough to get some perspective. While I was in the thick of it, I did not realize how ridiculous some things really are. Of course, WPF is virtually dead, so its problems may not really matter, but WPF's alleged successor "Modern UI" a.k.a. "Metro" a.k.a. "WinRT" inherited most of WPF's baggage. So, whenever I say "WPF" it really means "WPF, Silverlight, Metro and all other XAML-based technologies". In fact, WPF's successors made some problems worse, e.g. by excluding MarkupExtension.

WPF problem number one is that it is verbose. You have to write too much code to achieve simple things. Verbosity in WPF comes in multiple flavors:

- XAML is verbose.
- NotifyPropertyChanged is verbose and repetitive.
- Value converters are verbose.
- Dependency properties are verbose.
- Attached properties are outright scary.

WPF problem number two is lack of a consistent programming model. Putting business logic in views leads to mixing it with presentation, and your data binding looks awkward. Putting business logic in view models leads to ridiculous questions like "how do I handle double click", "how do I open a new window from my view model", or "how do I set focus to a control from my view model". Any way you do it, it is either complex, creates unwanted dependencies, or both.

XAML is Verbose

It is amazing how little attention WPF authors put to the expressive power of the language.
E.g., consider this grid definition:

```xml
<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="*" />
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
    <Label Grid.Row="0">Foo</Label>
    <TextBox Grid.Row="1" TextWrapping="Wrap" AcceptsReturn="True"
             HorizontalScrollBarVisibility="Auto"
             VerticalScrollBarVisibility="Auto" />
    <Button Grid.Row="2">Submit</Button>
</Grid>
```

Who came up with the idea of enumerating rows in a separate section and then referring to them by number? Whose idea was it to give the HorizontalScrollBarVisibility attribute a 29-character name? With some effort this definition could be simplified to

```xml
<Grid>
    <Row><Label>Foo</Label></Row>
    <Row Height="*"><TextArea /></Row>
    <Row><Button HAlign="Center" Width="200">Submit</Button></Row>
</Grid>
```

If we abandoned XML, which is by definition verbose, we could even get something Python-like:

```
Grid
    Label "Foo"
    Row Height="*"
        TextArea
    Button HAlign="Center" "Submit"
```

None of that happened. Of course, there are tools that allow you to create XAML visually, but they tend to suck, and the resulting XAML is even more verbose than the one written by hand.

WPF properties are verbose

NotifyPropertyChanged is another code-bloating monster. In WPF, each view model property has to raise the PropertyChanged event when it is updated. This sounds innocuous enough, but in practice instead of

```csharp
public int MyProperty { get; set; }
```

you get

```csharp
public int MyProperty
{
    get { return _myProperty; }
    set
    {
        _myProperty = value;
        RaisePropertyChanged("MyProperty");
    }
}
private int _myProperty;
```

Ouch. Double ouch, because this has to be repeated for every property in every view model. There are multiple solutions to that problem, from proxy classes a la NHibernate to sweeping updates like in Angular, but WPF did not even attempt to address it. Third-party frameworks allow using a lambda expression instead of a string property name. This reduces typos, and the associated silent lack of updates at run time, but does not address verbosity in any way.
I won't cover dependency properties and attached properties here. Look at some examples in MSDN. Suffice it to say that part of the example reads "FrameworkPropertyMetadataOptions.AffectsRender".

Value converters are verbose

Value converters are needed for the simplest tasks in WPF. E.g., if I have a Boolean property that I want to show as "yes" when true, and as "no" when false, I need a value converter (or I could just do the conversion in my view model, but this is usually a bad idea). The framework provides only a handful of converters out of the box, and most of those are highly specialized ad-hoc converters WPF authors needed for themselves. No attempt was made to create a comprehensive library of converters. To convert a Boolean to a string I need to write a converter class:

```csharp
public class BoolToTextConverter : IValueConverter
{
    public string TrueText { get; set; }
    public string FalseText { get; set; }

    public object Convert(object value, Type targetType, object parameter,
        System.Globalization.CultureInfo culture)
    {
        bool flag = false;
        if (value is bool) flag = (bool)value;
        else if (value is bool?) flag = ((bool?)value == true);
        return flag ? TrueText : FalseText;
    }

    public object ConvertBack(object value, Type targetType, object parameter,
        System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}
```

Then I need to reference this class in my XAML, and don't forget the namespace reference too:

```xml
<Window xmlns:local="clr-namespace:MyApp">
    <Window.Resources>
        <local:BoolToTextConverter x:Key="BoolToText"
                                   TrueText="yes" FalseText="no" />
    </Window.Resources>
```

And then finally I can use my beloved converter in a binding expression:

```xml
<TextBlock Text="{Binding MyBoolValue, Converter={StaticResource BoolToText}}"/>
```

There are more elegant ways to do this, but the amount of ceremony remains astonishing. Also, neither Silverlight nor WinRT have the MarkupExtension class, so this solution applies only to pure WPF. For a simple task like this I expect to write something like

```
{Binding MyBoolValue, False=No, True=Yes}
```

and be done with it.
WPF lacks conceptual coherence

WPF offers multiple ways of handling user input and program output:

- Data binding.
- Routed events.
- Commands.
- Direct calls on controls like SetFocus().

Somehow WPF authors never came up with a strategy that allows all these methods to co-exist peacefully. Data binding assumes you are going to use view models, which are data objects devoid of presentation details. This is a great idea for separation of concerns, but other ways of handling program/UI interaction don't fit well with it. Routed events are handled in views and completely ignore view models. Commands are better, but they are quite limited in scope compared to routed events, and are unusable out-of-the-box, since WPF provides no implementation for the ICommand interface. Every project needs to include an implementation like DelegateCommand or RelayCommand, or borrow it from a third-party framework. Frankly, I find such lack of fundamental building blocks disturbing.

Some calls like "SetFocus" or "ShowDialog" don't fall under any of the "structured" categories above, and they are the worst. Every time you write a program bigger than "Hello, world", you keep running into philosophical questions, because view models are not supposed to know about the views, and MVVM machinery simply does not have easy answers on how to achieve things like handling arbitrary events, setting focus, or creating new windows.

Bottom line

WPF has a lot of interesting ideas, especially around hierarchical controls. I like the fact that I can put a checkbox inside a button or a button inside a checkbox. However, pervasive verbosity and lack of conceptual coherence make creating good WPF (Silverlight, Metro) applications conceptually hard, forcing the developer to write lots of code to achieve simple things, and to struggle with philosophical questions which should have been solved at the framework level.
https://ikriv.com/blog/?p=1991
CC-MAIN-2022-40
refinedweb
1,203
54.52
The purpose of sharing bugs

Bugs often accompany me in the development process. If I can't fix or reproduce them, I tend to ignore them. So my plan is this: whenever I collect nine valuable, thought-provoking bugs, I will share them together, as a way of thinking about the work itself and constantly improving my awareness. After all, the bugs you meet as your ability improves are bound to be different, and if you encounter fewer and fewer bugs, it only shows that your work and learning tasks are standing still and not saturated. In that case, let bugs witness one's growth.

1: Element UI: the el-card tag

Bug phenomenon:

One day a tester suddenly told me that there was a bug on a page that had not been touched in the current version. Some data on the page was empty, and some button clicks had no effect.

Bug tracing:

First analysis: if there is a problem with a page that hasn't been touched at all, it is either a problem left over from history or a problem with what the back end returns, so go and check. But the process of checking left me dumbfounded. The back end returns the value, but somehow a 'null value' error kept being reported. Moreover, this is a three-year-old project from another project group. To tell the truth, the code is rotten and messy: a lot of mixins are used, part of the logic lives in node_modules, and there is no documentation. At this point we could only crack it by brute force, tracing step by step with a debugger. After a long time, I found the key point: a certain child component did not get a value, and that child passes a value up to the parent component, which caused some values in the parent to be empty and an error to be thrown. Looking at the online version of the project, there was no such problem, so if my version has the problem, someone must have touched the code of this page, or some global attribute of the overall environment has been tampered with.
The problem is extensive. Pull up the previous version of the current page from GitLab (here is a trick): commit the current code to generate a cache, then overwrite the current code with the previous version's code; in the diff you can see the difference between the two versions. I found that the original root tag was `<div>...</div>`, but this time it was `<el-card></el-card>`. My thinking stalled a bit: I had only changed a tag, involving no attributes or variables, so how could it make the data collapse??

Bug reason:

The culprit is `this.$parent`. It turns out that a previous developer used the parent component instance to get the data passed down by the parent. Then the problem is clear: if the `<el-card></el-card>` tag becomes the component's parent, it can no longer reach the data one level higher, so the data is empty; the 'killer' is found.

Bug resolution:

- Change `this.$parent` to explicit prop passing in the template, and test whether it works.
- Search the whole project for similar usages and correct them one by one.
- Find the colleague who added `<el-card>`, explain the reason, and ask whether code has been changed this way elsewhere, in this or other projects.

Bug thinking:

Using `this.$parent` or `this.$children` to obtain and pass data is quite inappropriate, because there is no clear data source and consumer, which makes problems hard to trace. This "parent-child" relationship is very fragile: it is easy for the 'parent component' to suddenly become a grandparent component, and a bug will follow. So it is not recommended unless necessary, or in a highly encapsulated environment.

If you change a piece of code casually, the fact that you see no impact doesn't mean there really is none. Don't skip the 'verification' step. You need to be responsible for your own code, OK?
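The fragility of reaching up the component tree can be shown without Vue at all. This is a hedged sketch in plain JavaScript (the factory names are mine): a child that reads data through its parent reference, the way `this.$parent` does, silently breaks as soon as any wrapper node is inserted between it and the data; a child that receives the value explicitly, like a prop, keeps working:

```javascript
// Hedged sketch (plain JS, no Vue): why reading data through the parent
// reference is fragile. Wrapping the child in an <el-card>-like node
// changes what "parent" means and silently breaks the lookup.
function makeChild(parent) {
  return { get value() { return parent && parent.value; } };
}

const page = { value: 42 };
const direct = makeChild(page);     // child sits right under the page
const wrapper = {};                 // an <el-card>-like wrapper appears
const wrapped = makeChild(wrapper); // now "parent" is the wrapper, no data

// Explicit props keep working no matter how deep the nesting is:
function makeChildWithProp(value) { return { value }; }
const propped = makeChildWithProp(page.value);
```

The `wrapped` child gets `undefined` even though nothing about its own code changed, which is exactly the failure mode in this bug.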
2: AntV: line chart axis reversed

Bug phenomenon:

We had used ECharts to develop charts before; in this version AntV was used. But something strange happened: the Y axis of the line chart was reversed, arranged strangely as 10, 5, 0, which turned the chart upside down.

Bug tracing:

There was no similar problem before, but it appeared in this version. The back end has no dynamic interface in this version, so the problem lies in the use of this component. First analysis: is there an attribute that controls the direction of the Y axis? I checked the documentation and found nothing. Second analysis: look at the example on the official website; I copied it character for character, but the axis was still reversed. Third analysis: so the problem is in the data. But the data did not change, so the problem is in how the two libraries process data. Careful observation, and more careful observation, and found it! The numbers returned by the back end are of 'string type'.

Bug reason:

It turns out AntV will only sort 'numbers' from small to large; strings are displayed in the order received. I just convert the data once before handing it over.

Bug thinking:

Different component libraries process data in different forms. Don't think that knowing the API of a component library is enough when switching; there is a lot of 'original sin' hiding in such assumptions.

Using three different chart libraries has benefited me a lot, not because of the flashy chart effects, but because thinking about how each chart library is meant to be called gave me a deeper understanding of design patterns.
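The one-line fix mentioned above, coercing the back end's string values to numbers once before handing the rows to the chart, can be sketched like this (the helper name and field names are mine, for illustration):

```javascript
// Hedged sketch: coerce one field of every row from string to number
// before passing the data to the chart, so the axis sorts numerically.
function coerceNumeric(rows, field) {
  return rows.map((row) => ({ ...row, [field]: Number(row[field]) }));
}

const fromApi = [
  { month: 'Jan', value: '10' },
  { month: 'Feb', value: '5' },
  { month: 'Mar', value: '0' },
];
const fixed = coerceNumeric(fromApi, 'value');
```

Doing the conversion in one place, right where the API response is received, keeps the chart components free of assumptions about the back end's types.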
If I found that the last version had no problem, I would locate the problem in the current version. In fact, I should first look at the console to report an error. However, because the page is blank, I naturally thought it was completely collapsed, or the background returned an error, (next time, we must trace the error information of the console first) that is, this paragraph “syntax error: invalid regexp group” is a problem with forward-looking matching. I used the following code on the filter of adding a semicolon to the value (in fact, what I wrote is to flip the character string and add ‘,’) export default { Install (VM) {// if it is followed by a multiple of 3, then add a ',' vm.filter('formatThousand', (num) => { if (!num) return 0; const reg = /\d{1,3}(?=(\d{3})+$)/g; return (`${num}`).replace(reg, '$&,'); }); }, }; So if you comment out this code, you won’t report an error? The answer is no, and the error reporting is still going on. I open the vscode search desk and input the forward-looking matching syntax, but I can’t find anything. Thinking… The vscode search desk also has limitations, for example, it won’t go deep into the node_ The search in modules is also for performance, so it must be the newly introduced plug-in. In addition to the problem, the search is really because the ‘3D view’ component made by our own company uses regular forward-looking matching, but in order to better promote the development of the company’s technology, we must also use the company’s technology. Now it’s not a complaint but feedback, Make this problem clear with the visualization department. Fortunately, the current project doesn’t need to be compatible with Firefox, but other projects should pay attention to it What is prospective matching Regular is the basis of JS, if the unskilled students need to introspect,? You can also get rid of greedy mode (?= Exp) matches the position following the expression exp (?! 
Exp) matches the position where the expression exp is not satisfied Bug reason: The original IE and Firefox browser do not support forward-looking matching Bug thinking: Most of the time, we will ignore the compatibility of some methods, and even some plug-ins don’t take care of the compatibility, so we must choose the technology according to the requirements of the project. If we develop plug-ins ourselves, we should also write a good compatibility range for everyone to use 4: SCSS: importing SCSS file is invalid Bug phenomenon: We need to change the style of the table globally. I took out a single SCSS file and put it on the outermost layer. But something strange happened. The nested writing method in the SCSS file doesn’t work. We have to take it out and write it like a CSS file Bug tracing: Positioning problem: if the nested writing method is invalid, is the variable valid? The answer is also invalid. That is to say, I have no problem importing SCSS files, and the system also parses them, but I don’t know how to write SCSS. Is my SCSS loader broken?? It’s OK to write lang =’scss’ in each Vue file, so it’s still not OK to copy and paste the previous project Bug reason: There are two ways to introduce CSS. One is provided by SCSS, as follows: Using the second method can correctly parse the SCSS file @import url('./assets/style/animation.css'); // Provided by CSS @import '@/assets/style/animation.scss'; // Provided by SCSS Bug thinking: Don’t underestimate the way of introduction Writing CSS is a very important part of a project. There are two main ideas: Ben and OOCSS. 
Some people write CSS directly in the HTML file, while others write all CSS in app.vue without adding `scoped`, as one global model. Of course, neither is a problem as such, but after all we are engineers with ambition: code should be both 'beautiful' and 'engineered'; 'color and fragrance' make good code.

5: Vue: don't name a folder bin

Bug phenomenon:

In my last article, I shared how to engineer mocks in a project. However, after putting the mock setup into some projects, strange errors occurred. At first it starts normally, and port 8080 comes up successfully. But... the localhost service is terminated the moment the browser accesses it, a forced exit??

Bug tracing:

To tell the truth, my mind went blank at first, and I couldn't figure out the specific reason. After several repeated trials I pinned down the specific phenomenon of the bug. I put a debugger at each operation, but I still couldn't trace the cause. But the error was reported in the bin folder, and the files in it didn't run normally. So, is there something wrong with the bin folder itself? Try changing the name? That worked~~

Bug reason:

The folder name bin is quite special; changing it to anything else immediately fixed the problem.

Bug thinking:

Don't casually use common system file names. Once before, I wrote my own ccpack whose entry was also an index file, and it conflicted with the index file resolution of the Node environment. You can't call everything index.... Be careful when naming things.

6: Token: where is it better to store it?

Bug phenomenon:

Our token has always been stored in a cookie, but recently we have been learning about web security.

Bug reason:

A CSRF attack works as follows: you log in to website A successfully, and B is a phishing website. When you click something on B, it sends a request to website A impersonating you, and the browser attaches the cookie information from your login to A by default.
This request can not only use an img tag to make a GET request, but also a form to make a POST request. So the token can't be placed in the cookie, and that cookie isn't even HttpOnly. It is more reasonable to put it in local storage: every request attaches it to the header, it is easy to update at any time, and this way CSRF cannot get at the value in local storage.

Bug feedback:

We discussed this problem with the relevant colleagues. Adding the token to the cookie is done directly by the back end, and because of history it has been encapsulated in a unified middleware, so modifying it would have a wide-ranging impact. But other measures can serve as remedies, such as verifying the Referer source information, whitelists, secondary verification and so on.

The lesson from this issue is that much core logic cannot be laid down hastily and could be better. Why not do that?

7: PM2: restart is not reliable

Cause of the bug:

At present there are many server-side rendering projects; most front-end engineers use Node, and most of them use PM2 for process protection. After all, PM2 is concise and has its own 'load balancing'.

Bug phenomenon:

In the local and test environments there was no problem. On the evening of going live, I performed the following operations:

```shell
sudo -s
cd /home/xxxx/xxxx   # production environment directory
git pull
npm run build
pm2 restart all
```

But a strange thing happened. On starting it was 'successful'; after 2 seconds, all four servers went to error (why four servers: four cores were allocated). I rebuilt the project and started it again; `pm2 restart all` was still invalid. A bug's arrival should excite us, but we were all in a hurry to launch the project, so nerves it was. First, look at the error log. Because it's in the server environment we could only read it with cat, but the log was a mess, and after looking at it for a while we couldn't find the real reason.
Was there a serious problem with the build? The test environment was fine. At that point I fell into the mindless loop of repeatedly rebuilding and restarting. Calm down: could it be that this was not a packaging error but a service error? That the problem was not in our code, but in PM2 itself? On the assumption that 'restart' could not fix it, I removed all four processes with pm2 delete <id> and then started fresh:

pm2 start ./server.js -i 4

and this time the restart really was OK.

Bug analysis: Looking for the cause in your own code first is a good quality, but you should not only suspect yourself; consider that other pieces of technology can fail too. PM2 is just another piece of software; occasionally restarting the processes it supervises simply does not work, and deleting them and starting again does.

8: Vue: the use of $ variables

Cause of the bug: A component library developed by the company's 3D team requires that, when it receives an instance, the property be named '$a', not 'a'. After consulting the visualization team, we learned that properties starting with $ or _ are not observed by Vue, so we can observe such a variable ourselves, which is more flexible.

Using $: define a $txt property in data and interpolate it in the template as <span>{{$txt}}</span>, and you get the following error:

`Property "$txt" must be accessed with "$data.$txt" because properties starting with "$" or "_" are not proxied in the Vue instance to prevent conflicts with Vue internals.`

Using _: define a _txt property in data and interpolate it, and the error is:

`Keys starting with '_' are reserved`

(Keys starting with '_' are reserved for Vue's internal use.)

Learning from the 'bug':

data(){
  return {
    $txt: {},
  }
},
created(){
  this.$txt = { n: 2 };
},
mounted(){
  console.log(this.$txt); // {n: 2}, but an interpolated value in the template will not update
}

Now let's observe it ourselves:

created() {
  this.$txt = { n: 2 };
  let n = 2;
  Object.defineProperty(this.$txt, "n", {
    get() {
      return n;
    },
    set(val) {
      return (n = val);
    }
  });
},
mounted() {
  this.$txt.n = 3;
  console.log(this.$txt.n); // 3
}

What this achieves: the data can still be reached by hanging it on the instance, yet it is not observed by Vue, which opens up more room to play; a very good piece of thinking.

Explanation: using 'a' would make Vue's observation conflict with the 3D team's own observation, so '$a' avoids the conflict.

Feedback: I reported this point to the 3D team, hoping they will state it clearly in the documentation so other colleagues don't have to scratch their heads.

9: dom: ResizeObserver

Bug display: I found a problem while using the chart component developed by the visualization team. When the window size changes, the chart automatically adjusts its layout width. But this chart fills 100% of its parent (or fills it with flex: 1), and when the parent changes size without triggering the window.onresize event, the chart does not adjust its width.

Cause of the bug: The component only listens for window size changes and ignores size changes of its own parent element.

Bug solution 1: Add a callback to every function that can change the parent's width, so I know each time the parent changes. But this has to be wired, one by one, into every function that can affect the parent.

Bug solution 2: The magical ResizeObserver (browser support was limited, essentially Chrome at the time), which is easy to use:

<template>
  <div>
    <div ref="wrap" class="wrap">
      <div class="box">1</div>
    </div>
  </div>
</template>

<script>
export default {
  mounted(){
    const wrap = this.$refs.wrap;
    // Create the observer instance
    const resizeObserver = new ResizeObserver((items) => {
      console.log('changed', items); // items is an array of the entries that changed
    });
    // Start observing the DOM element
    resizeObserver.observe(wrap);
    // Of course you can also stop observing, so you don't have to worry about performance
    // resizeObserver.unobserve(wrap);
  }
}
</script>

<style>
.wrap{
  border: 1px solid red;
  width: 50%;
}
.box{
  border: 1px solid black;
  margin: 20px;
}
</style>
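Going back to bug 6: the advice to keep the token in localStorage and attach it to every request's headers can be sketched like this. This is a generic illustration; the helper name authHeaders is mine, not an API from the article or any real project.

```javascript
// Hypothetical helper: build request headers that carry the token.
// authHeaders is an illustrative name, not a real project API.
function authHeaders(token, extra = {}) {
  // Merge an Authorization header into whatever headers the caller passes;
  // if there is no token, return the caller's headers unchanged.
  return token ? { ...extra, Authorization: 'Bearer ' + token } : { ...extra };
}

// In the browser you would typically write:
//   const token = localStorage.getItem('token');
//   fetch('/api/user', { headers: authHeaders(token) });
// A phishing page on another origin cannot read this localStorage value,
// which is what defeats the CSRF scenario described above.
```

Because the token travels in a header rather than a cookie, the browser never attaches it automatically, so a forged img or form request from another site carries no credential.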
https://developpaper.com/issue-1-sharing-nine-bugs-on-the-front-end/
Hi, I'm working on a project where we use Flex 3.6 with Adobe AIR 3.3. I'm trying to run my tests in IntelliJ, but the IDE generates an app descriptor with AIR version 1.5.3. Is it possible that this is decided by the version of Flex, and not of AIR? In another project where I use a higher Flex version, the app descriptor is correct. Unfortunately we have dependencies on Flex 3.6 in this project and cannot change it yet. Thanks, Mark

Hi, What is your IntelliJ IDEA version? I hope it is 11.1+. You need to use a custom AIR app descriptor template with the 3.3 namespace. And this is not only for FlexUnit; this applies to the usual Flash run configuration as well. The option is on the AIR Package tab of the Flash build configuration page (Project Structure dialog).

Yes, I am using 11.1.4. I was already using a custom app descriptor; now I tried a generated 3.3 one, but the one copied to the output folder each time has version 1.5.3.

Sorry, the issue seems to be fixed only in IntelliJ IDEA 12 EAP. A workaround is to configure a separate build configuration for running tests that uses a more recent Flex SDK.
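For reference, the AIR version an app descriptor targets is selected by the XML namespace on its root element, which is what the custom-template advice above is about. A minimal sketch of a descriptor targeting AIR 3.3 (the id and filename values are placeholders, not from this thread):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- The namespace version is what selects the AIR runtime version. -->
<application xmlns="http://ns.adobe.com/air/application/3.3">
    <id>com.example.testapp</id>         <!-- placeholder -->
    <filename>TestApp</filename>         <!-- placeholder -->
    <versionNumber>1.0.0</versionNumber>
    <initialWindow>
        <content>[This value will be overwritten by the IDE]</content>
    </initialWindow>
</application>
```

If the generated descriptor shows the 1.5.3 namespace instead, the packaged app will be bound to that old runtime regardless of which AIR SDK was used to compile.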
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206221719-FLEX-3-6-with-Air-3-3?sort_by=votes
irrKlang is a cross-platform sound library for C++, C# and all .NET languages. Read more

irrKlang 1.5 released. This adds 64 bit support on all platforms (Windows, Linux, Mac, .NET); a separate SDK for 64 bit is now available. This release also adds Unicode and 24 bit FLAC support, and improves performance on Linux.

irrKlang 64 bit beta released. The first build supporting 64 bit architectures is available now. It is a beta version, only working with Visual Studio. It can be downloaded from the download page and used together with irrKlang 1.4.0.

irrKlang 1.4.0b released. Improved Linux compatibility: some games using irrKlang have been released on Valve's "Steam for Linux" and now also work there nicely with this update.

irrKlang 1.4.0 released. This fixes a few bugs and adds performance improvements. See changelog for details.

irrKlang 1.3.0b released. This adds support for .NET version 4. See changelog for details.

irrKlang 1.3.0 released. Adds the possibility to capture the mixed output audio data. See changelog for details.

irrKlang 1.2.0 released. Includes support for playing back .FLAC files. See changelog for details.

irrKlang 1.1.3c released. Bug fix: fixed a potential crash when using looped, streamed sounds in 3D. See changelog for details.

irrKlang 1.1.3b released. Small update release: fixed a small bug with looping streamed sounds in DirectSound, added .NET IDisposable interface implementations. See changelog for details.

irrKlang 1.1.3 released. Improved performance and compatibility of the Mac OS X version, reduced latency of the Linux version, added multi-channel audio recording, and added several small improvements to the .NET version, including examples for VisualBasic.NET.

irrKlang 1.1.2b released. Added support for (external) multichannel sound cards on Mac OS X.

irrKlang 1.1.2 released.
New: Support for 24 bit wave files, versions for the .NET runtimes 1.1 and 2.0, access to the internal audio interfaces, several other small new features, new examples and several bug fixes.

irrKlang 1.1.0 released. New: Direct access to decoded PCM sample data, bug fixes, speed optimizations and memory usage improvements. See change log for details, or download irrKlang 1.1.0 now.

irrKlang 1.0.4 released. New: Audio recording for Windows, PCM sound sources, minor bug fixes. See change log for details, or download irrKlang 1.0.4 now.

irrKlang 1.0.3 released. See change log for details, or download irrKlang 1.0.3 now.

This C++ example code shows how to play an mp3 file:

#include <iostream>
#include <irrKlang.h>
using namespace irrklang;

int main(int argc, const char** argv)
{
    // start the sound engine with default parameters
    ISoundEngine* engine = createIrrKlangDevice();

    if (!engine)
        return 0; // error starting up the engine

    // play some sound stream, looped
    engine->play2D("somefile.mp3", true);

    char i = 0;
    std::cin >> i; // wait for user to press some key

    engine->drop(); // delete engine
    return 0;
}

The following example was written in C# and shows how to play an mp3 file:

using IrrKlang;

namespace HelloWorld
{
    class Example
    {
        [STAThread]
        static void Main(string[] args)
        {
            // start up the engine
            ISoundEngine engine = new ISoundEngine();

            // play a sound file
            engine.play2D("somefile.mp3");

            // wait until user presses ok to end application
            System.Windows.Forms.MessageBox.Show(
                "Playing, press ok to end.");
        } // end main()
    } // end class
} // end namespace

Games using irrKlang:
Games Farm's Shadows
2DBoy's World of Goo
Novacore's Legends of Pegasus
Boxed Dreams' Ceville
Positech's Gratuitous Tank Battles
Galcon's Galcon Fusion
Hammerware's Family Farm
Ageod's World War One
Orchid Game's Heartwild Solitaire
Kritzelkratz 3000's Sarah 2
Sechsta Sinn's Die verbotene Welt
Wintervalley Software's Maximum-Football
Elecorn's Caster
Digini's Blade3d
1morebee's Fiona Finch
Ntronium Games' Armada 2526
http://www.ambiera.com/irrklang/