text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Count the number of vowels, words, and sentences. Ice Brazuca Greenhorn Joined: Nov 08, 2009 Posts: 4 posted Nov 16, 2009 01:07:53 0 Hey guys I am trying to make a program called word that uses the a text file called output in order to: 1) Count the number of words. 2) Count the number of vowels (a,e,i,o,u *disregard y*) 3) Count the number of sentences. 4) Count the number of lines. 5) Count the number of punctuations. 6) Count the number of characters. For my output text I have: Time to test this program. I need to see if it will work, or if it will not work. This is like the 9000th version of the program! Will it work? Try it in 3, 2, 1, 0! Now I tried doing this program in two different ways and I haven't been able to make any of them work so I will show you guys both ways I've done it. ***Here is my first attempt at the program.*** import java.util.*; import java.io.*; import jpb.*; public class word { public static void main(String[] args)throws FileNotFoundException { String filename; String words; String line; String characters; int totalCharacters; String s; int count = 0; int countword = 0; int countCharacters = 0; int vowelCount = 0; try{ Scanner in = new Scanner(System.in); System.out.print("Enter name of input file: "); Scanner input=new Scanner(new FileReader(in.nextLine())); if(!input.hasNext()) {System.out.println("File is empty. Aborting Program"); System.exit(0);} while (input.hasNextLine()) { line = input.nextLine(); System.out.println(line); count++; Scanner inLine = new Scanner(line); while (inLine.hasNext()) { words = inLine.next(); System.out.print(words); countword++; } countCharacters += line.length(); for(int i = 0; i < line.length(); i++) { char c = line.charAt(i); if ((c == 'a') || (c == 'e') || (c == 'i')|| (c == 'o') || (c == 'u')) vowelCount++; } } System.out.println("Number of words: " + countword); System.out.println("Number of lines: " + count); System.out.println("Number of sentences: "); System.out.println("Number of vowels: " + vowelCount); System.out.println("Number of characters: " + countCharacters); System.out.println("Number of punctuations: "); PrintStream out=new PrintStream(new File("output.txt")); out.println("Number of words: " + countword); out.println("Number of lines: " + count); out.println("Number of sentences: "); out.println("Number of vowels: " + vowelCount); out.println("Number of characters: " + countCharacters); out.println("Number of punctuations: "); out.close(); System.exit(0);} catch ( FileNotFoundException e) {System.out.println("C:\\ The file you entered either do not exist or the name is spelled wrong.");} } } The errors I am getting with this one are: a) Each "answer" is repeated like 3 times for some reason. b) It won't read how many sentences I have in the output file. c) It won't read how many punctuations I have in the output file. ***Here is my second attempt at the program.*** import java.util.*; import java.io.*; import jpb.*; public class word{ public static void main(String[] args)throws FileNotFoundException {String filename,input; int lcount=0,vcount=0,pcount=0,ccount=0,wcount=0,scount=0; int i,j; char ch; char vowel[]={'A','a','E','e','I','i','O','o','U','u'}; char punct[]={'.','!',',','?',':',';'}; char white[]={'\n','\t',' '}; Scanner in=new Scanner(System.in); System.out.print("Enter input file name: "); filename=in.next(); try{PrintStream foutput=new PrintStream(new File("output.txt")); Scanner finput=new Scanner(new FileReader(filename)); if(!finput.hasNext()) {System.out.println(filename+"File is empty. Aborting Program"); System.exit(1); } while(finput.hasNextLine()) {input=finput.nextLine(); wcount++; for(i=0;i<input.length();i++) {ch=input.charAt(i); ccount++; for(j=0;j<white.length;j++) {if(ch==white[j]) wcount++; } for( j=0;j<punct.length;j++) if(ch==punct[j]) {pcount++; if(j<3) scount++; } for( j=0;j<vowel.length;j++) if(ch==vowel[j]) vcount++; } lcount++; } System.out.println("words: "+wcount); System.out.println("lines: "+lcount); System.out.println("sentences: "+scount); System.out.println("vowels: "+vcount); System.out.println("characters: "+ccount); System.out.println("punctuations: "+pcount); foutput.println("words: "+wcount); foutput.println("lines: "+lcount); foutput.println("sentences: "+scount); foutput.println("vowels: "+vcount); foutput.println("characters: "+ccount); foutput.println("punctuations: "+pcount); foutput.close(); finput.close(); System.exit(0); }catch ( FileNotFoundException e) {System.out.println("C:\\java word\n The file you entered either do not exist or the name is spelled wrong."); System.exit(2); } } } The error I am getting with this one are: a) It erases the output text file and then give me the message "File is empty. Aborting Program" Can anyone help me? I've been trying to get this program working for over three days... I agree. Here's the link: subject: Count the number of vowels, words, and sentences. Similar Threads String Array Loop Problem Passing Vector elements to an array counting specific words from a text file ArrayList Problem All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/471119/java/java/Count-number-vowels-words-sentences
CC-MAIN-2014-52
refinedweb
825
54.79
W3C RDF/Shoe stuff pls refer to this schema using the namespace name cookie crumbs: horn clause in the FOLDOC. aka "introduces" in larch vars(rule, l) = l is the list of vars used in rule hmm... List[Term] if(rule, l) = l is the list of premises in rule hmm... List[Statement] @@ I could perhaps model that as in(x, if(r)) -> type(x, Statement) then(rule, s) = s is the conclusion of rule Hmm... this forall mechanism of explicitly stating which symbols are variables is very general, and it makes explicit the connection between variable syntax and the particular logic implied by that syntax. But it's sort of tedious to work with. Some folks have propose[@@cite] distinguishing variables by their URI scheme, ala var:foo; I noted that this rubs me the wrong way[@@cite], but I couldn't fully explain why (except to appeal to the axiom of opacity@@link). It occurs to me, though, that we can reserve a part of URI space for variables without reserving a new URI scheme. See: KIF and RDF. busy(?who, ?t) :- member(?who, ?group), meets(?group, ?t) <r:Rule> <r:vars> <l:List> <l:first rdf: <l:rest> <List> <l:first rdf: <l:rest> <l:List> <l:first rdf: <l:rest rdf: </l:List> </l:rest> </l:List> </l:rest> </l:List> </r:vars> <r:if> <l:List> <l:first rdf: <l:rest> <l:List> <l:first rdf: <l:rest rdf: </l:List> </l:rest> </l:List> </r:if> <r:then rdf: </r:Rule> <rdf:Description <xx:member rdf: </rdf:Description> <rdf:Description <xx:meets rdf: </rdf:Description> <rdf:Description <xx:busy rdf: </rdf:Description> dealing with variables: base case dealing with variables: induction step: @@this pretty much introduces rule2 by an existential, and won't work in ALL; that is: it needs :the or :find (find or create)... or some sort of ruleOf([vars, if, then]) thingy? I maintain this as HTML, but I make it availble as RDF using a transformation.
http://www.w3.org/2000/04shoe-swell/inference
CC-MAIN-2014-52
refinedweb
336
64.85
On 6/20/05, Philip Martin <philip@codematters.co.uk> wrote:> Charles Bailey <bailey.charles@gmail.com> writes:> > > I've interspersed my comments in the code, since there's imho zero> > chance that this version of the patch will be> > substantially/stylistically suitable for committing.> > That doesn't encourage review! Anyway, if the comments are necessary> to understand the code then they should be part of the code. Hmm. Perhaps I'm phrasing poorly. My intent was to say that the patch wasfunctionally correct, but there are enough issues not only of style (ala HACKING)but of "project philosophy", for lack of a better term -- whethermacros are preferredto repeated code, framing of comments, calling sequence conventions --that it wasunlikely to be in final form. I'd hoped to encourage review, ratherthan put it off. Those comments which I intended to document the code are present as C-stylecomments in the patch. It's the "philosophical" questions I've interspersed. I hope this helps make my intent clearer. > > They're far from> > exhaustive, but this message is long enough already.> >> > Conceptual "Log message":> > [[[> > Add function that escapes illegal UTF-8 characters, along the way> > "valid" rather than "legal", "invalid" rather than "illegal". That> applies to function names, variables, comments, etc. OK, I'm happy to follow that convention. > > refactoring core of> > string-escaping routines, and insure that illegal XML error message> > outputs legal UTF-8.> > ### Probably best applied as several patches, but collected here for review.> > If you think it should be several patches then submit several> patches. On the whole it looks like something that should be one> patch. Thanks. This is another place where I was unsure of the philosophy. If the policy is one-change-per-commit, then I can frame it as addingthe driver in one patch, using the driver to escape strings to UTF8 ina second, and using that to fix the XML error message in a third. Ifthe policy is one-problem-per-patch, then I agree it makes sense as asingle patch. (I'd still keep refactoring other string escapingroutines to use the driver as a separate patch, no?) > > *.> > I wouldn't apply patches to PO files unless they come from someone who> claims to be fluent. OK; I can drop them. > > --- /dev/null Mon Jun 6 11:06:27 2005> > +++ subversion/libsvn_subr/escape.c Fri Jun 3 19:16:09 2005> > New files need a copyright notice. Right. I'd noticed that, but since I don't know Collabnet's policyhere I didn't wantto presume. Is it acceptable to clone the copyright from an existingfile? Doesthat suffice to Collabnet as assignment of copyright? Do I have a pledge fromCollabnet that any code whose copyright is assigned in this way willremain freelyavailable in perpetuity? (By "freely available", I mean termssubstantially identicalto Subversion's current licensing; I'm not trying to start a skirmishover softwarelicensing.) > > > @@ -0,0 +1,58 @@> > +/*> > + * escape.c: common code for cleaning up unwanted bytes in strings> > + */> > +> > +#include "escape_impl.h"> > +> > +#define COPY_PREFIX \> > + if (c > base) { \> > + svn_stringbuf_appendbytes (out, base, c - base); \> > + base = c; \> > + }> > I don't think I'd have used a macro, it makes the calling code harder> to understand, but I'd prefer a macro with parameters and it needs> documentation. Hmm. The only purpose of the macro here is to make the calling code easierto understand; if it's not helping, I can just inline the prefix checkin the threeplaces where it's required. > > +> > *),> > mapper should be a typedef. OK. > > + void *mapper_baton,> > + apr_pool_t *pool)> > +{> > + unsigned char *base, *c;> > + svn_stringbuf_t *out;> > +> > + if (outsbuf == NULL || *outsbuf == NULL) {> > + out = svn_stringbuf_create ("", pool);> > It should probably be created with at least len bytes capacity. Sounds fair. Would it be better to create it with as many bytes asthe input string length? > > + if (outsbuf)> > + *outsbuf = out;> > + }> > + else> > + out = *outsbuf;> > +> > + for (c = base = (unsigned char *) instr; c < instr + len; ) {> > Try to avoid casting away const. Agreed. I think I tripped a compiler warning trying to leave everything (instr, base, and c) const, though it should be legal -- I'll haveanother look. > > + apr_size_t count = isok ? isok[*c] : 0;> > + if (count == 0) {> > + COPY_PREFIX;> > + count = mapper ? mapper (&c, instr, len, out, mapper_baton, pool) : 255;> > + }> > + if (count == 255) {> > + char esc[6];> > +> > + COPY_PREFIX;> > + sprintf (esc,"?\\%03u",*c);> > + svn_stringbuf_appendcstr (out, esc);> > + c++;> > + base = c;> > + }> > + else c += count;> > + }> > + COPY_PREFIX;> > + return out;> > This function doesn't follow the project's indentation/formatting> guidelines. Got it. I reflexively cuddled the braces, but can push them down. > > +}> > +> >> >> > ### Comments are pretty self-explanatory.> > ### Docs are as doxygen; will need to be downgraded to plaintext since it's> > ### an internal header.> > ### As noted above, it makes sense to combine this with utf_impl.h.> > If it makes sense to combine it then why is it separate? I didn't want on a first outing to be rearranging existing files. Atsome level, this isalso a question of philosophy. It seems to me that one "private"header file foreverything local to libsvn_subr is the best way to go. If, however,project policyis one header per topic, then I'd use a separate escape.h. > > --- /dev/null Mon Jun 6 11:35:47 2005> > +++ subversion/libsvn_subr/escape_impl.h Thu Jun 2 18:44:05 2005> > Missing copyright notice.> > > @@ -0,0 +1,147 @@> > +/*> > + * escape_impl.h : private header for string escaping function.> > + */> > +> > +> > +> > +#ifndef SVN_LIBSVN_SUBR_ESCAPE_IMPL_H> > +#define SVN_LIBSVN_SUBR_ESCAPE_IMPL_H> > +> > +> > +#include "svn_pools.h"> > +#include "svn_string.h"> > +> > +#ifdef __cplusplus> > +extern "C" {> > +#endif /* __cplusplus */> > +> > +> > +/** Scan @a instr of length @a len bytes, copying to stringbuf @a *outsbuf,> > + * escaping bytes as indicated by the lookup array @a isok and the mapping> > + * function @a mapper. Memory is allocated from @a pool. You may provide> > + * any extra information needed by @a mapper in @a mapper_baton.> > + * Returns a pointer to the stringbuf containing the escaped string.> > + *> > + * If @a outsbuf or *outsbuf is NULL, a new stringbuf is created; its> > address is> > The patch is mangled here, and in several o ther places. Right. When Michael Thelen pointed that out, I resent it as an attachment;I hope that came through in better shape. > > + * placed in @a outsbuf unless that argument is NULL.> > + * If @a isok is NULL, then @a mapper is used exclusively.> > + * If @ mapper is NULL, then a single character is escaped every time @a mapper> > + * would have been called.> > + *> > + * This is designed to be the common pathway for various string "escaping"> > + * functions across subversion. The basic approach is to scan> > + * the input and decide whether each byte is OK as it stands, needs to be> > + * "escaped" using subversion's "?\uuu" default format, or needs to be> > + * transformed in some other way. The decision is made using a two step> > + * process, which is designed to handle the simple cases quickly but allow> > + * for more complex mappings. Since the typical string will (we hope)> > + * comprise mostly simple cases, this shouldn't require much code> > + * complexity or loss of efficiency. The two steps used are:> > The question is do we need such a general function? One the one hand> it looks like it is more complicated th en necessary just to handle> escaping, on the other hand it doesn't look general enough to be used,> say, as a routine to separate multibyte codepoints.> > I think I'd prefer a simpler solution, although I haven't tried to> implement one. Perhaps based on the existing validation functions?> (Although since I wrote those I might be biased.) I think this is the crucial philosophic question. For the specificproblem with whichI started (given a string from I-know-not-where, and hence inI-know-not-what-encoding,emit a string that is valid UTF8), it's certainly easy enough to writeanother escaping function, similar to the ones already present. Thatapproach seemed less than ideal to me, since it would introduce yetanother function to be maintained in parallel, etc. The conceptualsimplicity may make it most useful in the end, though. I tried to write the driver to be general enough to handle differenttasks, but reasonably fast for common tasks. Since most of thestrings appearing within Subversion are UTF8, I chose a byte-orientedscreen. (I'm also trying to take advantage of the nice property ofUTF8 that one can identify multibyte codepoints from the first byte,iff the string is guaranteed to be valid.) I'm using the mappingfunction as a "trap door" to permit more involved (includingmultibyte) testing; in this case, there's no benefit in speed, butperhaps a common point for the actual escaping of bytes. More generalsolutions (of which I could conceive) seemed to me to entail yet morecomplexity; I chose this level as the appropriate tradeoff. It maywell be that the core developers choose a different level. I can workwith that. > > + *> > + * 1. The value of a byte from the input string ("test byte") is used as an> > + * index into a (usually 256 byte) array passed in by the caller.> > + * - If the value of the appropriate array element is 0xff,> > + * then the test byte is escaped as a "?\uuu" string in the output.> > + * - If the value of the appropriate element is otherwise non-zero,> > + * that many bytes are copied verbatim from the input to the output.> > + * 2. If the array yields a 0 value, then a mapping function provided by> > + * the caller is used to allow for more complex evaluation. This function> > + * receives five arguments:> > Five? I see six in the code. Drat. I thought I'd added the baton to the docment. My omission. > > + * - a pointer to the pointer used by svn__do_char_escape() to> > + * mark the test byte in the input string> > + * - a pointer to the start of the input string> > + * - the length of the input string> > + * - a pointer to the output stringbuf> > + * - the ever-helpful pool.> > + * The mapping function may return a (positive) nonzero value,> > + * which is interpreted * as described in step 1 above, or zero,> > + * indicating that the test byte * should be ignored. In the latter> > + * case, this is generally because the * mapping function has done the> > + * necessary work itself; it's free to * modify the output stringbuf and> > + * adjust the pointer to the test byte * as it sees fit (within the> > + * bounds of the input string). At a minimum, * it should at least> > + * increment the pointer to the test byte before * returning 0, in order> > + * to avoid an infinite loop.> > Are the '*' characters within the comment lines just fallout from> paragraph filling? Yes. > > + */> > +> > *),> > Should be a typedef, the typedef should document the parameters. OK. > > + void *mapper_baton,> > + apr_pool_t *pool);> > +> > +> > +> > +/** Initializer for a basic screening matrix suitable for use with> > + * #svn_subr__escape_string to escape non-UTF-8 bytes.> > + * We provide this since "UTF-8-safety" is a common denominator for> > + * most string escaping in Subversion, so this matrix makes a good> > + * starting point for more involved schemes.> > + */> > +#define SVN_ESCAPE_UTF8_LEGAL, 255, 255, 255, 255, 255, 255, 255, 255, 255}> > That looks a bit like the svn_ctype stuff, but it's a separate> implementation. That makes me uneasy. Fair enough. The underlying problem is that different situationsrequire slightly different behaviors, of course. I'm not sure whetherthe best solution is to start with a single framework and tweak it atruntime as needed, or to set up the framework for each case separatelyin the interest of speed over space. > > +> > +/** Given pointer @a c into a string which ends at @a e, figure out> > + * whether (*c) starts a valid UTF-8 sequence, and if so, how many bytes> > + * it includes. Return 255 if it's not valid UTF-8.> > + * For a more detailed description of the encoding rules, see the UTF-8> > + * specification in section 3-9 of the Unicode standard 4.0 (e.g. at> > + *),> > + * with special attention to Table 3-6.> > + * This macro is also provided as a building block for mappers used by> > + * #svn_subr__escape_string that want to check for UTF-8-safety in> > + * addition to other tasks.> > + */> > +#define SVN_ESCAPE_UTF8_MAPPING(c,e) \> > + ( (c)[0] < 0x80 ? /* ASCII */ \> > + 1 : /* OK, 1 byte */ \> > + ( ( ((c)[0] > 0xc2 && (c)[0] < 0xdf) && /* 2-byte char */ \> > + ((c) + 1 <= (e)) && /* Got 2 bytes */ \> > + ((c)[1] >= 0x80 && (c)[1] <= 0xbf)) ? /* Byte 2 legal */ \> > + 2 : /* OK, 2 bytes */ \> > + ( ( ((c)[0] >= 0xe0 && (c)[0] <= 0xef) && /* 3 byte char */ \> > + ((c) + 2 <= (e)) && /* Got 3 bytes */ \> > + ((c)[1] >= 0x80 && (c)[1] <= 0xbf) && /* Basic byte 2 legal */ \> > + ((c)[2] >= 0x80 && (c)[2] <= 0xbf) && /* Basic byte 3 legal */ \> > + (!((c)[0] == 0xe0 && (c)[1] < 0xa0)) && /* 0xe0-0x[89]? illegal */\> > + (!((c)[0] == 0xed && (c)[1] > 0x9f)) ) ? /* 0xed-0x[ab]? illegal */\> > + 3 : /* OK, 3 bytes */ \> > + ( ( ((c)[0] >= 0xf0 && (c)[0] <= 0xf4) && /* 4 byte char */ \> > + ((c) + 3 <= (e)) && /* Got 4 bytes */ \> > + ((c)[1] >= 0x80 && (c)[1] <= 0xbf) && /* Basic byte 2 legal */ \> > + ((c)[2] >= 0x80 && (c)[2] <= 0xbf) && /* Basic byte 3 legal */ \> > + ((c)[3] >= 0x80 && (c)[3] <= 0xbf) && /* Basic byte 4 legal */ \> > + (!((c)[0] == 0xf0 && (c)[1] < 0x90)) && /* 0xf0-0x8? illegal */ \> > + (!((c)[0] == 0xf4 && (c)[1] > 0x8f)) ) ? /* 0xf4-0x[9ab]? illegal*/\> > + 4 : /* OK, 4 bytes */ \> > + 255)))) /* Illegal; escape it */> > utf_validate.c already implements the UTF-8 encoding rules. There is> obviously some duplication of the algorithm, that makes me uneasy. True enough. I'd done this for speed and compactness relative to thestate machine in utf_validate.c, expecting comments from the coredevelopers about whether it was a net win. For the specific case ofescaping invalid UTF8, I could write a mapper to call out tosvn_utf__last_valid per character. > Those big macros also make me uneasy. Is the concern that it'll choke come compilers, or that they're hardto maintain?This may just be a stylistic issue: I tend to favor macros forrepeated tasks, on the theory that as long as one debugs it carefully,the benefit of inlining is worth the effort of coding. > > +> > +> > +#ifdef __cplusplus> > +}> > +#endif /* __cplusplus */> > +> > +#endif /* SVN_LIBSVN_SUBR_ESCAPE_IMPL_H */> >> >> > ### Function names can be revised to fit convention, of course.> > ### svn_utf__cstring_escape_utf8_fuzzy serves as an example of a benefit of> > ### returning the resultant stringbuf from svn_subr__escape_string both in a> > ### parameter and as the function's return value. If the sense is that> > it'll be a cause> > ### of debugging headaches, or that it's cortrary to subversion> > culture to code public> > ### functions as macros, it's easy enough to code this as a function,> > and to make> > ### svn_subr__escape_string return void (or less likely svn_error_t,> > if it got pickier> > ### about params.)> > --- subversion/libsvn_subr/utf_impl.h (revision 14986)> > +++ subversion/libsvn_subr/utf_impl.h (working copy)> > @@ -24,12 +24,33 @@> >> > #include <apr_pools.h>> > #include "svn_types.h"> > +#include "svn_string.h"> >> > #ifdef __cplusplus> > extern "C" {> > #endif /* __cplusplus */> >> >> > +/** Replace any non-UTF-8 characters in @a len byte long string @a src with> > + * escaped representations, placing the result in a stringbuf pointed to by> > + * @a *dest, which will be created if necessary. Memory is allocated from> > How does the user know what "if necessary" means? Fair point. s/necessary/NULL/. > > + * @a pool as needed. Returns a pointer to the stringbuf containing the result> > + * (identical to @a *dest, but facilitates chaining calls).> > + */> > +svn_stringbuf_t *> > +svn_utf__stringbuf_escape_utf8_fuzzy (svn_stringbuf_t **dest,> > + const unsigned char *src,> > + apr_size_t len,> > + apr_pool_t *pool);> > +> > +/** Replace any non-UTF-8 characters in @a len byte long string @a src with> > + * escaped representations. Memory is allocated from @a pool as needed.> > + * Returns a pointer to the resulting string.> > + */> > +#define svn_utf__cstring_escape_utf8_fuzzy(src,len,pool) \> > + (svn_utf__stringbuf_escape_utf8_fuzzy(NULL,(src),(len),(pool)))->data> > Is there any need for this to be a macro? A real function would be> less "surprising" and that makes it better unless you can justify the> macro. OK. I think this is another instance of the style point noted above. > > +> > +> > const char *svn_utf__cstring_from_utf8_fuzzy (const char *src,> > apr_pool_t *pool,> > svn_error_t *(*convert_from_utf8)> >> >> >> > ### There're other places that could be rewritten in terms of the new escaping> > ### functions, but I hope the two given here serve as an example of how it might> > ### be done.> > Complete patches are better than incomplete ones. Sure. My standard for 'complete' might have been a bit low: I wasaiming for a unit that compiled, passed tests, and illustrated theissues I believed were still open. I thought I might wait to seewhether the notion of a "common escaping routine" would carry beforeextending it to XML, other UTF8 tasks, etc. > > ### The rename to ascii_fuzzy_escape is to distinguish it from the new functions> > ### that escape only illegal UTF-8 sequences.> > --- subversion/libsvn_subr/utf.c (revision 14986)> > +++ subversion/libsvn_subr/utf.c (working copy)> > @@ -30,6 +30,7 @@> > #include "svn_pools.h"> > #include "svn_ctype.h"> > #include "svn_utf.h"> > +#include "escape_impl.h"> > #include "utf_impl.h"> > #include "svn_private_config.h"> >> > @@ -323,53 +324,19 @@> > /* Copy LEN bytes of SRC, converting non-ASCII and zero bytes to ?\nnn> > sequences, allocating the result in POOL. */> > static const char *> > -fuzzy_escape (const char *src, apr_size_t len, apr_pool_t *pool)> > +ascii_fuzzy_escape (const char *src, apr_size_t len, apr_pool_t *pool)> > {> > - const char *src_orig = src, *src_end = src + len;> > - apr_size_t new_len = 0;> > - char *new;> > - const char *new_orig;> > + static unsigned char asciinonul[256];> > + svn_stringbuf_t *result = NULL;> >> > - /* First count how big a dest str ing we'll need. */> > - while (src < src_end)> > - {> > - if (! svn_ctype_isascii (*src) || *src == '\0')> > - new_len += 5; /* 5 slots, for "?\XXX" */> > - else> > - new_len += 1; /* one slot for the 7-bit char */> > + if (!asciinonul[0]) {> > + asciinonul[0] = 255; /* NUL's not allowed */> > That doesn't look threadsafe. It is safe, though it doesn't prevent duplication. > > + memset(asciinonul + 1, 1, 127); /* Other regular ASCII OK */> > + memset(asciinonul + 128, 255, 128); /* High half not allowed */> > + }> >> > - src++;> > - }> > -> > - /* Allocate that amount. */> > - new = apr_palloc (pool, new_len + 1);> > -> > - new_orig = new;> > -> > - /* And fill it up. */> > - while (src_orig < src_end)> > - {> > - if (! svn_ctype_isascii (*src_orig) || src_orig == '\0')> > - {> > - /* This is the same format as svn_xml_fuzzy_escape uses, but that> > - function escapes different characters. Please keep in sync!> > - ### If we add another fuzzy escape somewhere, we should abstract> > - ### this out to a common function. */> > - sprintf (new, "?\\%03u", (unsigned char) *src_orig);> > - new += 5;> > - }> > - else> > - {> > - *new = *src_orig;> > - new += 1;> > - }> > -> > - src_orig++;> > - }> > -> > - *new = '\0';> > -> > - return new_orig;> > + svn_subr__escape_string(&result, src, len, asciinonul, NULL, NULL, pool);> > + return result->data;> > }> >> > /* Convert SRC_LENGTH bytes of SRC_DATA in NODE->handle, store the result> > @@ -448,7 +415,7 @@> > errstr = apr_psprintf> > (pool, _("Can't convert string from '%s' to '%s':"),> > node->frompage, node->topage);> > - err = svn_error_create (apr_err, NULL, fuzzy_escape (src_data,> > + err = svn_error_create (apr_err, NULL, ascii_fuzzy_escape (src_data,> > src_length, pool));> > return svn_error_create (apr_err, err, errstr);> > }> > @@ -564,7 +531,28 @@> > return SVN_NO_ERROR;> > }> >> > +static unsigned char> > +utf8_escape_mapper (unsigned char **targ, const unsigned char *start,> > + apr_size_t len, const svn_stringbuf_t *dest,> > + void *baton, apr_pool_t *pool)> > New functions need documentation. OK. I can add a docment above it. > > +{> > + const unsigned char *end = start + len;> > + return SVN_ESCAPE_UTF8_MAPPING(*targ, end);> > +}> >> > +svn_stringbuf_t *> > +svn_utf__stringbuf_escape_utf8_fuzzy (svn_stringbuf_t **dest,> > + const unsigned char *src,> > + apr_size_t len,> > + apr_pool_t *pool)> > +{> > + static unsigned char utf8screen[256] = SVN_ESCAPE_UTF8_LEGAL_ARRAY;> > +> > + return svn_subr__escape_string(dest, src, len,> > + utf8screen, utf8_escape_mapper, NULL,> > + pool);> > +}> > +> > svn_error_t *> > svn_utf_stringbuf_to_utf8 (svn_stringbuf_t **dest,> > const svn_stringbuf_t *src,> > @@ -787,7 +775,7 @@> > const char *escaped, *converted;> > svn_error_t *err;> >> > - escaped = fuzzy_escape (src, strlen (src), pool);> > + escaped = ascii_fuzzy_escape (src, strlen (src), pool);> >> > /* Okay, now we have a *new* UTF-8 s tring, one that's guaranteed to> > contain only 7-bit bytes :-). Recode to native... */> >> > ### With code comes testing.> > ### Note: Contains 8-bit chars, and also uses convention that cc will treat> > ### "foo" "bar" as "foobar". Both can be avoided if useful for> > finicky compilers.> >> > --- subversion/tests/libsvn_subr/utf-test.c (revision 14986)> > +++ subversion/tests/libsvn_subr/utf-test.c (working copy)> > @@ -17,6 +17,7 @@> > */> >> > #include "../svn_test.h"> > +#include "../../include/svn_utf.h"> > Does a plain "svn_utf.h" work? Hmm. I don't know; I'd followed the convention of the prior line. I'll give it a try. > > #include "../../libsvn_subr/utf_impl.h"> >> > /* Random number seed. Yes, it's global, just pretend you can't see it. */> > @@ -222,6 +223,84 @@> > return SVN_NO_ERROR;> > }> >> > +static svn_error_t *> > +utf_escape (const char **msg,> > + svn_boolean_t msg_only,> > + svn_test_opts_t *opts,> > + apr_pool_t *pool)> > +{> > + char in[] = { 'A', 'S', 'C', 'I', 'I', /* All printable */> > + 'R', 'E', 'T', '\n', 'N', /* Newline */> > + 'B', 'E', 'L', 0x07, '!', /* Control char */> > + 0xd2, 0xa6, 'O', 'K', '2', /* 2-byte char, valid */> > + 0xc0, 0xc3, 'N', 'O', '2', /* 2-byte char, invalid 1st */> > + 0x82, 0xc3, 'N', 'O', '2', /* 2-byte char, invalid 2nd */> > + 0xe4, 0x87, 0xa0, 'O', 'K', /* 3-byte char, valid */> > + 0xe2, 0xff, 0xba, 'N', 'O', /*3-byte char, invalid 2nd */> > + 0xe0, 0x87, 0xa0, 'N', 'O', /*3-byte char, invalid 2nd */> > + 0xed, 0 xa5, 0xa0, 'N', 'O', /*3-byte char, invalid 2nd */> > + 0xe4, 0x87, 0xc0, 'N', 'O', /* 3-byte char, invalid 3rd */> > + 0xf2, 0x87, 0xa0, 0xb5, 'Y', /* 4-byte char, valid */> > + 0xf2, 0xd2, 0xa0, 0xb5, 'Y', /* 4-byte char, invalid 2nd */> > + 0xf0, 0x87, 0xa0, 0xb5, 'N', /* 4-byte char, invalid 2nd */> > + 0xf4, 0x97, 0xa0, 0xb5, 'N', /* 4-byte char, invalid 2nd */> > + 0xf2, 0x87, 0xc3, 0xb5, 'N', /* 4-byte char, invalid 3rd */> > + 0xf2, 0x87, 0xa0, 0xd5, 'N', /* 4-byte char, invalid 4th */> > + 0x00 };> > + const unsigned char *legalresult => > + "ASCIIRET\nNBEL!$-1(c)�OK2?\\192?\\195NO2?\\130?\\195NO2"-A> > + "3$-3�0䇠1OK?\\226?\\255?\\186NO?\\224?\\135?\\160NO?\\237?\\165?\\160NO"-A> > + "?\\228?\\135?\\192NO3$-3�01Y?\\242$-1(c)�?\\181Y?\\240?\\135?\\160"-A> > + "?\\181N?\\244?\\151?\\160?\\181N?\\242?\\135�N?\\242?\\135?\\160"> > + "?\\213N";> > I don't like the embedded control characters in the source code, could> you generate them at runtime? Sure. I could also just C-escape them like the high-half bytes above. > > + const unsigned char *asciiresult => > + "ASCIIRET\nNBEL\x07!?\\210?\\166OK2?\\192?\\195NO2?\\130?\\195NO2"> > + "?\\228?\\135?\\160OK?\\226?\\255?\\186NO?\\224?\\135?\\160NO"> > + "?\\237?\\165?\\160NO?\\228?\\135?\\192NO?\\242?\\135?\\160?\\181Y"> > + "?\\242?\\210?\\160?\\181Y?\\240?\\135?\\160?\\181N"> > + "?\\244?\\151?\\160?\\181N?\\242?\\135?\\195?\\181N"> > + "?\\242?\\135?\\160?\\213N";> > + const unsigned char *asciified;> > + apr_size_t legalresult_len = 213; /* == strlen(legalresult) iff no NULs */> > + int i = 0;> > + svn_stringbuf_t *escaped = NULL;> > +> > + *msg = "test utf string escaping";> > +> > + if (msg_only)> > + return SVN_NO_ERROR;> > +> > + if (svn_utf__stringbuf_escape_utf8_fuzzy> > + (&escaped, in, sizeof in - 1, pool) != escaped)> > I prefer () with sizeof. OK. > > + return svn_error_createf> > + (SVN_ERR_TEST_FAILED, NULL, "UTF-8 escape test %d failed", i);> > + i++;> > + if (escaped->len != legalresult_len)> > + return svn_error_createf> > + (SVN_ERR_TEST_FAILED, NULL, "UTF-8 escape test %d failed", i);> > + i++;> > + if (memcmp(escaped->data, legalresult, legalresult_len))> > + return svn_error_createf> > + (SVN_ERR_TEST_FAILED, NULL, "UTF-8 escape test %d failed", i);> > + i++;> > + if (memcmp(escaped->data, legalresult, legalresult_len))> > A duplicate of the one above? Er, yes. I must've mispasted. Sorry. > > + return svn_error_createf> > + (SVN_ERR_TEST_FAILED, NULL, "UTF-8 escape test %d failed", i);> > + i++;> > +> > + asciified = svn_utf_cstring_from_utf8_fuzzy(in, pool);> > + if (strlen(asciified) != strlen(asciiresult))> > + return svn_error_createf> > + (SVN_ERR_TEST_FAILED, NULL, "UTF-8 escape test %d failed", i);> > + i++;> > + if (strcmp(asciified, asciiresult))> > + return svn_error_createf> > + (SVN_ERR_TEST_FAILED, NULL, "UTF-8 escape test %d failed", i);> > + i++;> > +> > + return SVN_NO_ERROR;> > +}> > +> >> > /* The test table. */> >> > @@ -230,5 +309,6 @@> > SVN_TEST_NULL,> > SVN_TEST_PASS (utf_validate),> > SVN_TEST_PASS (utf_validate2),> > + SVN_TEST_PASS (utf_escape),> > SVN_TEST_NULL> > };> >> >> >> > ### The original point of this thread.> > ### This patch will apply with an offset, since I've cut out sections which> > ### reimplement XML escaping in terms of the svn_subr__escape_string.> > --- subversion/libsvn_subr/xml.c ( revision 14986)> > +++ subversion/libsvn_subr/xml.c (working copy)> > @@ -395,11 +413,22 @@> > /* If expat choked internally, return its error. */> > if (! success)> > {> > + svn_stringbuf_t *sanitized;> > + unsigned char *end;> > +> > + svn_utf__stringbuf_escape_utf8_fuzzy(&sanitized, buf,> > + (len > 240 ? 240 : len),> > + svn_parser->pool);> > + end = sanitized->data +> > + (sanitized->len > 240 ? 240 : sanitized->len);> > 240? A magic number with no explanation. I can add a comment. It was discussed earlier in this thread, but Iagree it'd helpin the future to add something to the code. > > + while (*end > 0x80 && *end < 0xc0 &&> > + (char *) end > sanitized->data) end--;> > I think that could generate a huge error message in pathological> cases. Up to 240 characters, yes. My intent was to include enough to givethe reader a fair idea where the error occurred, without subjectingthem to screensful of junk. Setting the cutoff at 240 seemed to give reasonable chunk of XML element for the cases I checked (errors in theentries file.) > > err = svn_error_createf> > (SVN_ERR_XML_MALFORMED, NULL,> > - _("Malformed XML: %s at line %d"),> > + _("Malformed XML: %s at line %d; XML starts:\n%.*s"),> > XML_ErrorString (XML_GetErrorCode (svn_parser->parser)),> > - XML_GetCurrentLineNumber (svn_parser->parser));> > + XML_GetCurrentLineNumber (svn_parser->parser),> > + (char *) end - sanitized->data + 1, sanitized->data);> >> > /* Kill all parsers and return the expat error */> > svn_xml_free_parser (svn_parser);> >> > Overall this patch makes me uneasy, I'd prefer a patch that builds on> our existing ctype and/or utf-8 code. However as I haven't tried to> implement such a patch I don't know whether our existing code is> a suitable starting point. That's certainly possible. I'd actually started out that way, andswitched to the above when it seemed I was about to add ya parallelroutine to escape a string in a slightly different way. Thanks for the review. If you or one of the other core developerscould answer some of the "philosophic" questions above, I can revisethe patch appropriately. -- Regards,Charles BaileyLists: bailey _dot_ charles _at_ gmail _dot_ comOther: bailey _at_ newman _dot_ upenn _dot_ edu Received on Tue Jun 21 20:41:38 2005 This is an archived mail posted to the Subversion Dev mailing list.
https://svn.haxx.se/dev/archive-2005-06/0720.shtml
CC-MAIN-2020-05
refinedweb
4,058
63.39
table of contents other versions - buster 4.16-2 - buster-backports 5.04-1~bpo10+1 - testing 5.10-1 - unstable 5.10-1 other sections NAME¶sleep - sleep for a specified number of seconds SYNOPSIS¶ #include <unistd.h> unsigned int sleep(unsigned int seconds); DESCRIPTION¶sleep() causes the calling thread to sleep either until the number of real-time seconds specified in seconds have elapsed or until a signal arrives which is not ignored. RETURN VALUE¶Zero if the requested time has elapsed, or the number of seconds left to sleep, if the call was interrupted by a signal handler. ATTRIBUTES¶For an explanation of the terms used in this section, see attributes(7). CONFORMING TO¶POSIX.1-2001, POSIX.1-2008. NOTES¶On Linux, sleep() is implemented via nanosleep(2). See the nanosleep(2) man page for a discussion of the clock used. Portability notes¶.
https://manpages.debian.org/testing/manpages-dev/sleep.3.en.html
CC-MAIN-2021-21
refinedweb
147
59.3
hello readers… what a ride this has been in the past few weeks trying to install, set up and code this demo on the HANA Express edition HXE 2 SP3. I kid you not, this was a challenge to get it to run.. i questioned Einstein’s definition of insanity a few times… I will say though, after repeating the same steps a few times I think the VM and the HXE image warmed up to me and let me play with it for some time. In any case, I wanted to share with you all, my experience while doing these well documented steps by SAP, reading blogs and even watching a few you tube videos from Thomas. My environment is a lenovo T480 laptop w 1 TB of SSD and 32 GB Ram. I downloaded the VMWare workstation player 14 and assigned 16, 20, then 24 GBs RAM into this VM to test out the various ways this installation may perform. With out a doubt, please assign as much as possible to your VM so your HXE can run smoother. My first few steps were following the Getting started pdf provided with the installation of HXE. Please read a few steps ahead and remember to take notes in case u need to refresh what you just did. Keep in mind you will be updating your windows host file (if you are using a windows OS like me) and you must open it as admin in order to be able to save your host file. This installation took me a few hours – again, due to the several attempts while increasing the RAM dedicated to the VM. Then, It was time to explore and see what all new features, cockpits and screens were there in this new HANA version and SP. For anyone out there needing to know the ports were some of these sap provided apps are, please visit the XS control API – it has links directly to the various tools you will be using during the development and administration of your HXE such as the HRTT, webide, among others. Secondly, I wanted to start with the easiest of modules, IMO, the sapui5 module, so let;s see what i got… - create a new project – right click on the workspace – new – project from template (provide your new project with a name) - create the sapui5 html module – provide it with a name, a namespace, then click on finish. - once you have the ui5 module in your project, open it, go to the view1 and add some content to make sure this is working. similarly you may open the i18n file, modify the text and save it. - Build your module - Run your module Keep in mind that adding any module (ui5, db, nodejs) to the project will automatically get added to the mta.yaml file. if you are new to XSA, you may wonder, what is the mta.yaml file? The mta.yaml file stands for Multi-Targeted applications. it is a structured file but be very careful if you decide to open it with the code editor… instead you should try using the graphical editor. save your patience for later. once it finishes running, you will get a url so that you can see the output of your ui5 application For additional content and how to build a sapui5 app / custom fiori application using the web ide, see my developing UI5 apps via the SCP the difference there will be only the web ide… and maybe a couple configuration files, otherwise, see how to add ui5 controls, etc. If your application doesn’t run due to an issue related to the application is not set to a space, right click on the project, go to project settings and select your SPACE (development in my case) Now, Back in the cockpit – we can see the app has started inside the ‘development’ space. you may click on your app name to see the logs – Notice application routes: This is the entry point of the application and we can set up to have a route base app. The next step is to add the UAA-service… (optional or can do later) *the issues I saw here is that the SPACE (development) had not been set up… if this is the case with you, please right click on the project… go to Project Settings and click on SPACE and select your space from the dropdown menu. ** the space may not be populated the first time you visit your project… try to save click away and then come back and your space should now show. - To create the UAA-service Back in the cockpit… Go to your cockpit – HANA Express Organization and select your SPACE (development) Then go to Service Marketplace and select Authorization & Trust Management Once in there click on the Instances button on the left nav And click on New Instance to create your new instance … after you are done, it should look like below. If you are unable to follow the navigation, you can also look at the breadcrumb section on top of the page to go back/forth on your nav. If you added the UAA-service.. .and your mta.yaml file has the correct references.. please build.. and run your app.. if everything works properly.. you should get prompted to log in when you visit your app again… the issues I faced while creating a ui5 module: - Did not select the SPACE this project belong to - Made modifications to my code and saved it… but I didn’t re-biuld it - did not know really how to navigate to the cockpit to find a way to create a service instance. - Understand what the UAA-service is and does. * this is probably the main item in this step. I recommend reading about the entire XSA architecture – Thomas J has a few good blogs out there on the topic. This simple blog pretty much wraps my part 1 of the learning I had while installing HXE 2 SP3…. my next step : What if now we want to create a DB module to consume it in the application.. please let me know if you ran into separate issues – maybe I can help you. thank you again for your time XSA: doing the same thing over and over again and expecting different results if di-core hasn’t crashed. too funny My environment is MacOS High Sierra, with 1TB disk (not SSD) and 32GB RAM. Using VMware Fusion 8.5. I am finding it very difficult to get a reliable setup. I am wondering if I should try Docker instead. hi David, i am on windows 10 and i can tell you that the initial set up took me at least 5 times.. it is difficult.. I do not know if it is related to the installation itself or any other configuration not really specified by SAP. I also run other software such as Visual Studio ’17 community and I see no problem with other software… only w the HXE… maybe SAP can provide additional details. In my current exercise, I am trying to extend the CDS context by adding additional entities, associations, etc.. i am able to build and run… except for when i try to build the nodejs module.. now i am getting some gateway timeout which i am currently researching – i have re-deploy the di-enablement service, and re-build my modules but i am unable to run it at this time…. dear @SAP, help
https://blogs.sap.com/2018/09/19/my-experiences-installing-and-trying-out-hxe-2-sp3-part1/
CC-MAIN-2020-40
refinedweb
1,254
73.21
Feature Request: Stamp Duty In the UK, a 0.5% stamp duty applies to stock purchases but not sales. This means for the UK market, I will be looking to create a commission scheme which has a mix of COMM_FIXEDand COM_PERCfor buying and only a COMM_FIXEDwhen selling. In other words my broker has a flat fee but I need to take into account the profit killing stamp duty :) Not sure how many markets this applies to but it would be useful enhancement for the UK. - backtrader administrators last edited by There seems to be some misunderstanding here: COMM_FIXEDis not a flat-fee commission, but represents a fixed value per item bought (shares, futures, options, ...) To put the stamp-duty in place (including a flat-fee) see Docs - User Defined Commissions You need to override def _getcommission(self, size, price, pseudoexec): '''Calculates the commission of an operation at a given price pseudoexec: if True the operation has not yet been executed ''' Where sizewill be > 0if the operation is a buyand will be < 0if you are using sell Ok - I see... I will take a look. Thanks for the pointer! - backtrader administrators last edited by You would obviously return a fixed amount regardless of size(hence flat) plus the 0.5%stamp duty if size > 0 I only just got around to looking at this again. If anyone else, has the same requirement, the code I developed is as follows: class stampDutyCommisionScheme(bt.CommInfoBase): ''' This commission scheme uses a fixed commission and stamp duty for share purchases. Share sales are subject only to the fixed commission. The scheme is intended for trading UK equities on the main market. ''' params = ( ('stamp_duty', 0.005), ('commission', 5), ('stocklike', True), ('commtype', bt.CommInfoBase.COMM_FIXED), ) def _getcommission(self, size, price, pseudoexec): ''' If size is greater than 0, this indicates a long / buying of shares. If size is less than 0, it idicates a short / selling of shares. ''' if size > 0: return self.p.commission + (size * price * self.p.stamp_duty) elif size < 0: return self.p.commission else: return 0 #just in case for some reason the size is 0. Cheers
https://community.backtrader.com/topic/446/feature-request-stamp-duty/5
CC-MAIN-2022-40
refinedweb
353
56.15
As we know BizTalk project is often a mission critical as middleware it always come in between different application and the BizTalk piece get down it lead to halt other system.So, it is a article about automation test of BizTalk using BizUnit4.0 BizUnit 4.0 - now available. This is a major release with significant changes making it much easier to create coded tests and also XAML tests. This new version 4.0 (BizUnit.TestSteps) don’t have support to MQSC by default, you need to add reference to BizUnit.MQSeriesSteps BUT It accept only XML base config not from C# code and BizUnit.MQSeriesSteps also been deprecated.So, I created new Test steps in BizUnit.TestSteps. This article cover Background The adoption of an automated testing strategy is fundamental in reducing the risk associated with software development projects, it is key to ensuring that you deliver high quality software. Often, the overhead associated with developing automated tests is seen as excessive and a reason to not adopt automated testing.. Using the code This will be a project that we test using BizUnit. So we need to create this project first. This will be going to be a very simple BizTalk project just enough to fulfill our immediate need. Here we brow the scenario of insurance company where we receiving basic enquire about policy. The policy details are stored into Mainframe system and we pass data to MF thru MQ. (Guys please don’t think about standardization of scenario. I know in real life this would be HIPPA) I am attaching this BizTalk project with this article, so you can directly deploy it for test. Artifact of project: This structure of input message. Where we are getting policy holder ID, name, address and employer Id. <ns0:Ind xmlns: <ID>ABC009</ID> <FName>Himanshu</FName> <LName>Thawait</LName> <Address>KCMO</Address> <EmployerID>iGate01</EmployerID> </ns0:Ind> This structure of MF input message. Which need Group policy number and member number to pull all the details Will be the out put of BizTalk project A very simple map, which mapping incoming message’s ID filed to MF schema’s MemNum filed and EmployeeID to GrpPolicyNum filed by appending the “GRP:” using String Concatenate functoid Here is our orchestration which have one receive port that will use to pickup the message (input xml file) from folder and give it to Rec shape then it’s make event log entry by using Expression share and orchestration variable (Trace Log) and construct the message for MF using above Map. It again make event log about completion of process and send this message to send port (H1_MQ_SEN) which is banded with MQSC. Her is one catch : MQSC wont run on 64 bit BizTalk Host Instance so, you need to create 32 bit Host Instance for you Orchestration and Port which connect to MQSC Project is ready to deploy, assign the snk and give the name to Application, set configurations and server and deploy.Details about deployment: () Before going into detail I wan to highlight about pre requisite #2(Have Idea about BizUnit). I am assuming by this point you have downloaded and installed the BizUnit 4.0 and also unzipped it’s source code “BizUnit.Source” @C:\Program Files\BizUnit\BizUnit 4.0\Source (default location). I am NOT attaching the entire BizUnit.TestStep in this article. You need to add following class in you existing BizUnit.TestStep. Now it’s time to add new TestSteps for MQSC The code is pretty self explanatory. However it’s good to explain it little bit. The heard of this class is TestStepBase all the BizUnit test steps must be derive from this make this class to be a BizUnit TestStep. It has two abstract methods that we need to implement here. It has a constructor where we unitize the Sub Step. The subStep is “The list of sub-steps to be executed by the test step, there maybe zero or more sub-steps. Each sub-step is called, with the data being assed from one to the next typically.” Execute(Context context): Where we actual logic and code is written for MQSC read. Executes the test steps logic.Parameters: context: The test context being used in the current TestCase Validate (Context context): You can write validation logic here like check for some required properties and there valid values. This will run when test steps is being execute. ///Himanshu Thawait ///Test step to work with MQSC. using amqmdnet.dll and MQSeriesHelper.cs using System; using System.Collections.ObjectModel; using System.IO; using BizUnit.TestSteps.Common; using BizUnit.Xaml; namespace BizUnit.TestSteps.MQSCStepHT { public class MQSCGetStepHT : TestStepBase { private string message; /// <summary> /// Que Manager details server (port) /// </summary> public string queueManager { get; set; } /// <summary> /// Query name /// </summary> public string queue { get; set; } /// <summary> /// Wait time out /// </summary> public int waitTimeout { get; set; } /// <summary> /// Used for Sub steps /// </summary> public MQSCGetStepHT() { SubSteps = new Collection<SubStepBase>(); } /// <summary> /// Read the data from MQ /// </summary> /// <param name="context"></param> public override void Execute(Context context) { context.LogInfo("Reading queue: {0}, search queueManager: {1}", queue, queueManager); message = MQSeriesHelper.ReadMessage(queueManager, queue, waitTimeout, context); context.LogData("MQSeries output message:", message); // SubSteps // Check it against the validate steps to see if it matches one of them foreach (var subStep in SubSteps) { try { Stream fileData = StreamHelper.LoadMemoryStream(message); // Try the validation and catch the exception fileData = subStep.Execute(fileData, context); } catch (Exception ex) { context.LogException(ex); throw; } } } /// <summary> /// Validation on MQ and Data data /// </summary> /// <param name="context"></param> public override void Validate(Context context) { if (string.IsNullOrEmpty(queueManager)) { throw new StepValidationException("queueManager may not be null or empty", this); } if (string.IsNullOrEmpty(queue)) { throw new StepValidationException("queue may not be null or empty", this); } } } } Finally we are all set to write test case for MQSC. This test project will going to use the same file drop location, Schema and XML file of BizTalk MQSC project (obviously as we going to test that project). We have two test cases on this class. One is for copying the xml file in BizTalk pickup location using BizUnit.TestSteps.File (out of the box test step) and other is to read data from MQSC using our BizUnit.TestSteps.MQSCStepHT. RunTest() is a key here, what ever code you have written (in MQSCStepHT) is not going to actually execute until this RunTest() get called. So, if you going to debug the code then wait this function get call then your breakpoint will hit (in MQSCStepHT) To know about the BizUnit Test steps read: “BizUnit Getting Started Guide.pdf” Which come with BizUnit 4.0 install. Default location C:\Program Files\BizUnit\BizUnit 4.0 Here is the complete code: using BizUnit.TestSteps.Common; using BizUnit.TestSteps.DataLoaders.File; using BizUnit.TestSteps.File; using BizUnit.TestSteps.MQSCStepHT; using BizUnit.TestSteps.ValidationSteps.Xml; using BizUnit.Xaml; using Microsoft.VisualStudio.TestTools.UnitTesting; namespace HT1.TestData { [TestClass] public class HT1_MQ { [TestMethod] public void MQ_TC1_CopyFileToProcess() { //1. Creat test case var TC1 = new TestCase { Name = "H31 TC2", Category = "In Run", Preconditions = "H31 TC1" }; //2.0 Prepration for **Execution Step**- Copy file to IN folder var createStep = new CreateStep(); createStep.CreationPath = @"\\FileServer\Share\HT1\IN_MQ\IN_MQ.xml"; //2.3 Source File var dataLoder = new FileDataLoader { FilePath = @"\\FileServer\Share\HT1\IN_MQ.xml" }; createStep.DataSource = dataLoder; //3 Execution Step- Copy file TC1.ExecutionSteps.Add(createStep); /(); } [TestMethod] public void MQ_TC2_Read() { //1. Create Test case var TC1 = new TestCase { Name = "H31.MQ.TC1", Category = "IN run", Purpose = "Verify MQData" }; //2.0 Prepration for **Execution Step**- Setup MQSC setting var mqscStep = new MQSCGetStepHT(); mqscStep.queueManager = "MQSCMger"; mqscStep.queue="MQSC.TEST.IN"; //2.1 Validate XML again Schems var xmlValidationStep = new XmlValidationStep(); var schemaDefination = new SchemaDefinition { XmlSchemaPath = @"\\FileServer\Share\HT1\SchOUT.xsd", XmlSchemaNameSpace = "" }; //2.2 Adding schemt to validate xmlValidationStep.XmlSchemas.Add(schemaDefination); //2.3 Validate XML data Using XPath var xpathRecCount = new XPathDefinition { Description = "Checking the GrpPolicyNum", XPath = "/*[local-name()='Member' and namespace-uri()='']/*[local-name()='GrpPolicyNum' and namespace-uri()='']", Value = "GRP:iGate01" }; //3 Execution Step- check MQ data xmlValidationStep.XPathValidations.Add(xpathRecCount); //3.1 sub step mqscStep.SubSteps.Add(xmlValidationStep); //3.2 Main step TC1.ExecutionSteps.Add(mqscStep); /(); } } } Has some extra thing on Step 2 (2.1, 2.2, 2.3) that is for validation of message format and data in it. BizUnit test case will run just like any other test case. Jusr select your test case, right click and hit Run As you can see you can select more then one test case to run. To specify the permanent order of test case execution you can add “Order test” and here can see the list of test case and select them in desire order. Then instead of select individual test case you can simply select and run this Order test file. Things to remember: If you want to debug the code and hits the break point then right click on test case and select “Debug Selection” When you run the test case “Test Result” tab will come in form in bottom of you VS. Here you can see the test case status like pass, failed, Abraded and running. To see in details right click on test result and select “View Test Result Details” Here is how test details looks like, and let see how to red it: IT has two parts: Each test run will store the complete details in “.trx” file which is “Visual Studio Test Results File” format @ folder “TestResults” Name of file has user who run this test case plus computer name along with date time stamp. Finally it’s complete hope it will help you. I have written test steps for REStfull service and for Oracle as well, will create similar article for that too. BizUnit4.0 : BizUnit File step and I also like the article style : This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) News on the future of C# as a language
http://www.codeproject.com/Articles/550527/BizUnit-4-0-TestStep-for-MQSC-Ibm-Websphere-Messag
CC-MAIN-2014-10
refinedweb
1,658
57.06
Code. Collaborate. Organize. No Limits. Try it Today. In my previous article, I introduced dynamic types, some possible uses for them, and then a high level walkthrough of how you might go about implementing a solution using dynamic types. That was all nice and fluffy, but now, let’s get down to an actual code example of implementing and calling dynamic types. But first, for those that haven’t, or don’t want to read through the first article, here is a brief overview of what a dynamic type is. Dynamic types are types, or classes, that are generated at runtime from within a program. When an application starts, you’ll have at least one AppDomain running. In order to add dynamic types to your AppDomain, you need to first create and add a dynamic assembly to your AppDomain. A dynamic assembly is an assembly that is created, and then added to an AppDomain at runtime. It is usually not saved to a file, but exists solely in memory. Once this is in place, and after a few more steps that I’ll cover, you can use the Reflection.Emit classes to create dynamic types. Reflection.Emit So, what uses are dynamic types? First off, they are just plain cool (we are developers, we don’t need a better reason). I mean, come on? Emitting IL into memory at runtime to create your very own custom class! That’s just sweet. But seriously, the useful thing about dynamic types is that you can make your program evaluate the state of data that you may not know about until runtime in order to create a class that is optimized for the situation at hand. The challenging part about dynamic types is that you can’t just dump C# code into your dynamic assembly and have the C# compiler compile it to IL. That would just be way too easy, and the Reflection.Emit team at Microsoft wanted you to have to work for your dynamic types. You have to use the classes in Reflection.Emit to define and generate type, method, constructor, and property definitions, and then insert or ‘emit’ IL opcodes into these definitions. Sound fun yet? Every now and then, I run into a common problem when inheriting an application from another developer or dev team. The application uses DataSets to retrieve data from some database, but the developer used integer ordinals to pull data out of the DataRow, instead of string ordinals. DataSet DataRow //Using integer ordinals: foreach (DataRow row in dataTable.Rows) { Customer c = new Customer(); c.Address = row[0].ToString(); c.City = row[1].ToString(); c.CompanyName = row[2].ToString(); c.ContactName = row[3].ToString(); c.ContactTitle = row[4].ToString(); c.Country = row[5].ToString(); c.CustomerId = row[6].ToString(); customers.Add(c); } //Using string ordinals: foreach (DataRow row in dataTable.Rows) { Customer c = new Customer(); c.Address = row["Address"].ToString(); c.City = row["City"].ToString(); c.CompanyName = row["CompanyName"].ToString(); c.ContactName = row["ContactName"].ToString(); c.ContactTitle = row["ContactTitle"].ToString(); c.Country = row["Country"].ToString(); c.CustomerId = row["CustomerID"].ToString(); customers.Add(c); } Any performance minded developer will quickly jump on this and state that using integer ordinals is faster than using string ordinals, and I completely agree with this. Just to demonstrate what the performance difference between the two are, I ran a quick performance measurement test using Nick Wienholt’s Performance Measurement Framework (*See note at the end of the article about the Performance Measurement Framework). The string ordinal test had a Normalized Test Duration (NTD from now on) of 4.87 compared to the integer ordinal test, meaning it took almost 3 times as long to execute than using integer ordinal. When building a performance critical application which has a high user load, that little bit of time difference might be unacceptable, especially when using integer ordinals can give you an easy performance improvement. But, there is a maintenance problem with integer ordinals that has bitten me in the bum too many times. What happens if your DBA decides to redesign the table structure and adds a new column, not at the end of the table, but somewhere in the middle? What if the table was totally restructured with a whole new order to the columns? And, what if you, the developer, were not informed about this? Most likely, your app will crash because it is trying to cast a SQL data type into a non-matching .NET data type. Or even worse, your app doesn’t crash, but keeps chugging right along, but now with corrupt data. Believe it or not, this happens every now and then (at least it has to me). Applications are more susceptible these days due to the increasing use of web services maintained either by a third party vendor or another development team. This situation recently prompted me to write a utility class that would give me the speed of integer ordinals, but with the maintainability of string ordinals. To solve this simple problem, I came up with the following class: public class DataRowAdapter { private static bool isInitialized = false; private static int[] rows = null; public static void Initialize(DataSet ds) { if (isInitialized) return; rows = new int[ds.Tables[0].Columns.Count]; rows[0] = ds.Tables[0].Columns["Address"].Ordinal; rows[1] = ds.Tables[0].Columns["City"].Ordinal; rows[2] = ds.Tables[0].Columns["CompanyName"].Ordinal; rows[3] = ds.Tables[0].Columns["ContactName"].Ordinal; . .//pull the rest of the ordinal values by column name . isInitialized = true; } //static properties for returning integer ordinal public static int Address { get {return rows[0];} } public static int City { get {return rows[1];} } public static int CompanyName { get {return rows[2];} } public static int ContactName { get {return rows[3];} } } The purpose of this class is fairly evident. You pass in the DataSet into the static Initialize() method; this goes through each column in the DataTable and stores off the integer ordinal into an integer array. Then, I have a static property defined to pass back the integer ordinal for that column. Shown below is the code that uses this class to retrieve data from a DataRow. Initialize() DataTable DataRowAdapter.Initialize(dataSet); foreach (DataRow row in dataSet.Tables[0].Rows) { Customer c = new Customer(); c.Address = row[DataRowAdapter.Address].ToString(); c.City = row[DataRowAdapter.City].ToString(); c.CompanyName = row[DataRowAdapter.CompanyName].ToString(); c.ContactName = row[DataRowAdapter.ContactName].ToString(); . . customers.Add(c); } This is all fairly straightforward. The DataRowAdapter acts as an integer ordinal retrieval tool, so now, your code can pull from a DataRow in a pseudo-string ordinal way, but behind the scenes, it’s still accessing based on the integer index of the column. And, if your DBA ever decides to change around the order of the columns, you won’t have to update your data access code. DataRowAdapter To see if the DataRowAdapter actually helped with performance, I ran a performance test comparing this method to using a straight integer ordinal. Accessing data from a DataRow with the DataRowAdapter came up with an NTD of 1.04, just 4% slower. Not bad compared to using string ordinals, which was almost 300% slower! This works great, lasts long time. But, as I started using this class design for more and more DataTables, I realized that it was getting to be a pain to maintain. I had to create a new class with hard coded static properties for each DataTable column signature. After about 15 different classes, it starts to get on one’s nerves. Enter the Reflection.Emit namespace. The Reflection.Emit namespace has a bunch of classes whose primary job is to dynamically create assemblies and types at runtime, meaning while the application is running. Why is this important? Because with Reflection.Emit, you can now dynamically generate a DataRowAdapter class per DataTable at runtime, instead of hard coding a bunch of very specialized static classes. In theory, you should just pass in a DataSet or DataTable into a factory class, and the factory class should generate a new DataRowAdapter class based on the column structure of the DataTable. And, once the factory has generated a new DataRowAdapter, it won’t have to generate it again because it’ll be already loaded into the AppDomain. Pretty handy, eh? The down side of using Reflection.Emit (there’s always a downside, right?) is that you can’t just stuff a string variable full of C# code and then compile it on the fly (actually with the System.CodeDom namespace and the CSharpCodeProvider class, you could do this, but this would have to run through the C# compiler and then the JIT compiler, which would be a lot slower). With Reflection.Emit, you create a new assembly in memory and then emit IL opcodes directly into the assembly. The up side is you don’t have to run through the C# compiler because the code you are emitting is IL. The down side is you have to understand IL. But, there are ways to make that easier, which I’ll cover in a bit. System.CodeDom CSharpCodeProvider There is another problem with Reflection.Emit. You don’t have an API to program against. Think about it, the class that you are going to generate at runtime doesn’t exist at design time. So, how do you call it? Ahh, the power of interfaces. So, the first step is to figure out what the public interface for the dynamic type will be. This is a fairly simple example, so the interface should also be fairly simple. So, after pondering long and hard about this, I came up with the following interface: public interface IDataRowAdapter { int GetOrdinal(string colName); } Since I don’t know the column names that will be needed, the interface can’t very well have hard coded static properties, can it? Instead, I decided on a single method called GetOrdinal() that takes a string value of the column name and returns the integer ordinal for that column. GetOrdinal() All dynamic types generated by the factory class will inherit from this interface, and this interface will also be the return type for the factory class. Your program will call the factory class, passing in a DataTable, and get an IDataRowAdapter in return. It can then call IDataRowAdapter.GetOrdinal() to get the integer ordinal for a column name. IDataRowAdapter IDataRowAdapter.GetOrdinal() There is another way to go about this. Instead of defining a common interface that all dynamic types can inherit from, you could use late binding and access the dynamic type’s methods and properties via Reflection. But, this should be considered “bad form” for several reasons. First, the interface is a contract with the type. It guarantees that the method will exist and it can be called. If you use late bound method calls via Reflection, there is no guarantee that the method exists for that type. You could misspell the method name, and the compiler wouldn’t give you a warning. You wouldn’t know there was a problem until the application was running and tried to invoke the method, at which point a Reflection exception would be thrown. The second problem with late bound method calls is that Reflection is just plain slow. Any performance benefit that you gained by using dynamic types would most likely be lost because of doing so. When prototyping the new DataRowAdapter with the interface IDataRowAdapter in C#, I tried several different methods to determine what the integer ordinal value is from a string column name. Because I’m trying to find a fast, dynamic way to get data from a DataRow, I created a performance test for each method and measured the results. The following is a list of the different methods and their results compared against using an integer ordinal. switch Enum.Parse(columnName) if Dictionary<string, int> The results were a bit surprising. The slowest was switching on an enumeration. This was because Enum.Parse() uses Reflection to create an instance of the column enumeration. This method had an NTD of 10.74 compared to the integer ordinal. Enum.Parse() The next slowest was the string switch statement, coming in with an NTD of 3.71 compared to the integer ordinal. Not all that much faster than using a straight string ordinal. Next was using the generic Dictionary with an NTD of 3.4 compared to the integer ordinal. Still, not that great. Dictionary And, the winner was multiple “if” statements, with an NTD of 2.6 compared to the integer ordinal. Now, it's still a far amount slower than a straight integer ordinal lookup, but it's much faster than a string lookup, and you still get the column name safety. The actual implementation that I’ve decided to go with is shown below in C#. This is what I’ll base the IL off of when I use Reflection.Emit to generate the type. public class DataRowAdapter : IDataRowAdapter { public int GetOrdinal(string colName) { if (colName == "Address") return 0; if (colName == "City") return 1; if (colName == "CompanyName") return 2; if (colName == "ContactName") return 3; . . throw new ApplicationException("Column not found"); } } Now, can you see the benefit to dynamic types? You would never hard code something like this at design time, because you don’t know for sure that the City column is really at ordinal position 1. But with Reflection.Emit, you do know because you generate the class based on the evidence determined at runtime. The next thing to do is come up with a design for the class that will generate the dynamic types and return them to the caller. For the dynamic type generator, I decided that I’d go with the Factory pattern. This fits the needs of this solution perfectly since the caller can’t explicitly call the constructor of the dynamic type. Also, I want to hide the implementation details of the dynamic type away from the caller. So, the public API for the Factory class will be this: public class DataRowAdapterFactory { public static IDataRowAdapter CreateDataRowAdapter(DataSet ds, string tableName) { //method implementation } public static IDataRowAdapter CreateDataRowAdapter(DataTable dt) { //method implementation } //private factory methods } Because each dynamic type will be hard coded for a specific list of columns, each DataTable passed into the factory with a different TableName value will cause the factory to generate a new type. If a DataTable is passed into the Factory a second time, the dynamic type for that DataTable has already been generated and the Factory will only have to return an instance of the already generated type. TableName (Note, some of the Reflection.Emit function descriptions may be repeated from my last article, but I wanted the reader to be able to follow along even if they haven’t read Part 1.) Before getting down to writing the functionality of the GetOrdinal() method, I wanted to cover how to set up an assembly to hold the new type. Since Reflection.Emit cannot add a new type to an existing assembly, you have to generate a brand new one in memory. To do this, you use the AssemblyBuilder class. AssemblyBuilder private static AssemblyBuilder asmBuilder = null; private static ModuleBuilder modBuilder = null; private static void GenerateAssemblyAndModule() { if (asmBuilder == null) { AssemblyName assemblyName = new AssemblyName(); assemblyName.Name = "DynamicDataRowAdapter"; for the static Thread.GetDomain() method. This AppDomain instance will allow you to create the new dynamic assembly with the DefineDynamicAssembly() method. Just pass in the AssemblyName class and an enumeration value for AssemblyBuilderAccess. In this instance, I don’t want to save this assembly to file, but if I did, I could use AssemblyBuilderAccess.Save or AssemblyBuilderAccess.RunAndSave. AssemblyName AppDomain Thread.GetDomain() DefineDynamicAssembly() AssemblyBuilderAccess AssemblyBuilderAccess.Save AssemblyBuilderAccess.RunAndSave Luckily, once an AssemblyBuilder has been created, the same instance can be used over and over to create all the new dynamic types, so it only needs to be created once. Once the AssemblyBuilder has been created, a ModuleBuilder instance also needs to be created, which will be used later to create a new dynamic type. Use the AssemblyBuilder.DefineDynamicModule() method to create a new instance. If you want, you could create as many modules for your dynamic assembly as you want to, but for this case, only one is needed. ModuleBuilder AssemblyBuilder.DefineDynamicModule() Now, on to creating the dynamic type:(IDataRowAdapter)}); return typeBuilder; } A dynamic type is created via the TypeBuilder class. You create an instance of a TypeBuilder class by calling the ModuleBuilder.DefineType() method, passing in the class name value is an array of interfaces that the dynamic type will inherit from. This is very important in this solution, so I pass in a value of IDataRowAdapter. TypeBuilder ModuleBuilder.DefineType() TypeAttributes Type System.Object One thing.Emit namespace. Can you guess how you would create a MethodBuilder, ConstructorBuilder, FieldBuilder, or a PropertyBuilder class? Through the TypeBuilder, of course! MethodBuilder ConstructorBuilder FieldBuilder PropertyBuilder I want to stop for a minute and talk about my design. The final prototype of the DataRowAdapter uses the string column name to determine which ordinal to return via multiple if statements. But now that the type is created at runtime, there is a faster way available. Comparing two integers is much faster than comparing two strings. So, how can you get an integer value from a string? Why string.GetHashCode(), of course! Now, before you start screaming that a hash code is not guaranteed to be unique for every possible string out there, let me explain. While I can’t say that every string will output a unique hash code value, I can say that there is a large possibility that it will be unique within a small list of strings, like a list of column names for a DataTable. string.GetHashCode() So, I created a method to check and see if all the hash codes for a DataTable are unique. If it finds that the column names are unique, then the dynamic type factory will output a switch statement to check for integer values. If it finds that they are not unique, then the dynamic type factory will output multiple if statements that check for string equality. I wanted to see how much of a difference using the column name’s hash code was to using a string comparison in order to justify the added complexity of the type factory. When I ran a performance test, I found that using the hash code gave me an NTD of 1.35 compared to a straight integer ordinal usage. Now granted, the original static DataRowAdapter had an NTD of 1.04, but I also had to maintain one class per DataTable, which if an application is quite large, can become very cumbersome. With the dynamic type used in this solution, there is no maintenance. And, very often, a maintenance benefit will trump a performance benefit, especially if the performance degradation isn’t too bad. Next, I ran a test to check how fast the DataRowAdapter would run if I were using string comparisons. These results here weren’t all that great. I came up with an NTD of 1.9, twice as slow as using straight integer ordinals, but still decently faster than using straight string ordinals. But, I wanted to keep this in the design because of the off chance that the hash code values for a list of column names are not unique. So, with this design, the majority of the time you’ll get the performance benefit of integer equality checks, and every now and then, the type will fall back onto string equality checks. Either way, both are faster than using straight string ordinals. Next is the heart of the dynamic type factory, creating the GetOrdinal() method. Up to now, I haven’t shown much work with IL, but now, we’re going to get down and dirty with Reflection.Emit. Below is the code for GetOrdinal: GetOrdinal private static void CreateGetOrdinal(TypeBuilder typeBuilder, DataTable dt) { int colIndex = 0; //create the needed type arrays Type[] oneStringArg = new Type[1] {typeof(string)}; Type[] twoStringArg = new Type[2] {typeof(string), typeof(string)}; Type[] threeStringArg = new Type[3] {typeof(string), typeof(string), typeof(string)}; //create needed method and contructor info objects ConstructorInfo appExceptionCtor = typeof(ApplicationException).GetConstructor(oneStringArg); MethodInfo getHashCode = typeof(string).GetMethod("GetHashCode"); MethodInfo stringConcat = typeof(string).GetMethod("Concat", threeStringArg); MethodInfo stringEquals = typeof(string).GetMethod("op_Equality", twoStringArg); //defind the method builder MethodBuilder method = typeBuilder.DefineMethod("GetOrdinal", MethodAttributes.Public | MethodAttributes.HideBySig | MethodAttributes.NewSlot | MethodAttributes.Virtual | MethodAttributes.Final, typeof(Int32), oneStringArg); //create IL Generator ILGenerator il = method.GetILGenerator(); //define return jump label System.Reflection.Emit.Label outLabel = il.DefineLabel(); //define return jump table used for the many if statements System.Reflection.Emit.Label[] jumpTable = new System.Reflection.Emit.Label[dt.Columns.Count]; if (AllUniqueHashValues(dt)) { //create the return int index value, and hash value LocalBuilder colRetIndex = il.DeclareLocal(typeof(Int32)); LocalBuilder parmHashValue = il.DeclareLocal(typeof(Int32)); il.Emit(OpCodes.Ldarg_1); il.Emit(OpCodes.Callvirt, getHashCode); il.Emit(OpCodes.Stloc_1); foreach (DataColumn col in dt.Columns) { //define label jumpTable[colIndex] = il.DefineLabel(); //compare the two hash codes il.Emit(OpCodes.Ldloc_1); il.Emit(OpCodes.Ldc_I4, col.ColumnName.GetHashCode()); il.Emit(OpCodes.Bne_Un,++; } } else { //create the return int index value, and hash value LocalBuilder colRetIndex = il.DeclareLocal(typeof(Int32)); foreach (DataColumn col in dt.Columns) { //define label jumpTable[colIndex] = il.DefineLabel(); //compare the two strings il.Emit(OpCodes.Ldarg_1); il.Emit(OpCodes.Ldstr, col.ColumnName); il.Emit(OpCodes.Call, stringEquals); il.Emit(OpCodes.Brfalse,++; } } //error handler if cant find column name il.Emit(OpCodes.Ldstr, "Column '"); il.Emit(OpCodes.Ldarg_1); il.Emit(OpCodes.Ldstr, "' not found"); il.Emit(OpCodes.Callvirt, stringConcat); il.Emit(OpCodes.Newobj, appExceptionCtor); il.Emit(OpCodes.Throw); //label for if user found column name il.MarkLabel(outLabel); //return ordinal for column il.Emit(OpCodes.Ldloc_0); il.Emit(OpCodes.Ret); } The first thing we have to do is set up a few items so they are ready to be used later on in the method. The GetOrdinal() method we’re going to emit will need to call four methods during its lifetime. They are String.GetHashCode(), String.Concat(), String.op_Equality(), and the constructor for ApplicationException. We’ll need to create MethodInfo and ConstructorInfo objects in order to emit the “call” or “callvirt” opcodes for these methods. String.GetHashCode() String.Concat() String.op_Equality() ApplicationException MethodInfo ConstructorInfo call callvirt The next step is to create a MethodBuilder instance from the TypeBuilder. Again, I just looked at the prototype I wrote in C# in ILDASM to figure out what MethodAttributes to use when defining my MethodBuilder. The TypeBuilder.DefineMethod() is very similar to DefineConstructor (described in the first article) except for one thing. DefineMethod() has one more argument for you to define the return type. If the method you are defining doesn’t have a return value, then just pass in null for this argument. MethodAttribute TypeBuilder.DefineMethod() DefineConstructor DefineMethod() null Next, an ILGenerator instance is needed, which is created almost exactly the same as with the MethodBuilder. ILGenerator Now, I need to define several Labels. A Label is used in Reflection.Emit to tell the CLR where to jump or branch off to. Branching is how if statements, switch statements, and loops are written in IL, and are very similar to goto statements. The first label I define will be used when the correct ordinal has been found and the process should jump to the end of the method so it can return. Then a Label array is created, one Label for each column name, which will be used for defining either the block of “if” statements for string comparisons or for the “switch” statement for comparing the column name’s hash value. Label In order to use a label, you must first define it with ILGenerator.DefineLabel(). Next, you pass it into the ILGenerator.Emit() method as the second argument, where the first argument is any one of the many branch opcodes (Brfalse, Brtrue, Be, Bge, Ble, Br, etc). This basically says “if x is (true, false, ==, >=, <=) then go to this label”. Finally, you must use the ILGenerator.MarkLabel() method, passing in the Label instance as the only argument. What this does is tell the process to jump to where the Label is marked when it hits the branch opcodes. For “if” statements, you will mark your Label below where your branch opcode is defined. For “loop” statements, you’ll most likely mark your Label above where your branch opcode is defined (hence, the loop). ILGenerator.DefineLabel() ILGenerator.Emit() Brfalse Brtrue Be Bge Ble Br ILGenerator.MarkLabel() So, how do I know what IL to emit for GetOrdinal? Same way I knew how to define the MethodBuilder. Code it up in C#, compile it, and take a look at the generated IL in ILDasm.exe. Once you have the IL from ILDASM, it’s just a simple, but tedious task of duplicating the IL with ILGenerator.Emit(Opcodes.*) statements. ILGenerator.Emit(Opcodes.*) I’m not going to go over every line of code in GetOrdinal(), since it should be fairly obvious if you look at the IL generated for the C# version of GetOrdinal(). Now that all the tools are in place to create a dynamic type, I’ve got one last area to cover: how the Factory class creates a new dynamic type and returns a new instance to the caller, and how the dynamic type can be used. Shown below is the basic structure of the Factory class. public static IDataRowAdapter CreateDataRowAdapter(DataTable dt) { return CreateDataRowAdapter(dt, true); } TypeBuilder typeBuilder = null; private static IDataRowAdapter CreateDataRowAdapter(DataTable dt, bool returnAdapter) { //return no adapter if no columns or no table name if (dt.Columns.Count == 0 || dt.TableName.Length == 0) return null; //check to see if type instance is already created if (adapters.ContainsKey(dt.TableName)) return (IDataRowAdapter)adapters[dt.TableName]; //Create assembly and module GenerateAssemblyAndModule(); //create new type for table name TypeBuilder typeBuilder = CreateType(modBuilder, "DataRowAdapter_" + dt.TableName.Replace(" ", "")); //create get ordinal CreateGetOrdinal(typeBuilder, dt); IDataRowAdapter dra = null; Type draType = typeBuilder.CreateType(); //assBuilder.Save(assBuilder.GetName().Name + ".dll"); //Create an instance of the DataRowAdapter IDataRowAdapter dra = (IDataRowAdapter) = Activator.CreateInstance(draType, true); //cache adapter instance adapters.Add(dt.TableName, dra); //if just initializing adapter, dont return instance if (!returnAdapter) return null; return dra; } The first thing the Factory does is check if the TypeBuilder class has been created yet. If it hasn’t, the factory calls the private methods I’ve already shown that creates the DynamicAssembly, DynamicModule, TypeBuilder, and the GetOrdinal() method for the dynamic type. Once these steps are complete, it uses the TypeBuilder.CreateType() method to return a Type instance for the DataRowAdapter. I then use Activator.CreateInstance using the generated type to actually create a working instance of the dynamic type. Yeah! DynamicAssembly DynamicModule TypeBuilder.CreateType() Activator.CreateInstance This is the moment of truth. Creating a new type and calling the constructor on the type will invoke the constructor that was built earlier. If any of the IL was emitted wrong, the CLR will throw an exception. If everything is fine, you’ll have a working copy of the DataRowAdapter. After the Factory has an instance of the DataRowAdapter, it will cast it down and return an IDataRowAdapter. Using a dynamic type is pretty simple. The main thing to remember is that you have to code against an interface, because at design time, the class doesn’t exist. Shown below is an example code that calls the DataRowAdapterFactory class and asks for a DataRowAdapter. The Factory returns an IDataRowAdapter instance, and you can then call GetOrdinal() and pass in the desired column name. The DataRowAdapter will figure out what integer ordinal is being requested and return it. This is shown below. DataRowAdapterFactory IDataRowAdapter dra = DataRowAdapterFactory.CreateDataRowAdapter(dataTable); foreach (DataRow row in data.Rows) { Customer c = new Customer(); c.Address = row[dra.GetOrdinal("Address")].ToString(); c.City = row[dra.GetOrdinal("City")].ToString(); c.CompanyName = row[dra.GetOrdinal("CompanyName")].ToString(); c.ContactName = row[dra.GetOrdinal("ContactName")].ToString(); . .//pull the rest of the values customers.Add(c); } Below are the class and sequence diagrams that illustrate how the Factory is used to create an IDataRowAdapter, and how the IDataRowAdapter is then used by the consumer. (This section is a bit of a repeat from my first article, but I included it for instructional purposes.) OK, so now, I’ve completed the dynamic type factory, I compile the solution in Visual Studio, and there are no errors. If I run it, it’ll work, right? Maybe. The downside of Reflection.Emit is that you can emit just about any combination of IL that you want. But, there is no design time compiler checking to see if what you wrote is valid IL. Sometimes, when you “bake” your type with TypeBuilder.CreateType(), it’ll throw an exception if there is something wrong, but only for certain problems. Sometimes, you won’t get an error until you actually try to call a method for the first time. Remember the JIT compiler? The JIT compiler won’t try to compile and verify your IL until the first time a method is called. So, it’s very much possible, and actually probable, that you won’t find out that your IL is invalid until you are actually running your application, the type has been generated, and you are calling the dynamic type for the first time. But, the CLR gives helpful error messages, right? Not likely. Usually, I get the ever helpful “Common Language Runtime detected an invalid program” exception. OK, so how do you tell if your dynamic type contains valid IL? PEVerify.exe to the rescue! PEVerify is a tool that comes with .NET that can inspect an assembly for valid IL code, structure, and metadata. But, in order to use PEVerify, you must save the dynamic assembly to a physical file (remember, up until now, the dynamic assembly only exists in memory). To create an actual file for the dynamic assembly, you’ll: AppDomain.DefineDynamicAssembly() AssemblyBuilderAccess.Run Type draType = typeBuilder.CreateType(); assBuilder.Save(assBuilder.GetName().Name + ".dll"); Now that that is in place, run the application and create the dynamic type. Once it has run, you should have a new DLL named “DynamicDataRowAdapter.dll” in the Debug folder for your solution (assuming you are using a Debug build). Open the .NET command prompt window and type “PEVerify <path to the assembly>\ DynamicDataRowAdapter.dll”,’t mean that the assembly won’t run. For example, when I first wrote the Factory class, I used the “callvirt” opcode when I called the static String.Equals() method. This caused PEVerify to output an error, but it still ran. This was an easy fix to call the static method with the “call” opcodes instead, and the next time I ran PEVerify, it found no errors. String.Equals() (static and sealed method calls are called with the “call” opcode instead of the “callvirt” opcode. This is because the “callvirt” opcode causes the CLR to check the inheritance chain of a type to figure out what function to actually call. Since static and sealed methods aren’t affected by inheritance, the CLR doesn’t have to do this check and can call the method directly with the “call” opcode.) static sealed One last thing, if you change your code to output a physical file, be sure to change it back to the way it was before. This is because once a dynamic assembly has been saved to file, it is locked down, so you won’t be able to add any new dynamic types after the first one has been generated and saved to file. This would be a problem in this situation because every time a new DataTable is passed into the factory, it will try to create a new dynamic type. But after the first type has been generated, if you save off the assembly to file, an exception will be thrown. Another way to double check your IL is to use the free tool Reflector (just Google it) to decompile the IL back into C# and take a look at what you emitted. So, this seems like a heck of a lot of complicated work for a little bit of a performance gain. Is it really worth it? Maybe not…probably not. It depends on the application you are writing. Some applications that I’ve written are server apps that are very heavily hit and need every little bit of performance gain it can get. The point of this article wasn’t to show off a brand new utility class, but to give you a taste of what you can accomplish with Reflection.Emit. There are so many possibilities. For instance, the next article I hope to write in this series will cover a simple Aspect Oriented Programming (AOP) framework totally done with dynamic types and Reflection.Emit. There is one dark side to dynamic types. They can be hard to maintain and debug. And, here lies a problem that I like to bring up in development. I call it the “Hit by a Bus” problem. IL is fairly complex. Not a lot of people out there know it or even want to know it. So, what happens if you are the only one on your dev team that knows anything about IL and you get hit by a bus? Who will maintain the solution? I’m not saying that you should forgo the benefits of dynamic types if they are warranted, but it’s always good to have at least two people who understand the complexities in a product. The second problem is debugging. Since you are dumping IL code into memory, it’s not all that easy to just create a break point in Visual Studio and step through your IL if you ever have to debug your dynamic type. There are alternatives and solutions, and I’ll be talking about them in a future article on dynamic types. In this article, I ran comparison performance measurements against accessing a DataRow with both string and integer ordinals. Just because an integer ordinal is four times faster than a string ordinal, doesn’t mean your web page will load four times as fast. Not even close! Pulling data out of a DataReader is just one small item a page might do. This is especially true if your page uses any kind of data binding. Data binding is very expensive, and its performance cost will most likely make your integer ordinal gain unnoticeable. So, don’t expect to see a visible performance gain on a one time page load. The only way you might see some benefit is if you put your web page under a load testing tool and measured the requests per second when being hit with a high number of concurrent users. DataReader All performance tests referenced in this article were done using Nick Wienholt’s Performance Measurement Framework. You can download the source code and an article describing how to use it, from here. All test results were calculated by executing the problem code block 5,000 times, with a time measurement taken for this duration. This is then repeated 10 times, giving 10 time measurements. These 10 measurements are then calculated together, giving various statistics, such as normalized value, median, mean, min and max, and standard deviation. The performance measurement framework does all this for you. You just write the problem code block and a bit of the framework plug-in setup, and then run the test. There are several other options to using Reflection.Emit. You could always build up a System.CodeDom object graph that represents the code you want to generate, and run it through a CodeDomProvider to create an assembly in memory, and then load the assembly into your AppDomain. Alternatively, you could also use the CodeDom classes to dump a string of C# code into a CodeSnippetExpression, and then run it through a CodeDomProvider. Both of these would work for you, but the whole point of creating dynamic types is for efficiencies. Both of these methods have to run through the C# compiler, and then the assembly has to be loaded into your AppDomain before they can be called by your application. These are extra steps that you can skip when creating dynamic types via Reflection.Emit. CodeDomProvider CodeSnippetExpression This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) if (number == 0) return "Zero"; else if (number == 1) return "One"; else if (number == 2) return "Two"; else if (number == 3) return "Three"; else if (number == 4) return "Four"; else if (number == 5) return "Five"; else if (number == 6) return "Six"; else if (number == 7) return "Seven"; else if (number == 8) return "Eight"; else if (number == 9) return "Nine"; if (number > 4 && number < 10) { if (number > 7) { if (number == 8) { return "Eight"; } else { return "Nine"; } } else { if (number == 5) { return "Five"; } else if (number == 6) { return "Six"; } else { return "Seven"; } } } else if (number >= 0) { if (number > 2) { if (number == 3) { return "Three"; } else { return "Four"; } } else { if (number == 0) { return "Zero"; } else if (number == 1) { return "One"; } else { return "Two"; } } } General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/13969/Introduction-to-Creating-Dynamic-Types-with-Reflec?msg=1472049
CC-MAIN-2014-23
refinedweb
6,221
55.54
I have a function as defined $ V (j, k) $ with two base cases with $ j, k $ and the recursive part has an additional variable $ q $ which it also uses. Also, $ 1 leq q leq j – 1 $, The recursive part has the form: $$ jk + V (q, k / 2) + V (j – q, k / 2) $$I am not allowed to use repeated substitution and I want to prove this by induction. I can not use the main clause because the recursive part is not in that form. Any ideas how I can solve it with given limitations? Tag: Proving real analysis – proving a supremum of a crowd The question: Find the supremum of the set $$ { { sqrt (4) {n ^ 4 + n ^ 3} -n: n in mathbb {N} }} $$ And then it tells us that we need to take large values of n to find an appropriate guess, to show that it is an upper bound, and then to prove that it is the smallest upper bound. I followed the question and found a suitable guess for s = 1/4, showing that this is a good upper limit. My problem is to prove that there is no lower limit. At this point, my work looks like I'm trying to prove by contradiction: Suppose h is another upper bound such that h <1/4. $$ { sqrt (4) {n ^ 4 + n ^ 3} -n <h} $$ $$ n ^ 4 + n ^ 3 <(h + n) ^ 4 $$ But after the expansion I can only quit $ n ^ 4 $ This gives me a lot of unknown powers and a really complicated solution that I can do by hand $$ n ^ 3 <h ^ 4 + 4h ^ 3n + 6h ^ 2n ^ 2 + 4hn ^ 3 $$ That said, I know that I have taken the wrong path, but I'm not sure which direction to prove it. I adapted a response from another book example, but that was only up to potency 2, so it was much easier to simplify this method. turing machines – Here's why it's wrong to stop proving the problem Instead of solving the halting problem, I will try to solve a less complicated problem in a similar way. Can we write a function that predicts whether two given numeric inputs are the same? I will not create such a function, but suppose such a function can exist and we will call it H. Now we have H, a function that works and solves our problem, but we write another function and call it H +, a function that negates the result of our perfectly functioning function. Pseudocode: def H(p1, p2): #perfectly working piece of code that will solve our problem # returns True if p1 == p2, else returns False def H+(p1, p2): return not H(p1, p2) Now if we have the code, we compare p1 = 1, p2 = 2. And if we use the function H +, why not, it's the same function as H, a function that we know works. H + negates only the results of H. The result of H + is true, how can it be? we know that 1 is not equal to 2, so here we have a paradox and prove that we can not write a function to predict whether two numeric values are the same. Now for the Halting problem, If I understand that correctly, the Halting problem has been proved in a similar way; There is a machine H that can predict if a problem is solvable. There is a larger machine that uses H, but negates the results named H +. When H + is then fed into H +, a paradox arises. Of course, H + will not work. We assume that machine H delivers the correct result. Why will H still work in the same way after changing and converting to H +? What happens if we feed H + into H, will we still have the same paradox? I do not think so. Proving $ ({L_A} ^ * L_B) ^ + = (L_A cup L_B) ^ * L_B $ I try to prove the following identity: $$ ({L_A} ^ * L_B) ^ + = (L_A cup L_B) ^ * L_B $$ This is clearly true, as both sides will agree exactly "any sequence of A and B that ends with B" : $$ B, AB, AAB, ABBABAB, … $$ I tried to think about the quantity properties and even tried to think about the NFA constructed by the two sides of the equation but so far without success. Does anyone know how to prove it? Hurricane Dorian is the second strongest storm in history proving that climate change is real. Will Donald Trump address that? – and one can say with absolute certainty that hurricanes and the earth are older than that, historically speaking. Since the reliable recording of tropical cyclone data in the North Atlantic in 1851 [1], there have been 1,574 systems with at least tropical storm intensity and 912 systems with at least hurricane intensity. https: //en.wikipedia.org/wiki/List_of_At … Documents proving that I have not used Facebook for social media communication I was accused of posting things on Facebook, and I have not had any discussions. Do I need a lawyer to obtain legal documents to obtain this information? Many thanks at.algebraic topology – Proving a Kan-like condition for functors to model categories? I tried to prove this version of the Kan condition for a project that I think about, and I'm pretty stuck. My experience of asking questions about MO in the past was great. I hope that the (higher) category theorists here can help me! Categories of Functors: To let $ cal {C} $ be a (closed) model category and let $ mathcal {S} $ be a finite pose. The category of functors $ ( mathcal {S}, mathcal {C}) $ can be permeated with that Reedy model structure (see (1)). A bunch of posets $ f: mathcal {S} to mathcal {T} $ induces a pullback functor $$ f ^ *: ( mathcal {T}, mathcal {C}) to ( mathcal {S}, mathcal {C}) $$ This functor fits into a jugular extension with the left Kan extension $ f _! $ (see Barwick (2)). $$ f _!: ( mathcal {S}, mathcal {C}) leftrightarrow ( mathcal {T}, mathcal {C}): f ^ * $$ Besides, if $ iota: mathcal {S} to mathcal {T} $ is the inclusion of a subheading that is closed downwards (i.e. $ s in mathcal {S} $ and $ s & # 39; precs $ implied $ s & # 39; in mathcal {S} $) then $$ iota ^ *: ( mathcal {T}, mathcal {C}) to ( mathcal {S}, mathcal {C}) $$ Receives cofibrations. Maybe for reasons of clarity I should mention that we see $ mathcal {S} $ as directed category, d. H. a Reedy category in the $ mathcal {S} _ + = mathcal {S} $ and $ mathcal {S} _- $ is the trivial subcategory for each object. Let me set a notation for a specific subcategory of the radio category $ ( mathcal {S}, mathcal {C}) $, I have not yet fully defined the definition, the conditions are mainly due to the situation in which I am. Definition 1: The category $ text {Ch} ( mathcal {S}, mathcal {C}) $ from $ mathcal {S} $Chains in $ mathcal {C} $ is the full subcategory of $ ( mathcal {S}, mathcal {C}) $ consisting of functors $ x: mathcal {S} to mathcal {C} $ so that - (on) $ x $ is a cofibrant diagram relative to the Reedy model structure. - (B) $ x_S to x_T $ is a quasi-isomorphism for everyone $ S to T $ in the $ mathcal {S} $, I may want to improve the above assumptions to prove the result you are looking for (sentence 1 below). - (A & # 39) $ x $ is cofibrant and fibrant with respect to the Reedy model structure. - (b & # 39;) $ x_S to x_T $ is a trivial cofibration for everyone $ S to T $ in the $ mathcal {S} $, I also like to use the hypothesis $ x $ is cofibrant in the projective model structure, which is only objectively kofibranz. Categories of layers: A nice class of posets arises from simple complexes. Definition 2: To let $ X $ be a simplicial complex. The Category of layers $ X mathcal {S} $ is the poset whose objects are simplicity $ S $ in the $ X $ and where there is a morphism $ S to T $ if $ S $ is included in $ T $, Clear the card $ X to X mathcal {S} $ is functorial. Any map of simple complexes $ f: X to Y $ induces a map of posets $ f: X mathcal {S} to X mathcal {T} $, Main question: The result I have tried to prove is the following horn filling feature. Sentence 1 (?): To let $ iota: lambda ^ {n, k} to delta ^ n $ Designate a standard inclusion of a horn $ Lambda ^ {n, k} $ in the $ n $-Simplex $ Delta ^ n $, and let $ iota: lambda ^ {n, k} mathcal {S} to delta n mathcal {S} $ also designate the induced functor for stratigraphy categories. Then the corresponding pullback functor $$ iota ^ *: text {Ch} ( Delta ^ n mathcal {S}, mathcal {C}) to text {Ch} ( Lambda ^ {n, k} mathcal {S} , mathcal {C}) $$ allows a section, i. H. a functor $ sigma: text {Ch} ( Lambda ^ {n, k} mathcal {S}, mathcal {C}) to text {Ch} ( Delta ^ n mathcal {S}, mathcal {C}) $ With $ iota circ sigma = text {Id} $, My main question is the following. Question: Is sentence 1 true? What if I make some of the possible changes to Definition 1 suggested above? Ideas for the proof: Here is a sketch of the proof I had in mind. You can extend a functor $ x in text {Ch} ( Lambda ^ {n, k} mathcal {S}, mathcal {C}) $ to a functor that $ bar {x} in ( delta ^ n mathcal {S}, mathcal {C}) $ by filling the two faces $ T_0, T_1 $ from $ Delta ^ n $ that is missing in $ Lambda ^ {n, k} $ with the Colimit $ text {colim} (x) $ and the inclusions $ S to T_i $ with the colimit maps $ x_S to x_ {T_i} $, A map $ x to y $ in the $ text {Ch} ( Lambda ^ {n, k} mathcal {S}, mathcal {C}) $ induces a card $ bar {x} to bar {y} $ in an obvious way, and this defines a functor $ sigma $ as in the sentence. We have to show that $ bar {x} $ has properties (a) and (b) from Definition 1. To show property (b) from Definition 1, we note that since the nerve of $ Lambda ^ {n, k} mathcal {S} $ is the barycentric subdivision of $ Lambda ^ {n, k} $ (contractible) and the structure maps are quasi-isomorphisms, the map $ x_S to text {colim} (x) $ is a quasi-isomorphism. Property (a) is the problem: The Colimit $ text {colim} (x) $ is cofibrant because the colimit is the left Kan extension of the pullback $ Lambda ^ {n, k} mathcal {S} to * $ and that is a left jaw next door. However, there seems to be no reason for the cofibrancy of the extended diagram. You could try cofibrant replacement, but this will ruin the property $ iota ^ * bar {x} = x $, I'm not sure if (a & # 39;) and / or (b & # 39;) help at all and move to the projective model structure (assuming that $ x $ Seems to ruin the property that the colimit will be cofibrant, which is bad. Anyway, I'm stuck here. One last remark is, if I only use the (pointwise) left Kan extension $ iota_! $ then, as far as I can tell, the quasi-isomorphism property (b) of Definition 1 is generally not satisfied. Many thanks: To read the long question and for any help or advice you may have! Proving $ binom {n + m} {r} = sum_ {i = 0} ^ {r} binom {n} {i} binom {m} {r – i} $ To prove $$ binom {n + m} {r} = sum_ {i = 0} ^ {r} binom {n} {i} binom {m} {r – i}, $$ I have shown that equality applies to all $ n, $ $ m = 0, 1, $ and all $ r <n + m $ simply fix it $ n $ and $ r $ and insert $ 0.1 $ to the $ m. $ Then I go on $ m $ (and further $ m $ just). But I'm not completely confident because I see two placeholders, $ n $ and $ m. $ Is this the case when a double induction is required (first)? $ m $ and then on $ n $) Consider all fixated $ n, r geq 0 $ and the following two cases (I know that only one case is needed to complete this inductive proof). CASE 1 begin {align} binom {n + 0} {r} & = sum_ {i = 0} ^ {r} binom {n} {i} binom {0} {r – i} \ & = binom {n} {0} binom {0} {r} + binom {n} {1} binom {0} {r-1} + cdots + binom {n} {r} binom {0} {0} \ & = 0 + 0 + cdots + binom {n} {r} \ & = binom {n} {r} end CASE 2 begin {align} binom {n + 1} {r} & = sum_ {i = 0} ^ {r} binom {n} {i} binom {1} {r – i} \ & = binom {n} {0} binom {0} {r} + binom {n} {1} binom {0} {r-1} + cdots + binom {n} {r-1} binom {1} { r – (r-1)} + binom {n} {r} binom {1} {r – r} \ & = 0 + 0 + cdots + binom {n} {r-1} + binom {n} {r} \ & = binom {n} {r-1} + binom {n} {r} end INDUCTION Suppose it is true $ m leq k. $ Now think $$ binom {n + (k + 1)} {r}. $$ Pascal's identity follows $$ binom {n + (k + 1)} {r} = binom {n + k} {r} + binom {n + k} {r-1} $$ And, begin {align} binom {n + k} {r} + binom {n + k} {r-1} & = sum_ {i = 0} ^ {r} binom {n} {i} binom {k} { r – i} + sum_ {i = 0} ^ {r-1} binom {n} {i} binom {k} {r – 1 – i} \ & = binom {n} {r} + sum_ {i = 0} ^ {r-1} binom {n} {i} binom {k} {r – i} + sum_ {i = 0} ^ {r-1} binom {n } {i} binom {k} {r – 1 – i} \ & = binom {n} {r} + sum_ {i = 0} ^ {r-1} binom {n} {i} bigg[binom{k}{r – i} + binom{k}{r – 1 – i}bigg] \ & = binom {n} {r} + sum_ {i = 0} ^ {r-1} binom {n} {i} binom {k + 1} {ri} \ & = sum_ {i = 0} ^ {r} binom {n} {i} binom {k + 1} {ri} end Therefore, equality applies to $ m = k + 1 $ Since equality is for $ m = 0, 1, $ and that if equality holds $ m = k, $ then it applies to $ m = k + 1, $ It follows that equality holds $ forall m in mathbb {N}. $ CNN's Jim Acosta has mocked him for accidentally proving that the boundary walls are working? Forgive me, but unfortunately you have not received a PhD in Border Safety from Harvard. I did my doctorate in Baylor. However, if I have a plumber problem, I go to a plumber. If I have problems with the car, I go to a car mechanic. If I ask myself what would be helpful for border security, I go to Border Patrol and ASK YOU. Well, Sally, why do not you tell me about the training in border security that YOU have to oppose? , Machine Learning – Square Core Proving We have the given square kernel. K (x, y) = (xTy) ^ 2 φ ([x1, x2]) = {x1x1, x1x2, x2x1, x2x2} Show that K (x, y) = φ (x) T ((y) for arbitrary vectors of length n. I can show that this works for a two-dimensional vector, but I'm confused, as proven for an n-dimensional vector.
https://proxies-free.com/tag/proving/
CC-MAIN-2019-47
refinedweb
2,406
62.41
In this codelab you'll learn about state and how it can be used and manipulated by Jetpack Compose. Before we dive in, it's useful to define what exactly state is. At its core, state in an application is any value that can change over time. This is a very broad definition, and encompases everything from a Room database to a variable on a class. All Android applications display state to the user. A few examples of state in Android applications: - A Snackbar that shows when a network connection can't be established - A blog post and associated comments - Ripple animations on buttons that play when a user clicks - Stickers that a user can draw on top of an image In this codelab you will explore how to use and think about state when using Jetpack Compose. To do this, we will build a TODO application. At the end of this codelab you'll have built a stateful UI that displays an interactive, editable, TODO list. In the next section you'll learn about Unidirectional Data Flow – a design pattern that is core to understanding how to display and manage state when using Compose. What you'll learn - What is unidirectional data flow - How to think about state and events in a UI - How to use Architecture Component's ViewModeland LiveDatain Compose to manage state - How Compose uses state to draw a screen - When to move state to a caller - How to use internal state in Compose - How to use State<T>to integrate state with Compose What you'll need - The latest Android Studio 4.2 - Knowledge of Kotlin - Consider taking the Jetpack Compose basics codelab before this codelab - Basic understanding of Compose (such as the @Composableannotation) - Basic familiarity with Compose layouts (e.g. Row and Column) - Basic familiarity with modifiers (e.g. Modifier.padding) - Basic understanding of Architecture Component's ViewModeland LiveData What you'll build - An interactive TODO app using unidirectional data flow in compose To download the sample app, you can either: ... or clone the GitHub repository from the command line by using the following command: git clone cd android-compose-codelabs/StateCodelab At any time you can run either module in Android Studio by changing the run configuration in the toolbar. Open Project into Android Studio - On the Welcome to Android Studio window select Open an Existing Project - Select the folder [Download Location]/StateCodelab(tip: make sure you select the StateCodelabdirectory containing build.gradle) - When Android Studio has imported the project, test that you can run the startand finishedmodules. Exploring the start code The start code contains four packages: examples– Example Activities for exploring the concepts of unidirectional data flow. You will not need to edit this package. ui– Contains themes auto-generated by Android Studio when starting a new compose project. You will not need to edit this package. util– Contains helper code for the project. You will not need to edit this package. todo– The package containing the code for the Todo screen we are building. You will be making modifications to this package. This codelab will focus on the files in the todo package. In the start module there are several files to become familiar with. Provided files in todo package Data.kt– Data structures used to represent a TodoItem TodoComponents.kt– Reusable composables that you will use to build the Todo screen. You will not need to edit this file. Files you will edit in todo package TodoActivity.kt– Android Activity that will use Compose to draw a Todo screen after you're done with this codelab. TodoViewModel.kt– A ViewModelthat you will integrate with Compose to build the Todo screen. You will connect it to Compose and extend it to add more features as you complete this codelab. TodoScreen.kt– Compose implementation of a Todo screen that you will build during this codelab. The UI update loop Before we get to our TODO app, let's explore the concepts of unidirectional data flow using the Android view system. What causes state to update? In the introduction we talked about state as any value that changes over time. This is only part of the story of state in an Android application. In Android apps, state is updated in response to events. Events are inputs generated from outside our application, such as the user tapping on a button calling an OnClickListener, a EditText calling afterTextChanged, or an accelerometer sending a new value. In all Android apps, there's a core UI update loop that goes like this: - Event – An event is generated by the user or another part of the program - Update State – An event handler changes the state that is used by the UI - Display State – The UI is updated to display the new state Managing state in Compose is all about understanding how state and events interact with each other. Unstructured state Before we get to Compose, let's explore events and state in the Android view system. As a "Hello, World" of state we are going to build a hello world Activity that allows the user to input their name. One way we could write this is to have the event callback directly set the state in the TextView, and the code, using ViewBinding, might look something like this: HelloCodelabActivity**.kt** class HelloCodelabActivity : AppCompatActivity() { private lateinit var binding: ActivityHelloCodelabBinding var name = "" override fun onCreate(savedInstanceState: Bundle?) { /* ... */ binding.textInput.doAfterTextChanged {text -> name = text.toString() updateHello() } } private fun updateHello() { binding.helloText.text = "Hello, $name" } } Code like this does work, and for a small example like this it's fine. However, it tends to become hard to manage as the UI grows. As you add more events and state to an Activity built like this several problems can arise: - Testing – since the state of the UI is interwoven with the Viewsit can be difficult to test this code. - Partial state updates – when the screen has many more events, it is easy to forget to update part of the state in response to an event. As a result the user may see an inconsistent or an incorrect UI. - Partial UI updates – since we're manually updating the UI after each state change, it's very easy to forget this sometimes. As a result the user may see stale data in their UI that randomly updates. - Code complexity – it's difficult to extract some of the logic when coding in this pattern. As a result, code has a tendency to become difficult to read and understand. Using Unidirectional Data Flow To help fix these problems with unstructured state, we introduced Android Architecture Components which contain ViewModel and LiveData. A ViewModel lets you extract state from your UI and define events that the UI can call to update that state. Let's look at the same Activity written using a ViewModel. HelloCodelabActivity.kt class HelloCodelabViewModel: ViewModel() { // LiveData holds state which is observed by the UI // (state flows down from ViewModel) private val _name = MutableLiveData("") val name: LiveData<String> = _name // onNameChanged is an event we're defining that the UI can invoke // (events flow up from UI) fun onNameChanged(newName: String) { _name.value = newName } } class HelloCodeLabActivityWithViewModel : AppCompatActivity() { val helloViewModel by viewModels<HelloCodelabViewModel>() override fun onCreate(savedInstanceState: Bundle?) { /* ... */ binding.textInput.doAfterTextChanged { helloViewModel.onNameChanged(it.toString()) } helloViewModel.name.observe(this) { name -> binding.helloText.text = "Hello, $name" } } } In this example, we moved the state from the Activity to a ViewModel. In a ViewModel, state is represented by LiveData. A LiveData is an observable state holder, which means that it provides a way for anyone to observe changes to the state. Then in the UI we use the observe method to update the UI whenever the state changes. The ViewModel also exposes one event: onNameChanged. This event is called by the UI in response to user events, such as what happens here whenever the EditText's text changes. Going back to the UI update loop we talked about earlier we can see how this ViewModel fits together with events and state. - Event – onNameChangedis called by the UI when the text input changes - Update State – onNameChangeddoes processing, then sets the state of _name - Display State – name's observer(s) are called, which notifies the UI of state changes By structuring our code this way, we can think of events flowing "up" to the ViewModel. Then, in response to events the ViewModel will do some processing and possibly update state. When the state is updated it flows "down" to the Activity. This pattern is called unidirectional data flow. Unidirectional data flow is a design where state flows down and events flow up. By structuring our code this way we gain a few advantages: - Testability – by decoupling state from the UI that displays it, it's easier to test both the ViewModel and the Activity - State encapsulation – because state can only be updated in one place (the ViewModel), it's less likely that you'll introduce a partial state update bug as your UI grows - UI consistency – all state updates are immediately reflected in the UI by the use of observable state holders So, while this approach does add a bit more code – it tends to be easier and more reliable to handle complex state and events using unidirectional data flow. In the next section we'll see how to use unidirectional data flow with Compose. In the last section we explored unidirectional data flow in the Android View system using ViewModel and LiveData. Now we're going to move into Compose and explore how to use unidirectional data flow in Compose using ViewModels. At the end of this section you'll have built this screen: Explore TodoScreen composables The code you downloaded contains several composables that you'll use and edit throughout this codelab. Open up TodoScreen.kt and take a look at the existing TodoScreen composable: TodoScreen.kt @Composable fun TodoScreen( items: List<TodoItem>, onAddItem: (TodoItem) -> Unit, onRemoveItem: (TodoItem) -> Unit ) { /* ... */ } To see what this composable displays, use the preview pane in Android Studio by clicking on the split icon in the top right corner . This composable displays an editable TODO list, but it doesn't have any state of its own. Remember, state is any value that can change – but none of the arguments to TodoScreen can be modified. items– an immutable list of items to display on the screen onAddItem– an event for when the user requests adding an item onRemoveItem– an event for when the user requests removing an item In fact, this composable is stateless. It only displays the items list that was passed in and has no way to directly edit the list. Instead, it is passed two events onRemoveItem and onAddItem that can request changes. This raises the question: if it's stateless how can it display an editable list? It does that by using a technique called state hoisting. State hoisting is the pattern of moving state up to make a component stateless. Stateless components are easier to test, tend to have fewer bugs, and open up more opportunities for reuse. It turns out the combination of these parameters works to allow the caller to hoist state out of this composable. To see how this works go let's explore the UI update loop of this composable. - Event – when the user requests an item be added or removed TodoScreencalls onAddItemor onRemoveItem - Update state – the caller of TodoScreencan respond to these events by updating state - Display state – when the state is updated, TodoScreenwill be called again with the new itemsand it can display them on screen The caller is responsible for figuring out where and how to hold this state. It can store items however makes sense, for example in memory or read them from a Room database. TodoScreen is completely decoupled from how the state is managed. Define TodoActivityScreen composable Open up TodoViewModel.kt and find an existing ViewModel that defines one state variable and two events. TodoViewModel.kt class TodoViewModel : ViewModel() { // state: todoItems private var _todoItems = MutableLiveData(listOf<TodoItem>()) val todoItems: LiveData<List<TodoItem>> = _todoItems // event: addItem fun addItem(item: TodoItem) { /* ... */ } // event: removeItem fun removeItem(item: TodoItem) { /* ... */ } } We want to use this ViewModel to hoist the state from TodoScreen. When we're done, we'll have created a unidirectional data flow design that looks like this: To get started integrating TodoScreen into TodoActivity, open up TodoActivity.kt and define a new @Composable function TodoActivityScreen(todoViewModel: TodoViewModel) and call it from setContent in onCreate. In the rest of this section we will build the TodoActivityScreen one step at a time. You can start by calling TodoScreen with fake state and events like this: TodoActivity.kt class TodoActivity : AppCompatActivity() { val todoViewModel by viewModels<TodoViewModel>() override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContent { StateCodelabTheme { Surface { TodoActivityScreen(todoViewModel) } } } } } @Composable private fun TodoActivityScreen(todoViewModel: TodoViewModel) { val items = listOf<TodoItem>() // in the next steps we'll complete this TodoScreen( items = items, onAddItem = { }, // in the next steps we'll complete this onRemoveItem = { } // in the next steps we'll complete this ) } This composable will be a bridge between the state stored in our ViewModel and the TodoScreen composable that's already defined in the project. You can change TodoScreen to take the ViewModel directly, but then TodoScreen would be a bit less reusable. By preferring simpler parameters such as List<TodoItem>, TodoScreen is not coupled to the specific place that state is hoisted. If you run the app right now, you'll see that it displays a button but clicking it doesn't do anything. This is because we haven't yet connected our ViewModel to TodoScreen. Flow the events up Now that we have all the components we need – a ViewModel, a bridge composable TodoActivityScreen, and TodoScreen, let's wire everything together to display a dynamic list using unidirectional data flow. In TodoActivityScreen pass addItem and removeItem from the ViewModel. TodoActivity.kt @Composable private fun TodoActivityScreen(todoViewModel: TodoViewModel) { val items = listOf<TodoItem>() TodoScreen( items = items, onAddItem = { todoViewModel.addItem(it) }, onRemoveItem = { todoViewModel.removeItem(it) } ) } When TodoScreen calls onAddItem or onRemoveItem, we can pass the call to the correct event on our ViewModel. Pass the state down We've wired up the events of our unidirectional data flow – now we need to pass the state down. Edit TodoActivityScreen to observe the todoItems LiveData using observeAsState: TodoActivity.kt @Composable private fun TodoActivityScreen(todoViewModel: TodoViewModel) { val items: List<TodoItem> by todoViewModel.todoItems.observeAsState(listOf()) TodoScreen( items = items, onAddItem = { todoViewModel.addItem(it) }, onRemoveItem = { todoViewModel.removeItem(it) } ) } This line will observe the LiveData and let us use the current value directly as a List<TodoItem>. There's a lot packed into this one line – so let's take it apart: val items: List<TodoItem>declares a variable itemsof type List<TodoItem> todoViewModel.todoItemsis a LiveData<List<TodoItem>from the ViewModel .observeAsStateobserves a LiveData<T>and converts it into a State<T>object so Compose can react to value changes listOf()is an initial value to avoid possible nullresults before the LiveDatais initialized, if it wasn't passed itemswould be List<TodoItem>?which is nullable. byis the property delegate syntax in Kotlin, it lets us automatically unwrap the State<List<TodoItem>>from observeAsStateinto a regular List<TodoItem> Run the app again Run the app again and you'll see a dynamically updating list! Clicking on the button on the bottom will add new items while clicking on an item removes it. In this section we explored how to build a unidirectional data flow design in Compose using ViewModels. We also saw how to use a stateless composable to display a stateful UI by using a technique called state hoisting. And, we continued to explore how to think about dynamic UIs in terms of state and events. In the next section we'll explore adding memory to composable functions. Now that we've explored how to use compose with ViewModels to build a unidirectional data flow, let's explore how compose can interact with state internally. In the last section, you saw how compose updates the screen by calling composables again. A process called recomposition. We were able to display a dynamic list by calling TodoScreen again. In this and the next section and the next we'll explore how to make stateful composables. In this section we'll explore how to add memory to a composable function – which is a building block we'll need to add state to Compose in the next section. Disheveled Design Mock from designer For this section, a new designer on your team has given you a mock following the latest design trend – disheveled design. The core principle of disheveled design is to take a good design and add seemingly random changes to it to make it "interesting." In this design, each icon is tinted to a random alpha between 0.3 and 0.7. Adding random to a composable To get started, open up TodoScreen.kt and find the TodoRow composable. This composable describes a single row in the todo list. Define a new val iconAlpha with a value of randomTint(). This is a float between 0.3 and 0.7 like our designer asked for. Then, set the tint of the icon. TodoScreen.kt @Composable fun TodoRow(todo: TodoItem, onItemClicked: (TodoItem) -> Unit, modifier: Modifier = Modifier) { Row( modifier = modifier .clickable { onItemClicked(todo) } .padding(horizontal = 16.dp, vertical = 8.dp), horizontalArrangement = Arrangement.SpaceBetween ) { Text(todo.task) val iconAlpha = randomTint() Icon( asset = todo.icon.vectorAsset, tint = AmbientContentColor.current.copy(alpha = iconAlpha) ) } } If you check the preview again you'll see that the icon now has a random tint color. Exploring recomposition Run the app again to try out the new disheveled design, you'll immediately notice that the tints seem to change all the time. Your designer tells you that though we were going for random – this is just a bit too much. App with icons changing tint when list changes What's going on here? It turns out that the recomposition process is calling randomTint for each row on the screen again every time that the list changes. Recomposition is the process of calling composables again with new inputs to update the compose tree. In this case when TodoScreen is called again with a new list, LazyColumnFor will recompose all of the children on the screen. This will then call TodoRow again, generating a new random tint. Compose generates a tree, but it is a bit different than the UI tree you may be familiar with from the Android view system. Instead of a tree of UI widgets, compose generates a tree of composables. We can visualize TodoScreen like this: TodoScreen tree When Compose runs composition the first time it builds a tree of every composable that was called. Then, during recomposition updates the tree with the new composables that get called. The reason the icons update every time the TodoRow recompose is because TodoRow has a hidden side-effect. A side-effect is any changes that's visible outside of the execution of a composable function. The call to Random.nextFloat() updates the internal random variable used in a pseudo-random number generator. This is how Random returns a different value every time you ask for a random number. Introducing memory to composable functions We don't want the tint to change every time that TodoRow recomposes. To do that, we need a place to remember the tint that we used in the last composition. Compose lets us store values in the composition tree, so we can update TodoRow to store the iconAlpha in the composition tree. Edit TodoRow and surround the call to randomTint with remember like this: TodoScreen.kt val iconAlpha: Float = remember(todo.id) { randomTint() } Icon( asset = todo.icon.vectorAsset, tint = AmbientContentColor.current.copy(alpha = iconAlpha) ) Looking at the new compose tree for TodoRow, you can see that iconAlpha has been added to the compose tree: TodoRow tree using remember If you run the app again now, you'll see that the tint doesn't update every time the list changes. Instead, when recomposition happens, the previous value stored by remember is returned. If you look closely at the call to remember, you'll see we're passing todo.id as the key argument. remember(todo.id) { randomTint() } A remember call has two parts: - key arguments – the "key" that this remember uses, this is the part that is passed in parenthesis. Here we're passing todo.idas the key. - calculation – a lambda that computes a new value to be remembered, passed in a trailing lambda. Here we're computing a random value with randomTint(). The first time this composes, remember always calls randomTint and remembers the result for the next recomposition. It also keeps track of the todo.id that was passed as well. Then, during recomposition it will skip calling randomTint and return the remembered value unless a new todo.id is passed to TodoRow. Recomposition of a composable must be idempotent. By surrounding the call to randomTint with remember, we skip the call to random on recomposition unless the todo item changes. As a result, TodoRow has no side-effects and always produces the same result every time it recomposes with the same input and is idempotent. Making remembered values controllable If you run the app now, you'll see that it's displaying a random tint on each icon. Your designer is pleased that this is following the principles of disheveled design and approves it for shipping. But before you do, there's one minor code change to do before checking this in. Right now there's no way for the caller of TodoRow to specify the tint. There's a lot of reasons they might want to – for example the VP of product noticing this screen and requiring a hotfix to remove the disheveling right before you ship the app. To allow the caller to control this value, simply move the remember call to a default argument of a new iconAlpha parameter. @Composable fun TodoRow( todo: TodoItem, onItemClicked: (TodoItem) -> Unit, modifier: Modifier = Modifier, iconAlpha: Float = remember(todo.id) { randomTint() } ) { Row( modifier = modifier .clickable { onItemClicked(todo) } .padding(horizontal = 16.dp) .padding(vertical = 8.dp), horizontalArrangement = Arrangement.SpaceBetween ) { Text(todo.task) Icon( asset = todo.icon.vectorAsset, tint = AmbientContentColor.current.copy(alpha = iconAlpha) ) } } Now the caller gets the same behavior by default – TodoRow calculates a randomTint. But they can specify any alpha they want. By allowing the caller to control the alphaTint this composable is more reusable. On another screen, a designer may want to display all the icons with 0.7 alpha. There's also a really subtle bug with our remember usage. Try adding enough todo rows to scroll a few off screen by clicking "Add random todo" repeatedly then scrolling. As you scroll, you'll notice that the icons change alpha every time they scroll back onto the screen. In the next sections we'll explore state and state hoisting – which will give you the tools you need to fix bugs like these. In the last section we learned how composable functions have memory, now we're going to explore using that memory to add state to a composable. Todo input (state: expanded) Todo input (state: collapsed) Our designer has moved on from disheveled design and is now into post-Material. The new design for todo input takes up the same space as a collapsible header and has two main states: expanded and collapsed. The expanded version will show whenever the text is not empty. To build this, first we'll build the text and button, then we'll look at adding the auto-hiding icons. Editing text in a UI is stateful. The user updates the currently displayed text every time they type a character, or even when they change the selection. In the Android view system, this state is internal to EditText and exposed via onTextChanged listeners, however since compose is designed for unidirectional data flow this wouldn't fit. TextField in compose is a stateless composable. Just like the TodoScreen that displays a changing list of todos, a TextField just displays whatever you tell it to and issues events when the user types. Create a stateful TextField composable To start exploring state in compose we're going to make a stateful component for displaying an editable TextField. To get started, open TodoScreen.kt and add the following function TodoScreen.kt @Composable fun TodoInputTextField(modifier: Modifier) { val (text, setText) = remember { mutableStateOf("") } TodoInputText(text, setText, modifier) } This function uses remember to add memory to itself, then in the memory it stores a mutableStateOf to create a MutableState<String> which is a builtin type of Compose that provides an observable state holder. Since we're going to immediately pass a value and setter event to TodoInputText, we destructure the MutableState object into a getter and a setter. And that's it. We've created an internal state in TodoInputTextField. To see it in action, define another composable TodoItemInput that shows the TodoInputTextField and a Button. TodoScreen.kt @Composable fun TodoItemInput(onItemComplete: (TodoItem) -> Unit) { // onItemComplete is an event will fire when an item is completed by the user Column { Row(Modifier .padding(horizontal = 16.dp) .padding(top = 16.dp) ) { TodoInputTextField(Modifier .weight(1f) .padding(end = 8.dp) ) TodoEditButton( onClick = { /* todo */ }, text = "Add", modifier = Modifier.align(Alignment.CenterVertically) ) } } } TodoItemInput has only one parameter, an event onItemComplete. When the user completes a TodoItem the event will be triggered. This pattern of passing a lambda is the main way that you define custom events in compose. Also, update the TodoScreen composable to call TodoItemInput in the background TodoItemInputBackground that's already defined in the project: TodoScreen.kt @Composable fun TodoScreen( items: List<TodoItem>, onAddItem: (TodoItem) -> Unit, onRemoveItem: (TodoItem) -> Unit ) { Column { // add TodoItemInputBackground and TodoItem at the top of TodoScreen TodoItemInputBackground(elevate = true, modifier = Modifier.fillMaxWidth()) { TodoItemInput(onItemComplete = onAddItem) } ... Try out TodoItemInput Since we just defined a major UI composable for the file, it's a good idea to add a @Preview for it. This will allow us to explore that composable in isolation, as well as allow readers of this file to preview it quickly. In TodoScreen.kt add a new preview function to the bottom: TodoScreen.kt @Preview @Composable fun PreviewTodoItemInput() = TodoItemInput(onItemComplete = { }) Now you can run that composable either in interactive preview or on an emulator to debug this composable in isolation. When you do, you'll see that it correctly displays an editable text field that allows the user to edit text. Whenever they type a character, the state is updated which triggers recomposition updating the TextField displayed to the user. Make the button click add an item Now we want to make the "Add" button actually add a TodoItem. To do that, we'll need access to the text from the TodoInputTextField. If you look at part of the composition tree of TodoItemInput you can see that we're storing the text state inside of TodoInputTextField. TodoInputItem composition tree (builtin composables hidden) This structure won't let us wire the onClick up because onClick needs to access the current value of text. What we want to do is expose the text state to TodoItemInput – and use unidirectional data flow at the same time. Unidirectional data flow applies both to high level architecture and the design of a single composable when using Jetpack Compose. Here, we want to make it so that events always flow up and state always flows down. This means we want state to flow down from TodoItemInput, and events to flow up. Unidirectional data flow diagram for TodoItemInput In order to do that, we'll need to move the state from the child composable, TodoInputTextField, to the parent TodoItemInput. TodoInputItem composition tree with state hoisting (builtin composables hidden) This pattern is called state hoisting. We will "hoist" (or lift) state from a composable to make it stateless. State hoisting the main pattern to build unidirectional data flow designs in Compose. To start hoisting state, you can refactor any internal state T of a composable to a (value: T, onValueChange: (T) -> Unit) parameter pair. Edit TodoInputTextField to hoist the state by adding (value, onValueChange) parameters: TodoScreen.kt // TodoInputTextField with hoisted state @Composable fun TodoInputTextField(text: String, onTextChange: (String) -> Unit, modifier: Modifier) { TodoInputText(text, onTextChange, modifier) } This code adds a value and onValueChange parameter to TodoInputTextField. The value parameter is text, and the onValueChange parameter is onTextChange. Then, because the state is now hoisted, we remove the remembered state from TodoInputTextField. State that is hoisted this way has some important properties: - Single source of truth – by moving state instead of duplicating it, we're ensuring there's only one source of truth for the text. This helps avoid bugs. - Encapsulated – only TodoInputItemwill be able to modify the state, while other components can send events to TodoInputItem. By hoisting this way, only one composable is stateful even though multiple composables use the state. - Shareable – hoisted state can be shared as an immutable value with multiple composables. Here we're going to use the state in both TodoInputTextFieldand TodoEditButton. - Interceptable – TodoItemInputcan decide to ignore or modify events before changing its state. For example, TodoItemInputcould format :emoji-codes: into emoji as the user types. - Decoupled – the state for TodoInputTextFieldmay be stored anywhere. For example, we could choose to back this state by a Room database that is updated every time a character is typed without modifying TodoInputTextField. Now, add the state in TodoItemInput and pass it to TodoInputTextField: TodoScreen.kt @Composable fun TodoItemInput(onItemComplete: (TodoItem) -> Unit) { val (text, setText) = remember { mutableStateOf("") } Column { Row(Modifier .padding(horizontal = 16.dp) .padding(top = 16.dp) ) { TodoInputTextField( text = text, onTextChange = setText, modifier = Modifier .weight(1f) .padding(end = 8.dp) ) TodoEditButton( onClick = { /* todo */ }, text = "Add", modifier = Modifier.align(Alignment.CenterVertically) ) } } } Now we've hoisted the state, and we can use the current value of text to drive the behavior of the TodoEditButton. Finish the callback and enable the button only when the text is not blank per the design: TodoScreen.kt // edit TodoItemInput TodoEditButton( onClick = { onItemComplete(TodoItem(text)) // send onItemComplete event up setText("") // clear the internal text }, text = "Add", modifier = Modifier.align(Alignment.CenterVertically), enabled = text.isNotBlank() // enable if text is not blank ) We're using the same state variable, text, in two different composables. By hoisting the state we're able to share the state like this. And, we've managed to do it while making only TodoItemInput a stateful composable. Run it again Run the app again and you'll see that you can now add todo items! Congratulations – you've just learned how to add state to a composable, and how to hoist it! Code cleanup Before you move on, inline the TodoInputTextField. We just added it in this section to explore state hoisting. If you look into the code of TodoInputText that was provided with the codelab, you'll see that it already hoists state following the patterns that we discussed in this section. When you're done, your TodoItemInput should look like this: TodoScreen.kt @Composable fun TodoItemInput(onItemComplete: (TodoItem) -> Unit) { val (text, setText) = remember { mutableStateOf("") } Column { Row(Modifier .padding(horizontal = 16.dp) .padding(top = 16.dp) ) { TodoInputText( text = text, onTextChange = setText, modifier = Modifier .weight(1f) .padding(end = 8.dp) ) TodoEditButton( onClick = { onItemComplete(TodoItem(text)) setText("") }, text = "Add", modifier = Modifier.align(Alignment.CenterVertically), enabled = text.isNotBlank() ) } } } In the next section we'll continue to build this design and add the icons. You'll use the tools we learned in this section to hoist the state and build interactive UIs with unidirectional data flow. In the last section you learned how to add state to a composable, and how to use state hoisting to make a composable that uses state stateless. Now we're going to explore building a dynamic UI based on state. Going back to the mock from the designer, we should show the icon row whenever the text is not blank. Todo input (state: expanded - text not blank) Todo input (state: collapsed - text is blank) Derive iconsVisible from state Open up TodoScreen.kt and create a new state variable to hold the currently selected icon and a new val iconsVisible that's true whenever text is not blank. TodoScreen.kt @Composable fun TodoItemInput(onItemComplete: (TodoItem) -> Unit) { val (text, setText) = remember { mutableStateOf("") } val (icon, setIcon) = remember { mutableStateOf(TodoIcon.Default)} val iconsVisible = text.isNotBlank() // ... We added a second piece of state, icon, that holds the currently selected icon. The value iconsVisible does not add a new state to TodoItemInput. There is no way for TodoItemInput to directly change it. Instead, it is based entirely upon the value of text. Whatever the value of text is in this recomposition, iconsVisible will be set accordingly and we can use it to show the correct UI. We could add another bit of state to TodoItemInput to control when the icons are visible, but if you look closely at the spec the visibility is based entirely upon the text that has been input. If we made two states, it would be easy for them to get out of sync. Instead, we prefer to have a single source of truth. In this composable, we only need text to be state, and iconsVisible can be based on text. Continue editing TodoItemInput to show the AnimatedIconRow depending on the value of iconsVisible. If iconsVisible is true, display an AnimatedIconRow, if it's false display a Spacer with 16.dp. TodoScreen.kt @Composable fun TodoItemInput(onItemComplete: (TodoItem) -> Unit) { val (text, setText) = remember { mutableStateOf("") } val (icon, setIcon) = remember { mutableStateOf(TodoIcon.Default)} val iconsVisible = text.isNotBlank() Column { Row( /* ... */ ) { /* ... */ } if (iconsVisible) { AnimatedIconRow(icon, setIcon, Modifier.padding(top = 8.dp)) } else { Spacer(modifier = Modifier.preferredHeight(16.dp)) } } } If you run the app again now, you'll see that the icons animate in when you enter text. Here we're dynamically changing the composition tree based on the value of iconsVisible. Here is a diagram of the composition tree for both states. This sort of conditional-show logic is equivalent to visibility gone in the Android view system. TodoItemInput composition tree when iconsVisible changes If you run the app again, you'll see that the icon row displays correctly, but if you click "Add" the icon doesn't make it into the added todo row. This is because we haven't updated our event to pass the new icon state, let's do that next. Update the event to use icon Edit TodoEditButton in TodoItemInput to use the new icon state in the onClick listener. TodoScreen.kt TodoEditButton( onClick = { onItemComplete(TodoItem(text, icon)) setIcon(TodoIcon.Default) setText("") }, text = "Add", modifier = Modifier.align(Alignment.CenterVertically), enabled = text.isNotBlank() ) You can use the new icon state directly in the onClick listener. We also reset it to default when the user is done entering a TodoItem. If you run the app now, you'll see an interactive todo input with animated buttons. Great job! Finish the design with an imeAction When you show the app to your designer, they tell you that it should submit the todo item from the ime action on the keyboard. That's the blue button in the bottom right: Android Keyboard with ImeAction.Done TodoInputText lets you respond to imeAction with its onImeAction event. We really want this the onImeAction to have the exact same behavior as the TodoEditButton. We could duplicate the code, but that would be hard to maintain over time as it'd be easy to only update one of the events. Lets extract the event into a variable, so we can use it for both the TodoInputText's onImeAction and TodoEditButton's onClick. Edit TodoItemInput again to declare a new lambda function submit that handles the user performing a submit action. Then pass the newly defined lambda function to both TodoInputText and TodoEditButton. TodoScreen.kt @Composable fun TodoItem("") } Column { Row(Modifier .padding(horizontal = 16.dp) .padding(top = 16.dp) ) { TodoInputText( text = text, onTextChange = setText, modifier = Modifier .weight(1f) .padding(end = 8.dp), onImeAction = submit // pass the submit callback to TodoInputText ) TodoEditButton( onClick = submit, // pass the submit callback to TodoEditButton text = "Add", modifier = Modifier.align(Alignment.CenterVertically), enabled = text.isNotBlank() ) } if (iconsVisible) { AnimatedIconRow(icon, setIcon, Modifier.padding(top = 8.dp)) } else { Spacer(modifier = Modifier.preferredHeight(16.dp)) } } } If you wanted to, you could further extract the logic from this function. However, this composable is looking pretty good so we'll stop here. This is one of the big advantages of Compose – since you're declaring your UI in Kotlin you're able to build any abstractions needed to make the code decoupled and reusable. Run the app again to try out the new icons Run the app again and you'll see that the icons show and hide automatically as the text changes state. You can also change the icon selection. When you hit the "Add" button you will see that a new TodoItem is generated based on the values input. Congratulations, you've learned about state in compose, state hoisting, and how to build dynamic UIs based on state. In the next few sections we'll explore how to think about making reusable components that interact with state. Your designer is on a new design trend today. Gone are disheveled UI and post-Material, this week's design follows the design trend "neo-modern interactive." You asked them what that means, and the answer was a bit confusing and involved emoji, but anyway, here are the mocks. Mock for editing mode The designer says it reuses the same UI as the input with the buttons changed to a save and done emoji. At the end of the last section, we left TodoItemInput as a stateful composable. This was fine when it was just for inputting todos – but now that it's an editor it will need to support state hoisting. In this section, you'll learn how to extract state from a stateful composable to make it stateless. This will allow us to reuse the same composable for both adding todos and editing them. Convert TodoInputItem to a stateless composable To get started, we need to hoist the state from TodoItemInput. But where will we put it? We could put it directly in TodoScreen – but it's already working really well with internal state and a finished event. We don't really want to change that API. What we can do instead is split the composable into two – one that has state and the other that is stateless. Open up TodoScreen.kt and break TodoItemInput into two composables, then rename the stateful composable to TodoItemEntryInput as it's only useful for entering new TodoItems. ) } @Composable private fun TodoItemInput( text: String, onTextChange: (String) -> Unit, icon: TodoIcon, onIconChange: (TodoIcon) -> Unit, submit: () -> Unit, iconsVisible: Boolean ) { Column { Row( Modifier .padding(horizontal = 16.dp) .padding(top = 16.dp) ) { TodoInputText( text, onTextChange, Modifier .weight(1f) .padding(end = 8.dp), submit ) TodoEditButton( onClick = submit, text = "Add", modifier = Modifier.align(Alignment.CenterVertically), enabled = text.isNotBlank() ) } if (iconsVisible) { AnimatedIconRow(icon, onIconChange, Modifier.padding(top = 8.dp)) } else { Spacer(modifier = Modifier.preferredHeight(16.dp)) } } } This transformation is a really important one to understand when using compose. We took a stateful composable, TodoInputItem, and split it into two composables. One with state ( TodoItemEntryInput) and one stateless ( TodoItemEntry). The stateless composable has all of our UI-related code, and the stateful composable doesn't have any UI-related code. By doing this, we make the UI code reusable in situations where we want to back the state differently. Run the application again Run the application again to confirm that todo input still works. Congratulations, you've successfully extracted a stateless composable from a stateful composable without changing it's API. We'll explore in the next section how this allows us to reuse the UI logic in different locations without coupling the UI with state. Reviewing the neo-modern interactive mock from our designer, we'll need to add some state representing the current edit item. Mock for editing mode Now we need to decide where to add the state for this editor. We could build another stateful composable " TodoRowOrInlineEditor" that handles displaying or editing an item, but we only want to show one editor at a time. Looking at the design closely, the top section also changes when in editing mode as well. So we're going to have to do some state hoisting to allow the state to be shared. State tree for TodoActivity Since both TodoItemEntryInput and TodoInlineEditor need to know about the current editor state to enable hiding the input at the top of the screen, we need to hoist the state to at least TodoScreen. The screen is the lowest level composable in the hierarchy that's a common parent of every composable that needs to know about editing. However, since the editor is derived from and will be mutating the list, it should really live next to the list. We want to hoist state to the level that it might be modified. The list lives in TodoViewModel, so that's exactly where we'll add it. Convert TodoViewModel to use mutableStateOf In this section you'll add state for the editor in TodoViewModel, and in the next section you'll use it to build an inline editor. At the same time, we'll explore using mutableStateOf in a ViewModel and see how it simplifies state code compared to LiveData when targeting Compose. Open up TodoViewModel.kt and replace the existing todoItems with a mutableStateOf: TodoViewModel.kt class TodoViewModel : ViewModel() { // remove the LiveData and replace it with a mutableStateOf //private var _todoItems = MutableLiveData(listOf<TodoItem>()) //val todoItems: LiveData<List<TodoItem>> = _todoItems // state: todoItems var todoItems: List<TodoItem> by mutableStateOf(listOf()) private set // event: addItem fun addItem(item: TodoItem) { todoItems = todoItems + listOf(item) } // event: removeItem fun removeItem(item: TodoItem) { // toMutableList makes a mutable copy of the list we can edit, then // assign the new list to todoItems (which is still an immutable list) todoItems = todoItems.toMutableList().also { it.remove(item) } } } MutableState is built with idiomatic Kotlin in mind, and supports property delegate syntax. We used it earlier in this codelab inside a composable – but you can also use it inside of stateful classes like a ViewModel. The declaration of todoItems is short and captures the same behavior as the LiveData version. // state: todoItems var todoItems: List<TodoItem> by mutableStateOf(listOf()) private set This makes a new MutableStateOf<List<TodoItems>> then uses the property delegate syntax to convert it into a regular List<TodoItem>. By specifying private set, we're restricting writes to this state object to a private setter only visible inside the ViewModel. The events were also shortened. Since MutableState is written for Kotlin it has better nullability guarantees than LiveData can provide. Both event listeners are able to remove extra null-safety code. And, because we're able to use the property delegate syntax, we don't have to call .value every time you read or write from todoItems. Update TodoActivityScreen to use the new ViewModel Open TodoActivity.kt and update TodoActivityScreen to use the new ViewModel. TodoActivity.kt @Composable private fun TodoActivityScreen(todoViewModel: TodoViewModel) { TodoScreen( items = todoViewModel.todoItems, onAddItem = todoViewModel::addItem, onRemoveItem = todoViewModel::removeItem ) } Run the app again and you'll see that it works with the new ViewModel. You've changed the state to use MutableState – new let's explore how to create an editor state. Define editor state Now it's time to add state for our editor. To avoid duplicating the todo text – we're going to edit the list directly. To do that, instead of keeping the current text that we're editing, we'll keep a list index for the current editor item. Open up TodoViewModel.kt and add an editor state. Define a new private var currentEditPosition that holds the current edit position. It'll hold the list index that we're currently editing. Then, to expose the currentEditItem to compose using a getter. Even though this is a regular Kotlin function, currentEditPosition is observable to Compose just like a State<TodoItem>. TodoViewModel.kt class TodoViewModel : ViewModel() { // private state private var currentEditPosition by mutableStateOf(-1) // state var todoItems by mutableStateOf(listOf<TodoItem>()) private set // state val currentEditItem: TodoItem? get() = todoItems.getOrNull(currentEditPosition) // .. Whenever a composable calls currentEditItem, it will observe changes to both todoItems and currentEditPosition. If either change, the composable will call the getter again to get the new value. Define editor events We've defined our editor state, now we'll need to define events that composables can call to control editing. Make three events: onEditItemSelected(item: TodoItem), onEditDone(), and onEditItemChange(item: TodoItem). The events onEditItemSelected and onEditDone just change the currentEditPosition. By changing currentEditPosition, compose will recompose any composable that reads currentEditItem. TodoViewModel.kt class TodoViewModel : ViewModel() { ... // event: onEditItemSelected fun onEditItemSelected(item: TodoItem) { currentEditPosition = todoItems.indexOf(item) } // event: onEditDone fun onEditDone() { currentEditPosition = -1 } // event: onEditItemChange } } } The event onEditItemChange updates the list at currentEditPosition. This will change both the value returned by currentEditItem and todoItems at the same time. Before it does that, there's some safety checks to make sure the caller isn't trying to write the wrong item. End editing when removing items Update the removeItem event to close the current editor when an item is removed. TodoViewModel.kt // event: removeItem fun removeItem(item: TodoItem) { todoItems = todoItems.toMutableList().also { it.remove(item) } onEditDone() // don't keep the editor open when removing items } Run the app again And that's it! You've updated your ViewModel to use MutableState and saw how it can simplify observable state code. In the next section we'll add a test for this ViewModel, then move into building the editing UI. Since there were a lot of edits in this section, here's a full listing of TodoViewModel after all changes are applied: TodoViewModel.kt import androidx.compose.runtime.getValue import androidx.compose.runtime.mutableStateOf import androidx.compose.runtime.setValue import androidx.lifecycle.LiveData import androidx.lifecycle.MutableLiveData import androidx.lifecycle.ViewModel class TodoViewModel : ViewModel() { private var currentEditPosition by mutableStateOf(-1) var todoItems by mutableStateOf(listOf<TodoItem>()) private set val currentEditItem: TodoItem? get() = todoItems.getOrNull(currentEditPosition) fun addItem(item: TodoItem) { todoItems = todoItems + listOf(item) } fun removeItem(item: TodoItem) { todoItems = todoItems.toMutableList().also { it.remove(item) } onEditDone() // don't keep the editor open when removing items } fun onEditItemSelected(item: TodoItem) { currentEditPosition = todoItems.indexOf(item) } fun onEditDone() { currentEditPosition = -1 } } } } It's a good idea to test your ViewModel to make sure your application logic is correct. In this section we'll write a test to show how to test a view model using State<T> for state. Add a test to TodoViewModelTest Open TodoViewModelTest.kt in the test/ directory and add a test for removing an item: TodoViewModelTest.kt import com.example.statecodelab.util.generateRandomTodoItem import com.google.common.truth.Truth.assertThat import org.junit.Test class TodoViewModelTest { @Test fun whenRemovingItem_updatesList() { // before val viewModel = TodoViewModel() val item1 = generateRandomTodoItem() val item2 = generateRandomTodoItem() viewModel.addItem(item1) viewModel.addItem(item2) // during viewModel.removeItem(item1) // after assertThat(viewModel.todoItems).isEqualTo(listOf(item2)) } } This test shows how to test State<T> that's directly modified by events. In the before section, it creates a new ViewModel then adds two items to todoItems. The method we're testing is removeItem, which removes the first item in the list. Finally, we use Truth assertions to assert that the list contains only the second item. We don't have to do any extra work to read todoItems in a test if the updates were caused directly by the test (as we're doing here by calling removeItem) – it's just a List<TodoItem>. The rest of the tests for this ViewModel follow the same basic pattern – so we'll skip them as exercises in this codelab. You can add more tests of the ViewModel to confirm it works, or open TodoViewModelTest in the finished module to see more tests. In the next section, we'll add the new editing mode to the UI! We're finally ready to implement our neo-modern interactive design! As a reminder, this is what we're trying to build: Mock for editing mode Pass the state and events to TodoScreen We just finished defining all of the state and events we'll need for this screen in TodoViewModel. Now we'll update TodoScreen to take the state and events it will need to display the screen. Open TodoScreen.kt and change the signature of TodoScreen to add the three new events: onStartEdit: (TodoItem) -> Unit, onEditItemChange: (TodoItem) -> Unit, and onEditDone: () -> Unit ) { // ... } These are just the new state and event we just defined on the ViewModel. Then in TodoActivity.kt, pass the new values in TodoScreenActivity TodoActivity.kt @Composable private fun TodoActivityScreen(todoViewModel: TodoViewModel) { TodoScreen( items = todoViewModel.todoItems, currentlyEditing = todoViewModel.currentEditItem, onAddItem = todoViewModel::addItem, onRemoveItem = todoViewModel::removeItem, onStartEdit = todoViewModel::onEditItemSelected, onEditItemChange = todoViewModel::onEditItemChange, onEditDone = todoViewModel::onEditDone ) } This just passes the state and events that our new TodoScreen requires. Define a inline editor composable Create a new composable in TodoScreen.kt that uses the stateless composable TodoItemInput to define an inline editor. ) This composable is stateless. It only displays the item passed, and uses the events to request that the state update. Because we extracted a stateless composable TodoItemInput before, we're able to use it in this stateless context easily. This example shows the reusability of stateless composables. Even though the header uses a stateful TodoItemEntryInput on the same screen we're able to hoist the state all the way to the ViewModel for the inline editor. Use the inline editor in LazyColumnFor In the LazyColumnFor in TodoScreen, display TodoItemInlineEditor if the current item is being edited, otherwise show the TodoRow. Also, start editing when clicking an item (instead of removing it like before). TodoScreen.kt // fun TodoScreen() // ... LazyColumnFor( items = items, modifier = Modifier.weight(1f), contentPadding = PaddingValues(top = 8.dp) ) { todo -> if (currentlyEditing?.id == todo.id) { TodoItemInlineEditor( item = currentlyEditing, onEditItemChange = onEditItemChange, onEditDone = onEditDone, onRemoveItem = { onRemoveItem(todo) } ) } else { TodoRow( todo, { onStartEdit(it) }, Modifier.fillParentMaxWidth() ) } } // ... The LazyColumnFor composable is the compose equivalent of a RecyclerView. It will only recompose the items on the list needed to display the current screen, and as the user scrolls it will dispose of composables that left the screen and make new ones for the elements scrolling on. Try out the new interactive editor! Run the app again, and when you click on a todo row it'll open the interactive editor! We're using the same stateless UI composable to draw both the stateful header and the interactive edit experience. And, we didn't introduce any duplicated state while doing so. Already, this is starting to come together, though that add button looks out of place and we need to change the header. Let's finish up the design in the next few steps. Swap the header when editing Next, we'll finish the header design and then explore how to swap out the button for emoji buttons that the designer wants for their neo-modern interactive interactive design. Go back to the TodoScreen composable and make the header respond to changes in editor state. If currentlyEditing is null, then we'll show TodoItemEntryInput and pass elevation = true to TodoItemInputBackground. If currentlyEditing is not null, pass elevation = false to TodoItemInputBackground and display text that says "Editing item" in the same background. ) { Column { val enableTopSection = currentlyEditing == null TodoItemInputBackground(elevate = enableTopSection) { if (enableTopSection) { TodoItemEntryInput(onAddItem) } else { Text( "Editing item", style = MaterialTheme.typography.h6, textAlign = TextAlign.Center, modifier = Modifier .align(Alignment.CenterVertically) .padding(16.dp) .fillMaxWidth() ) } } // .. Again, we're changing the compose tree on recomposition. When the top section is enabled, we show TodoItemEntryInput, otherwise we show a Text composable displaying "Editing item." TodoItemInputBackground that was in the starter code automatically animates resizing as well as elevation changes – so when you enter editing mode this code automatically animates between the states. Run the app again Run the app again and you'll see that it animates between the editing not-editing states. We're almost done building this design. In the next section, we'll explore how to structure the code for the emoji buttons. Stateless composables that display complex UI can end up with a lot of parameters. If it's not too many parameters and they directly configure the composable, this is OK. However, sometimes you need to pass parameters to configure the children of a composable. In our neo-modern interactive design, the designer wants us to keep the Add button on the top but swap it out for two emoji-buttons for the inline editor. We could add more parameters to TodoItemInput to handle this case, but it's not clear these are really the responsibility of TodoItemInput. What we need is a way for a composable to take in a pre-configured button section. This will allow the caller to configure the buttons however it needs to without sharing all of the state required to configure them with TodoItemInput. This will both cut down the number of parameters passed to stateless composable, as well as make them more reusable. The pattern to pass a pre-configured section is slots. Slots are parameters to a composable that allow the caller to describe a section of the screen. You'll find examples of slots throughout the built-in composable APIs. One of the most commonly used examples is Scaffold. Scaffold is the composable for describing an entire screen in Material design, such as the topBar, bottomBar, and body of the screen. Instead of providing hundreds of parameters to configure each section of the screen, Scaffold exposes slots that you can fill in with whatever composables you want. This both cuts down on the number of parameters to Scaffold, and makes it more reusable. If you want to build a custom topBar, Scaffold is happy to display it. @Composable fun Scaffold( // .. topBar: @Composable (() -> Unit)? = null, bottomBar: @Composable (() -> Unit)? = null, // .. bodyContent: @Composable (PaddingValues) -> Unit ) { Define a slot on TodoItemInput Open TodoScreen.kt and define a new @Composable () -> Unit parameter on the stateless TodoItemInput called buttonSlot. TodoScreen.kt @Composable fun TodoItemInput( text: String, onTextChange: (String) -> Unit, icon: TodoIcon, onIconChange: (TodoIcon) -> Unit, submit: () -> Unit, iconsVisible: Boolean, buttonSlot: @Composable() () -> Unit ) { // ... This is a generic slot that the caller can fill in with the desired buttons. We'll use it to specify different buttons for the header and inline editors. Display the content of buttonSlot Replace the call to TodoEditButton with the content of the slot. TodoScreen.kt @Composable fun TodoItemInput( text: String, onTextChange: (String) -> Unit, icon: TodoIcon, onIconChange: (TodoIcon) -> Unit, submit: () -> Unit, iconsVisible: Boolean, buttonSlot: @Composable() () -> Unit, ) { Column { Row( Modifier .padding(horizontal = 16.dp) .padding(top = 16.dp) ) { TodoInputText( text, onTextChange, Modifier .weight(1f) .padding(end = 8.dp), submit ) // New code: Replace the call to TodoEditButton with the content of the slot Spacer(modifier = Modifier.width(8.dp)) Box(Modifier.align(Alignment.CenterVertically)) { buttonSlot() } // End new code } if (iconsVisible) { AnimatedIconRow(icon, onIconChange, Modifier.padding(top = 8.dp)) } else { Spacer(modifier = Modifier.preferredHeight(16.dp)) } } } We could directly call buttonSlot(), but we need to keep the align to center whatever the caller passes us vertically. To do that, we place the slot in a Box which is a basic composable. Update stateful TodoItemEntryInput to use the slot Now we need to update the callers to use the buttonSlot. First let's update TodoItemEntryInput: ) { TodoEditButton(onClick = submit, text = "Add", enabled = text.isNotBlank()) } } Since buttonSlot is the last parameter to TodoItemInput, we can use trailing lambda syntax. Then, in the lambda just call TodoEditButton like we were before. Update TodoItemInlineEditor to use the slot To finish the refactor, change TodoItemInlineEditor to use the slot as well:, buttonSlot = { Row { val shrinkButtons = Modifier.widthIn(20.dp) TextButton(onClick = onEditDone, modifier = shrinkButtons) { Text( text = "\uD83D\uDCBE", // floppy disk textAlign = TextAlign.End, modifier = Modifier.width(30.dp) ) } TextButton(onClick = onRemoveItem, modifier = shrinkButtons) { Text( text = "❌", textAlign = TextAlign.End, modifier = Modifier.width(30.dp) ) } } } ) Here we're passing buttonSlot as a named parameter. Then, in buttonSlot, we make a Row containing the two buttons for the inline editor design. Run the app again Run the app again and play around with the inline editor! In this section we customized our stateless composable using a slot, which allowed the caller to control a section of the screen. By using slots, we avoided coupling TodoItemInput with all of the different designs that may be added in the future. When you find yourself adding parameters to stateless composables to customize the children, evaluate if slots would be a better design. Slots tend to make composables more reusable while keeping the number of parameters manageable. Congratulations, you've successfully completed this codelab and learned how to structure state using unidirectional data flow in a Jetpack Compose app! You learned how to think about state and events to extract stateless composables in compose, and saw how to reuse a complex composable in different situations on the same screen. You've also learned how to integrate a ViewModel with Compose using both LiveData and MutableState. What's next? Sample apps - JetNews demonstrates how to use unidirectional data flow to use stateful composables to manage state in a screen built using stateless composables
https://developer.android.com/codelabs/jetpack-compose-state?hl=ja
CC-MAIN-2020-45
refinedweb
9,489
55.74
Here's a C program to generate magic square with output. This program uses C concepts like GOTO statement, Modulus in C, Multidimensional Arrays, IF-Else Condition, For loop and Nested Loops. What is Magic Square? A magic square of order n is an arrangement of n2 numbers, usually distinct integers, in a square, such that the n numbers in all rows, all columns, and both diagonals sum to the same constant. # include <stdio.h> # include <conio.h> void main() { int n, i, j, c, a[9][9] ; clrscr() ; printf("Enter the size of the magic square : ") ; scanf("%d", &n) ; if (n % 2 == 0) { printf("\nMagic square is not possible") ; goto end ; } printf("\nThe magic square for %d x %d is :\n\n", n, n) ; j = (n + 1) / 2 ; i = 1 ; for(c = 1 ; c <= n * n ; c++) { a[i][j] =() ; } Output of above program Enter the size of the magic square : 3 The magic square for 3 x 3 is : 8 1 6 3 5 7 4 9 2 plz tell me logic to make magic square. This program is the logic to generate magic square!! What else do you wanna know? magic square with even size is also possible...... formula for magic constant i.e. sum of elements in rows, columns and diagonals is m=n(n*n+1)/2 where n is the size of matrix go to wikipedia for more info plz dont use goto's!! its a hated thing among c programmers. How to tracing the program pls say Sir what is the procedure to trace this program brother i do not claer from this,please kindly clear() ; } Hi There, Your writing shines! There is no room for gibberish here clearly you have explained about C Program to Generate Magic Square is a right choice to advance my career Keep writing! Write a LEX program (I know this is in C) that identifies words from the previous set of phrases, such that an input of the form "triangle BCD" returns: ---Triangle: a geometric entity (loll. But nice Article Mate! Great Information! Keep up the good work! Cheers, Franda
http://cprogramming.language-tutorial.com/2012/02/c-program-to-generate-magic-square.html
CC-MAIN-2021-10
refinedweb
353
68.6
Back to index #include <libintl.h> #include <setjmp.h> #include <stdbool.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <ldsodefs.h> Go to the source code of this file. Definition at line 66 of file dl-error.c. Definition at line 158 of file dl-error.c. {; } Definition at line 197 of file dl-error.c. { struct catch **const catchp = &CATCH_HOOK; struct catch *old_catch; receiver_fct old_receiver; old_catch = *catchp; old_receiver = receiver; /* Set the new values. */ *catchp = NULL; receiver = fct; (*operate) (args); *catchp = old_catch; receiver = old_receiver; } Definition at line 136 of file dl-error.c. {); } Definition at line 71 of file dl-error.c. {) : "")); } } Definition at line 50 of file dl-error.c. Definition at line 65 of file dl-error.c. Definition at line 60 of file dl-error.c.
https://sourcecodebrowser.com/glibc/2.9/elf_2dl-error_8c.html
CC-MAIN-2016-44
refinedweb
132
64.98
Red Hat Bugzilla – Bug 90135 emacs font-lock error on java file Last modified: 2007-04-18 12:53:27 EDT From Bugzilla Helper: User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1) Gecko/20030225 Description of problem: $ cat Test1.java public class Test1 { public static void main(String[] args) { System.getProperties().list(System.out); } } $ emacs -nw -q --no-site-file \ --eval '(global-font-lock-mode t)' \ --eval '(setq debug-on-error t)' Test1.java This results in the error: Debugger entered--Lisp error: (wrong-type-argument integer-or-marker-p nil) goto-char(nil) eval((goto-char (match-beginning 4))) font-lock-fontify-keywords-region(1 119 nil) font-lock-default-fontify-region(1 119 nil) font-lock-fontify-region(1 119) run-hook-with-args(font-lock-fontify-region 1 119) jit-lock-fontify-now(1 501) jit-lock-function(1) This seems to happen with any java file. (the above file is just a simple sample) The -nw option is not necessary, I just specify it to make sure that the problem is not due to some X font problem. This problem resulted after an upgrade from rh 8.0 to rh 9. Version-Release number of selected component (if applicable): emacs-21.2-33 How reproducible: Always *** Bug 90136 has been marked as a duplicate of this bug. *** Here are a couple more test cases. A file that is just a comment does _not_ cause an error. $ cat NoError.java // the simplest java file The simplest legal file which is basically just a class definition does _not_ cause the error. $ cat NoError.java public class NoError { } The simpest useful file: a class with an embedded function definition _does_ cause the error. $ cat AnError.java public class AnError { public static void main() { } } Any hints on how to debug elisp? The stack trace doesn't even have filenames or line numbers. First of all, you may like to try using xemacs for java, since it comes with the powerful jde package included. Reproduced with emacs-21.2, but it is fixed in 21.3 AFAICT. Could you please try with the newer emacs in rawhide and re-open if you still a problem, thanks. I upgraded the following packages from rawhide: emacs-21.3-3.i386.rpm emacs-el-21.3-3.i386.rpm emacs-leim-21.3-3.i386.rpm And I can still reproduce the problem exactly as above. Thanks for the JDE tip. I actually use the JDE from emacs. I have to manually download and install: JDE, eieio, elib, semantic, and speedbar. It would be great if redhat included these as rpm packages. Just one other note. The bug repros whether the JDE is loaded or not. About JDE for Emacs, please put in a separate RFE request here, thanks. I can't see to reproduce this with 21.3 (neither with your examples Test1.java nor AnError.java), so can you tell me exactly how to reproduce? I can repro it with the same command line shown above. $ rpm -q emacs emacs-21.3-3 $ rpm -V emacs $ emacs --version GNU Emacs 21.3.1 GNU Emacs comes with ABSOLUTELY NO WARRANTY. You may redistribute copies of Emacs under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING. $ emacs -nw -q --no-site-file --eval '(global-font-lock-mode t)' --eval '(setq debug-on-error t)' Test1.java The above line produces the error and stack trace as shown earlier. Note: I'm using rh9 + emacs-21.3-3 Are you using this same setup or are you using a complete rawhide installation? What revision of the package are you using? Is there a chance that I could be missing some other upgraded package that is fixing the problem? The -q and --no-site-file options should limit the behavior to just the emacs package, right? You're right, I reproduced it on a RHL9 install with emacs-21.3-3. However with the latest rawhide, I don't get any error. Strange. More data: downloaded $ ./configure $ make This freshly built version does _not_ reproduce the problem on rh9. Downloaded the emacs-21.3-3 source rpm (rawhide) Compiled on rh9 This freshly built version _does_ reproduce the problem on rh9. I suspect this bug is related to a bug I have found in 9.0 for emacs 21.2.1. In my case, fontifying c++ code, the type-name font is also applied to the function name, for built-in types. This did not occur with emacs 21.2.1 on 7.3, and does not occur with the xemacs shipped with 9.0. I observe that there are a large number of site lisp files that are being loaded with the version shipped with 9.0. Since the version of font-lock.el is identical between 9.0 and 7.3, some customization must be clobbering something. In addition, and maybe this should be a separate bug, smtp mail has bugs. These can be traced to the smtpmail.el that is distributed in site lisp. that the smtp server name is not being picked up from the correct smtpmail variable. This is due to use of smtp.el in the site lisp, and a different smtpmail.el in site lisp. However, I have not tracked down the precise location of the bug in the code. --peter I yesterday reinstalled rh9 from dvd, and thereby eliminated the font lock bug I was experiencing. In the "everything" install, there are several packages that result in additions to emacs, such as leim, flim, and apel, as well as others. I declined to install all of these for my reinstall. Note that the bug I was experiencing had persisted despite my running emacs with emacs -q -- no-site-file. I confess that I no longer remember the full emacs startup sequence, but I was able to verify that certain stuff gets loaded even if you specify -q --no-site-file. --peter Just an FYI. I've exausted my detective skills on this bug and have resorted to using my own, non-rpm build from the gnu sources. I assume that since doesn't repro in rawhide, it's not a priority and so no one at red hat is investigating it, right? Hmm, since I'm unable to reproduce the problem it is rather hard to look further into it. I recommend trying with a fresh install to see if you can still reproduce it. Closing for now. Feel free to reopen if you can still reproduce with Fedora Core.
https://bugzilla.redhat.com/show_bug.cgi?id=90135
CC-MAIN-2018-34
refinedweb
1,111
68.36
Collaboration Policy (same as PS2)For this problem set, you may either work alone and turn in a problem set with just your name on it, or work with one other student in the class of your choice. If you work with a partner, you and your partner should turn in one assignment with both of your names on it. Regardless of whether you work alone or with a partner, you are encouraged to In Problem Set 2, you used the StringTable data abstraction that provided a mapping between Strings and float values. For this problem set, you will implement the StringTable data abstraction.In Problem Set 2, you used the StringTable data abstraction that provided a mapping between Strings and float values. For this problem set, you will implement the StringTable data abstraction. - Learn to use abstraction functions and rep invariants to reason about data abstractions. - Learn to use ESC/Java annotations to specify preconditions, postconditions and invariants. - Gain experience implementing a data abstraction. We believe the examples in this problem set will give you enough information about ESC/Java to do what you need for this assignment. A manual for ESC/Java is available () (but we don't think you'll need it for this problem set). Reasoning About Data AbstractionsFirst, we will consider a implementing the Poly abstraction from Chapter 5 (specified in Figure 5.4). Suppose we implement Poly using a java.util.Vector. The Vector type allows us to represent an ordered collection of Objects. The objects we will store in our vector are records of <term, coefficient>. Here is the representation:import java.util.Vector; class TermRecord { // OVERVIEW: Record type int power; int coeff; TermRecord (int p_coeff, int p_power); } public class Poly { private Vector terms; // A Vector of TermRecord objects. ... }Suppose this is the implementation of degree:public int degree () { // EFFECTS: Returns the degree of this, i.e., the largest exponent // with a non-zero coefficient. Returns 0 if this is the zero Poly. return terms.lastElement ().power; } In an alternate implementation with the same rep, suppose this is the implementation of coeff:public int coeff (int d) { // EFFECTS: Returns the coefficient of the term of this whose // exponent is d. int res = 0; Enumeration els = terms.elements (); while (els.hasMoreElements ()) { TermRecord r = (TermRecord) els.nextElement (); if (r.power == d) { res += r.coeff; } } return res; } Annotating Rep InvariantsThe StringSet data abstraction is similar to IntSet from Figure 5.6. We represent a set of strings using a Vector. Its abstraction function is similar to that for IntSet (p. 101):AF(c) = { c.els[i] | 0 <= i <= c.els.size }And its rep invariant (p. 102) is:c.els != null && for all integers i . c.els[i] is a String && there are no duplicates in c.elsIf we run ESC/Java on StringSet.java without adding any annotations, eight warnings are produced. Note: when you run escjava in the DOS shell, you can use | more to prevent the messages from scrolling up the screen: escjava StringSet.java | more The first warning message is:StringSet: insert(java.lang.String) ... ------------------------------------------------------------------------ StringSet.java:25: Warning: Possible null dereference (Null) if (getIndex (s) < 0) els.add (s); ^In Java, a variable declared with an object type can either hold a value of the declared type, or the special value null (which is treated as a value of any object type). So, els is either a Vector or null. But, if els is null, it would be an error to invoke a method on it, hence ESC/Java produces a warning for the method call els.add. How does the programmer know this call is safe? The answer is the rep invariant — it contains the term c.els != null, so we know els is not null for the this object at the beginning of insert. Since nothing in insert assigns to els, we know it is still not null when we call els.add. To prevent the ESC/Java warning, we need to document the rep invariant using a formal annotation. Formal annotations are Java comments that are ignored by the Java compiler, but interpreted by ESC/Java. They are denoted by a @ character after the comment open. We express it as: //@invariant els != null. After we have added the annotation (inv1/StringSet.java), running ESC/Java produces four warnings. The first message warns that the precondition for add may not be satisfied:StringSet: insert(java.lang.String) ... ------------------------------------------------------------------------ StringSet.java:27: Warning: Precondition possibly not established (Pre) if (getIndex (s) < 0) els.add (s); ^ Associated declaration is "/net/af10/evans/escjava/escjava/lib/specs/java/util/Collection.spec", line 217, col 8: //@ requires !containsNull ==> o!=null ^A Vector may contain either all non-null elements or some possibly null elements. If the Vector containins only non-null elements, the precondition for add requires that the parameter is non-null. Note that our informal rep invariant did not preclude having null in the Vector. To make ESC/Java happy, we must explicity state whether or not the Vector can contain null. We choose to allow null in the Vector by adding //@invariant els.containsNull == true to indicate that the els Vector may contain null. The second message is similar to the first one, except it is warning about the type of the Vector elements, not whether or not they are null:StringSet.java:27: Warning: Precondition possibly not established (Pre) if (getIndex (s) < 0) els.add (s); ^ Associated declaration is "/net/af10/evans/escjava/escjava/lib/specs/java/util/Collection.spec", line 218, col 8: //@ requires \typeof(o) <: elementType || o==null ^Since the elements of a Vector may be any object type, we need to document the actual type of the Vector elements. In our informal rep invariant, we expressed this as for all integers i . c.els[i] is a String. We can express this with a formal annotation: //@invariant els.elementType == \type(String). So, now our invariant annotations are://@invariant els != null //@invariant els.containsNull == true //@invariant els.elementType == \type(String)Running ESC/Java produces four warnings. The first two reveal limitations of ESC/Java:StringSet: StringSet() ... ------------------------------------------------------------------------ StringSet.java:24: Warning: Possible violation of object invariant (Invariant) } ^ Associated declaration is "StringSet.java", line 18, col 7: //@invariant els.containsNull == true ^ Possibly relevant items from the counterexample context: (vAllocTime(brokenObj<4>) < after@21.24-21.24) ... (18 lines cut showing counterexample) brokenObj<4> != null (brokenObj* refers to the object for which the invariant is broken.) ------------------------------------------------------------------------ StringSet.java:24: Warning: Possible violation of object invariant (Invariant) } ^ Associated declaration is "StringSet.java", line 19, col 7: //@invariant els.elementType == \type(String) ^ Possibly relevant items from the counterexample context: objectToBeConstructed == brokenObj<5> RES-21.24:21.24 == brokenObj<5> (brokenObj* refers to the object for which the invariant is broken.)ESC/Java is not able to prove the invariant is true for the constructor, but by inspecting the code we can convince ourselves that it is. The Vector constructor returns a vector with no elements, so it does not contain null, and every element it contains (that is, none) is of type String. We can add set annotations to convince ESC/Java the invariant is true:public StringSet () { // EFFECTS: Initializes this to be empty: { } els = new Vector (); //@set els.elementType = \type(String) //@set els.containsNull = true } // ESC/Java is unable to prove the invariant for the empty constructor without the set's. The next warning is:StringSet: remove(java.lang.String) ... ------------------------------------------------------------------------ StringSet.java:38: Warning: Precondition possibly not established (Pre) els.removeElementAt (i); ^ Associated declaration is "/net/af10/evans/escjava/escjava/lib/specs/java/util/Vector.spec", line 569, col 8: //@ requires index < elementCount ; ^The precondition for Vector.removeElementAt requires that the value of the parameter is a valid index of an element in the vector: requires index < elementCount. The elementCount is a specification variable that indicates the number of elements in the vector. We know this is safe because getIndex always returns a value less than elementCount. To enable ESC/Java to use this, we need to document it as a postcontition of getIndex. This is done by adding an ensures clause (which has the same meaning as an informal EFFECTS clause):private int getIndex (String s) //@ensures \result < els.elementCountThe final warning is:StringSet: getIndex(java.lang.String) ... ------------------------------------------------------------------------ StringSet.java:50: Warning: Possible null dereference (Null) if (s.equals (els.elementAt (i))) { ^The method call dereferences s, the parameter to getIndex. We could either add a precondition that s is not null, or fix the code of getIndex to handle the case where s is null. Since we decided to allow null in the StringSet, we take the second approach and change the implementation of getIndex to work when s is null:private int getIndex (String s) //@ensures \result < els.elementCount { // EFFECTS: If x is in this returns index where x appears, else returns -1. for (int i = 0; i < els.size (); i++) { if (s == null) { if (els.elementAt (i) == null) { return i; } } else { if (s.equals (els.elementAt (i))) { return i; } } } return -1; }Now, running ESC/Java on StringSet.java produces no warnings. Note that we did not include the there are no duplicates in c.els part of our informal invariant in our formal annotations. Some terms in invariants are too complex to describe and check with ESC/Java. To check this at runtime, we should define and use a repOk method. Implementing StringTableFor the remaining questions, you will implement the StringTable data abstraction you used in Problem Set 2. Your implementation should satisfy this specification for StringTable. (Note: it is slightly different from the PS2 specification. We have removed the exception from addName and changed its effects clause accordingly).public class StringTable { // overview: StringTable is a set of <String, double> entries, // where the String values are unique keys. A typical StringTable // is {<s0: d0>, <s1: d1>, ... }. // //@ghost public int numEntries ; // The number of entries in the table public StringTable () // effects: Initializes this as an empty table: { }. //@ensures numEntries == 0; { } /* ** This method was used in PS2, but you do not need to implement it for PS3. public StringTable (java.io.InputStream instream) // requires: The stream instream is a names file containing lines of the form // <name>: <rate> // where the name is a string of non-space characters and the rate is // a floating point number. // modifies: instream // effects: Initializes this as a names table using the data from instream. { } */ public void addName (/*@non_null@*/ String key, double value) // requires: The parameter name is not null. (This is what the // ESC/Java /*@non_null@*/ annotation means.) // modifies: this // effects: If key matches the value of String in this, replaces the value associated // with that key with value. Otherwise, inserts <key, value> into this. // e.g., if this_pre = {<s0, d0>, <s1, d1>, <s2, d2>} // and s0, s1 and s2 are all different from key // then this_post = {<s0, d0>, <s1, d1>, <s2, d2>, <key: double>}. // if this_pre = {<s0, d0>, <s1, d1>, <s2, d2>} // and s1 is the same string as key // then this_post = {<s0, d0>, <s1, value>, <s2, d2>} // //@modifies numEntries //@ensures numEntries >= \old(numEntries); { } public double getValue (String key) // EFFECTS: Returns the value associated with key in this. If there is no entry matching // key, returns 0. // Note: it would be better to throw an exception (but we haven't covered that yet). { } public /*@non_null@*/ String getNthLowest (int index) // requires: The parameter index is non-negative and less than // the the number of entries in this. //@requires index >= 0; //@requires index < numEntries; // EFFECTS: Returns the key such that there are exactly index entries in the table for // with the value of the entry is lower than the value of the returned key. If two // keys have the same value, they will be ordered in an arbitrary way such that // getNthLowest (n) returns the first key and getNthLowest (n + 1) returns the second key. // // e.g., getNthLowest (0) returns the key associated with the lowest value in the table. // getNthLowest (size () - 1) returns the key associated with the highest value in the table. { } public int size () // EFFECTS: Returns the number of entries in this. //@ensures \result == numEntries; { } public String toString () // EFFECTS: Returns a string representation of this. { } public /*@non_null@*/ StringIterator keys () // EFFECTS: Returns a StringIterator that will iterate through all the keys in this in // order from lowest to highest. { } } The StringIterator datatype returned by the keys method is implemented by StringIterator.java. Its constructor expects an java.util.Enumeration object, which is what java.util.Vector.elements () returns. Changes and Clarifications: - 12 Sept 2002: Included specifications for degree and coeff (from Chapter 5). - 12 Sept 2002: Fixed "To make ESC/Java happy, we must explicity state whether or not the Vector can contain null. We choose to allow null in the Vector by adding //@invariant els.containsNull == true to indicate that the els Vector may contain null." (The problem set incorrectly used //@invariant els.containsNull == false. (Reported by Alex Lee) - 12 Sept 2002: For questions 1 and 2, note that without knowing the abstraction function, you can't conclue that any of the implementations satisfy their specification (no matter what rep invariant you choose). You should assume that there is a reasonable abstraction function for each question. For question 1, assume the abstraction function is:AF (c) = c_0 + c_1 x + c_2 x^2 + ... where c_i = terms.getElementAt (j).coeff if there is a j such that terms.getElementAt (j).power = i = 0 otherwiseFor question 2, assume the abstraction function is:AF (c) = c_0 + c_1 x + c_2 x^2 + ... where c_i = sum (terms.getElementAt (j).coeff) for all values j such that terms.getElementAt (j).power = i - 12 Sept 2002: Fixed link to java.util.Vector spec. - 15 Sept 2002: Changed question 9. - 16 Sept 2002: Fixed problem with < and > in StringTable specification.
http://www.cs.virginia.edu/~evans/cs201j-fall2002/problem-sets/ps3/
CC-MAIN-2017-43
refinedweb
2,289
58.08
Eavesdropping attacks are often easy to launch, but most people don't worry about them in their applications. Instead, they tend to worry about what malicious things can be done to the machine on which the application is running. Most people are far more worried about active attacks than they are about passive attacks. Nearly every active attack out there is the result of some kind of input from an attacker. Secure programming is about making sure that inputs from bad people do not do bad things. Indeed, most of the soon-to-be-released Secure Programming Cookbook for C and C++ addresses how to deal with malicious inputs. For example, cryptography and a strong authentication protocol can help prevent attackers from capturing someone's login credentials and sending those credentials as input to the program. If this entire cookbook focuses primarily on preventing malicious inputs, then why do we have a chapter of recipes specifically devoted to this topic? It's because this chapter is about one important class of defensive techniques: input validation. In Recipe 3.3 below on preventing buffer overflows, and in all of the recipes in the book's "Input Validation" chapter, we assume that people are connected to our software, and that some of them may send malicious data (even if we think there is a trusted client on the other end). One thing we really care about is this: "What does our application do with that data? In particular, does the program take data that should be untrusted and do something potentially security-critical with it? More importantly, can any untrusted data be used to manipulate the application or the underlying system in a way that has security implications?" Recipe 3.3: Preventing Buffer Overflows Problem C and C++ do not perform array-bounds checking, which turns out to be a security-critical issue, particularly in handling strings. The risks increase even more dramatically when user-controlled data is on the program stack (i.e., is a local variable). Solution There are many solutions to this problem, but none that are satisfying in every situation. You may want to rely on operational protections (such as StackGuard from Immunix), use a library for safe string handling, or even use a different programming language. Discussion Buffer overflows get a lot of attention in the technical world, partially because they constitute one of the largest classes of security problems in code, but also because they have been around for a long time, are easy to get rid of, and yet still are a huge problem. Buffer overflows are generally very easy for a C or C++ programmer to understand. An experienced programmer has invariably written off the end of an array, or indexed into the wrong memory because he improperly checked the value of the index variable. Because we assume that you are a C or C++ programmer, we won't insult your intelligence by explaining buffer overflows to you. If you do not already understand the concept, you can consult many other software security books, including Building Secure Software. In this recipe, we won't even focus so much on why buffer overflows are such a big deal. Other resources can help you understand that if you're insatiably curious. Instead, we'll focus on state of the art strategies for mitigating these problems. Most languages do not have this problem at all, because they ensure that writes to memory are always in bounds. Sometimes, this can be done at compile time, but generally it is done dynamically, right before data gets written. The C and C++ philosophy is different -- you are given the ability to eke out more speed, even if it means that you risk shooting yourself in the foot. String Handling in C and C++ Unfortunately, in C and C++, it is not only possible to overflow buffers -- it is easy, particularly when dealing with strings. The problem is that C strings are not high-level data types; they are arrays of characters. The major consequence of this nonabstraction is that the language does not manage the length of strings; you have to do it yourself. The only time C ever cares about the length of a string is in the standard library, and the length is not related to the allocated size at all -- instead, it is delimited by a 0-valued ( NULL) byte. Needless to say, this can be extremely error-prone. One of the simplest examples is the ANSI C standard library function, gets(): char *gets(char *str); This function reads data from the standard input device into the memory pointed to by str until there is a newline or until the end of file is reached. It then returns a pointer to the buffer. In addition, the function NULL-terminates the buffer. The problem with this function is that no matter how big the buffer is, an attacker can always stick more data into the buffer than it is designed to hold, simply by avoiding the newline. If the buffer in question is a local variable or otherwise lives on the program stack, then the attacker can often force the program to execute arbitrary code by overwriting important data on the stack. This is called a stack-smashing attack. Even when the buffer is heap allocated (that is, it is allocated with malloc() or new), a buffer overflow can be security-critical if an attacker can write over critical data that happens to be in nearby memory. There are plenty of other places where it is easy to overflow strings. Pretty much any time you perform an operation that writes to a "string," there is room for a problem. One famous example is strcpy(): char *strcpy(char *dst, const char *src); This function copies bytes from the address indicated by src into the buffer pointed to by dst, up to and including the first NULL byte in src. Then it returns dst. No effort is made to ensure that the dst buffer is big enough to hold the contents of the src buffer. Because the language does not track allocated sizes, there is no way for the function to do so. To help alleviate the problems with functions like strcpy() that have no way of determining whether the destination buffer is big enough to hold the result from their respective operations, there are also functions like strncpy(): char *strncpy(char *dst, const char *src, size_t len); The strncpy() function is certainly an improvement over strcpy(), but there are still problems with it. Most notably, if the source buffer contains more data than the limit imposed by the len argument, the destination buffer will not be NULL-terminated. This leads to the need for the programmer to ensure that the destination buffer is NULL-terminated. Unfortunately, the programmer often forgets to do so. There are two reasons for this failure: It's an additional step for what should be a simple operation. Many programmers do not realize that the destination buffer may not be NULL-terminated. The problems with strncpy() are further complicated by the fact that a similar function, strncat(), treats its length-limiting argument in a completely different manner. The difference in behavior serves only to confuse programmers, and more often than not, mistakes are made. Certainly, we recommend using strncpy() over using strcpy(); however, there better solutions. OpenBSD 2.4 introduced two new functions, strlcpy() and strlcat(), that are consistent in their behavior, and they provide an indication back to the caller of how much space in the destination buffer would be required to successfully complete their respective operations without truncating the results. For both functions, the length limit indicates the maximum size of the destination buffer, and the destination buffer is always NULL-terminated, even if the destination buffer must be truncated. Unfortunately, strlcpy() and strlcat() are not available on all platforms; at present, they seem to be available only on Darwin, FreeBSD, NetBSD, and OpenBSD. Fortunately, they are easy to implement yourself -- but you don't have to do so, because we provide implementations here: #include <sys/types.h> #include <string.h> size_t strlcpy(char *dst, const char *src, size_t size) { char *dstptr = dst; size_t tocopy = size; const char *srcptr = src; if (tocopy && --tocopy) { do { if (!(*dstptr++ = *srcptr++)) break; } while (--tocopy); } if (!tocopy) { if (size) *dstptr = 0; while (*srcptr++); } return (srcptr - src - 1); } size_t strlcat(char *dst, const char *src, size_t len) { char *dstptr = dst; size_t dstlen, tocopy; const char *srcptr = src; while (tocopy-- && *dstptr) dstptr++; dstlen = dstptr - dst; if (!(tocopy = size - dstlen)) return (dstlen + strlen(src)); while (*strptr) { if (tocopy != 1) { *dstptr++ = *srcptr; tocopy--; } srcptr++; } *dstptr = 0; return (dstlen + (srcptr - src)); } As part of its security push, Microsoft has developed a new set of string-handling functions for C and C++ that are defined in the header file strsafe.h. The new functions handle both ANSI and Unicode character sets, and each function is available in byte-count and character-count versions. For more information regarding using strsafe.h functions in your Windows programs, visit the MSDN reference for strsafe.h. All of the string-handling improvements we've discussed so far operate using traditional C-style NULL-terminated strings. While strlcat(), strlcpy(), and Microsoft's new string-handling functions are vast improvements over the traditional C string-handling functions, they all still require diligence on the part of the programmer to maintain information regarding the allocated size of destination buffers. An alternative to using traditional C-style strings is to use the SafeStr library, which is available from. The library is a safe string implementation that provides a new, high-level data type for strings, tracks accounting information for strings, and performs many other operations. For interoperability purposes, SafeStr strings can be passed to C string calls, as long as those calls use the string in a read-only manner. (We discuss SafeStr in some detail in Recipe 3.4 of the upcoming Secure Programming Cookbook for C and C++.) Finally, applications that transfer strings across a network should consider including a string's length along with the string itself, rather than requiring the recipient to rely on finding the NULL-terminating character to determine the length of the string. If the length of the string is known up front, the recipient can allocate a buffer of the proper size up front, and read the appropriate amount of data into it. The alternative is to read byte-by-byte, looking for the NULL-terminator, and possibly repeatedly resizing the buffer. Dan J. Bernstein has defined a convention called Netstrings for encoding the length of a string with the strings. This protocol simply would have you send the length of the string represented in ASCII, then a colon, then the string itself, then a trailing comma. For example, if you were to send the string "Hello, world!" over a network, you would send: 14:Hello, world!, Note that the Netstring representation does not include the NULL-terminator, as that is really part of the machine-specific representation of a string, and is not necessary on the network. Using C++ When using C++, you generally have a lot less to worry about when using the standard C++ string library, std::string. This library is designed in such a way that buffer overflows are less likely. Standard I/O using the stream operators ( >> and <<) is safe when using the standard C++ string type. However, buffer overflows when using strings in C++ are not out of the question. First, the programmer may choose to use old-fashioned C API calls, which work fine in C++ but are just as risky as they are in C. Second, while C++ usually throws an out_of_range exception when an operation would overflow a buffer, there are two cases where it doesn't. The first problem area occurs when using the subscript operator []. This operator doesn't perform bounds checking for you, so be careful with it. The second problem area is when using C-style strings with the C++ standard library. C-style strings are always a risk, because even C++ doesn't know how much memory is allocated to a string. Consider the following C++ program: #include <iostream.h> // WARNING: This code has a buffer overflow in it. int main() { char buf[12]; cin >> buf; cout << "You said... " << buf << endl; } If you compile the above program without optimization and then run it, typing in more than 11 printable ASCII characters (remember that C++ will add a NULL to the end of the string), the program will either crash or print out more characters than buf can store. Those extra characters get written past the end of buf. Also, when indexing a C-style string through C++, C++ always assumes that the indexing is valid, even if it isn't. Another problem occurs when converting C++-style strings to C-style strings. If you use string::c_str() to do the conversion, you will get a properly NULL-terminated, C-style string. However, if you use string::data(), which writes the string directly into an array (returning a pointer to the array), you will get a buffer that is not NULL-terminated. That is, the only difference between c_str() and data() is that c_str() adds a trailing NULL. One final point with regard to C++ is that there are plenty of applications not using the standard string library and are instead using third-party libraries. Such libraries are of varying quality when it comes to security. We recommend using the standard library, if at all possible. Otherwise, be careful in understanding the semantics of the library you do use, and the possibilities for buffer overflow. Stack-Protection Technologies In C and C++, memory for local variables is allocated on the stack. In addition, information pertaining to the control flow of a program is also maintained on the stack. If an array is allocated on the stack, and that array is overrun, an attacker can overwrite the control flow information that is also stored on the stack. As we mentioned above, this type of attack is often referred to as a stack-smashing attack. Recognizing the gravity of stack-smashing attacks, several technologies have been developed that attempt to protect programs against them. These technologies take various approaches. Some are implemented in the compiler (such as Microsoft's /GS compiler flag and IBM's ProPolice), while others are dynamic runtime solutions (such as Avaya Labs's LibSafe)., and if it is not what it is supposed to be, the program is terminated immediately. The idea behind using a canary is that an attacker attempting to mount a stack-smashing attack will have to overwrite the canary in order to overwrite the control flow information. By choosing a random value for the canary, the attacker cannot know what it is and thus be able to. On the other hand, although it is rare for Windows programs to be distributed in source form, the /GS compiler flag is a standard part of the Microsoft Visual C++ compiler, and the program's build scripts (whether they are make files, DevStudio project files, or something else entirely), can enforce the use of the flag. For Linux systems, Avaya Labs' works by replacing the implementation of several standard functions that are known to after local variables. If an attempt is made to write more than the estimated size of the buffer, the program is terminated. Unfortunately, there are several problems with the approach taken by LibSafe. One problem is that it cannot accurately compute the size of a buffer; the best that it can do is limit the size of the buffer to the difference between the start of the buffer and the frame pointer. The other problem is that LibSafe's protections will not work with programs that were compiled using the -fomit-frame-pointer flag to GCC, an optimization that causes the compiler to not put a frame pointer on the stack. Although relatively useless, this is a popular optimization for programmers to employ. In addition to providing protection against conventional stack-smashing attacks, the newest versions of LibSafe also provide some protection against format-string attacks. The format-string protection also requires access to the frame pointer, because it attempts to filter out arguments that are not pointers into the heap or the local variables on the stack. See Also: - The MSDN reference for strsafe.h - SafeStr - StackGuard from Immunix - ProPolice from IBM - LibSafe from Avaya Labs - Netstrings by Dan J. Bernstein - Recipes: 3.2 and 3.4 of O'Reilly's upcoming Secure Programming Cookbook for C and C++ John Viega is CTO of the SaaS Business Unit at McAfee and the author of many security books, including Building Secure Software (Addison-Wesley), Network Security with OpenSSL (O'Reilly), and the forthcoming Myths of Security (O'Reilly). Matt Messier is Director of Engineering at Secure Software, and coauthor of O'Reilly's "Network Security with OpenSSL." Return to the O'Reilly Network.
http://archive.oreilly.com/pub/a/network/2003/05/20/secureprogckbk.html?page=last&x-showcontent=text
CC-MAIN-2016-40
refinedweb
2,839
50.97
---- Reported by matthijs@stdin.nl 2010-09-07 01:45:41 +0000 ---- Hi folks, I've been having a build failure compiling 1.0.1 on an i386 Debian stable system. The (first) error was "invalid use of undefined type TransportAgent" or something similar in syncevo-dbus-server.cpp (I forgot to copy the full error, sorry). My analysis is that the file TransportAgent.h is not included in syncevo-dbus-server.cpp, while it does define a subclass of TransportAgent. syncevo-dbus-server.cpp does include SoupTransportAgent.h, which includes TransportAgent.h (but only when libsoup is enabled). Thus I suspect that this build failure only happens when building with libsoup disabled. Here's a trivial patch that fixed compilation for me: Index: syncevolution-1.0.1+ds1/src/syncevo-dbus-server.cpp =================================================================== --- syncevolution-1.0.1+ds1.orig/src/syncevo-dbus-server.cpp 2010-09-07 10:23:29.000000000 +0200 +++ syncevolution-1.0.1+ds1/src/syncevo-dbus-server.cpp 2010-09-07 10:24:06.000000000 +0200 @@ -27,6 +27,7 @@ #include <syncevo/LogRedirect.h> #include <syncevo/util.h> #include <syncevo/SyncContext.h> +#include <syncevo/TransportAgent.h> #include <syncevo/SoupTransportAgent.h> #include <syncevo/SyncSource.h> #include <syncevo/SyncML.h> ---- Additional Comments From patrick.ohly@intel.com 2010-09-07 02:11:24 +0000 ---- Your analysis sounds plausible. I'll apply the patch. But I'm not sure whether syncevo-dbus-server works well enough with libcurl as HTTP transport. When using libcurl, we block during HTTP POST. During that time the D-Bus daemon is unresponsive for D-Bus calls. With libsoup, we enter the main loop and continue to service D-Bus requests. Or do you build syncevo-dbus-server to be used as HTTP server? In that case the HTTP library doesn't matter. ---- Additional Comments From matthijs@stdin.nl 2010-09-07 04:17:21 +0000 ---- I'm only using this particular version to run as http server, so I haven't had any problems yet. I didn't disable libsoup explicitly, it's just not installed (and apparently the Debian package doesn't list it as a build-dep. The official builds have libsoup enabled anyway, so I guess that on Debian unstable some other build-dep pulls in libsoup). ---- Additional Comments From patrick.ohly@intel.com 2010-09-07 08:04:26 +0000 ---- (In reply to comment #1) > Your analysis sounds plausible. I'll apply the patch. Applied to master. I won't put this into a 1.0.2 because it occurs only in unusual configurations and because 1.1 shouldn't be too far away. --- Bug imported by patrick.ohly@gmx.de 2012-07-29 20:36 UTC --- This bug was previously known as _bug_ 6367 at Use of freedesktop.org services, including Bugzilla, is subject to our Code of Conduct. How we collect and use information is described in our Privacy Policy.
https://bugs.freedesktop.org/show_bug.cgi?id=52731
CC-MAIN-2021-49
refinedweb
484
63.76
[This blog was migrated. You will not be able to comment here. The new URL of this post is] Finally this weekend I got a time to upgrade one of my OpenUp submissions, Silverlight controls library to work with Silverlight 2.0 beta 2. It was very interesting to track changes between developer’s (beta 1) and production (beta 2) go-live licenses. Let’s try to understand what has need changed. - Syntax of DependencyProperty registration. Now instead of DependencyProperty.Register(name, propertyType,ownerType,propertyChangedCallback) you should use DependencyProperty.Register(name, propertyType,ownerType,typeMetadata), which, actually receives only one parameter in constructor – propertyChangedCallback. This make Silverlight to be closer to WPF syntax and open it for future enhancements. You can download updated Visual Studio 2008 snippet for creation of Silverlight Dependency Properties. - OnApplyTemplate method of UserControl become public instead of protected - Thumb DragDelta event argument is not DragEventArg anymore. Now it’s DragDeltaEventArgs. So there is no HorizontalOffset and VerticalOffset attributes. They replaced by HorizontalChange and VerticalChange - DefaultStyleKey is not null anymore - Most of controls migrated from System.Windows.Controls into System.Windows namespace - Some changed to ToolTip service - Now Silverlight checks whether TargetType property of Style is really compatible with the control, you’re applying style to (this not happened in beta 1). Also DependencyObject.SetValue() method checks it’s type. - There is no InitializeFromXaml anymore. Now Silverlight works more “WPF style” with application services – Application.LoadComponent() - You cannot use x:Name and Name property together (someone did it?) There are a ton of other changed, that was not related to Silverlight Control Library. For example, changed within Storyboard class, networking, cross-domain policy, other controls (e.g. DataGrid), Templates of some controls (e.g. Button, TextBox, etc) and API. Also I want to invite you to take part into development of Silverlight controls library not because of complimentary ticket to PDC ‘08 or Mobile Memory Mouse 8000, but because Open Source is not “one-men-show”. To get access to SVN, submit your work and begin development of next generation of Silverlight controls, contact me via CodePlex and I’ll add you to the project as new contributor. Thanks for snippets, work fine MS doesn't know what is "backwards support"! After the update my old projects don't run. Why don't write "[deprecated]" on old redundant class members like in Java? Maxim, it was not "production ready" it was beta with developers "go live". Thus developers can recompile and change code a bit
http://blogs.microsoft.co.il/tamir/2008/06/15/silverlight-controls-library-has-been-upgraded-to-beta-2/
CC-MAIN-2014-52
refinedweb
417
51.14
Data Available for 26 Countries Today, we added the first global equity data to Quantopian. Global equity pricing data and fundamentals for 26 countries are now accessible via the Pipeline API in Research. The table below lists the newly supported countries as well as the corresponding supported exchanges. EDIT: This table is out of date. Quantopian now supports exchanges in 44 countries. The complete list can be found in the Data Reference. Pricing and fundamentals data for each of these countries is available as far back as 2004. The pricing data is daily, including OHLCV bars. Global fundamentals are sourced from the recently announced FactSet Fundamentals data integration. All global equity data is accessed using a new Pipeline feature, domains, described below and in the attached notebook. International Pipelines & Domains Pipeline is a tool that allows you to define computations over a universe of assets and a period of time. In the past, you could only run pipelines on the US equity market. As of today, you can now specify a domain over which a pipeline should be computed. The name "domain" refers to the mathematical concept of the "domain of a function", which is the set of potential inputs to a function. In the context of Pipeline, the domain specifies the set of assets and a corresponding trading calendar over which the expressions of a pipeline should be computed. Currently, there are 26 domains available in the Pipeline API corresponding to the countries in the table above. The attached notebook explains pipeline domains in more detail. Example Usage The following pipeline returns the latest close price, volume, and market cap for all Canadian equities, every day. Make sure to run it in Research. from quantopian.pipeline import Pipeline from quantopian.pipeline.data import EquityPricing, factset from quantopian.pipeline.domain import CA_EQUITIES from quantopian.research import run_pipeline pipe = Pipeline( columns={ 'price': EquityPricing.close.latest, 'volume': EquityPricing.volume.latest, 'mcap': factset.Fundamentals.mkt_val.latest, }, domain=CA_EQUITIES, ) result = run_pipeline(pipe, '2015-01-15', '2016-01-15') result.head() To learn more about domains, see the attached notebook. If you're curious, you can read more about the design behind domains in this Zipline issue. Currency Conversions One of the challenges in working with international financial data is the fact that prices and price based fields can be denominated in different currencies. Even if you only want to research stocks listed on a single exchange, currencies can present a challenge. For example, a company that is based in the US (like Apple), reports their financials in USD, but lists shares on exchanges around the world. These listings are often denominated in the local currency of the exchange. Having pricing and fundamentals data denominated in different currencies makes it hard to make comparisons between the two datasets. To solve this problem, FactSet Fundamentals data is denominated in the listing currency of each asset. For example, if you query sales data for the US listing of Apple, you will get data denominated in USD. If you look up fundamentals for the Japanese listing of Apple (listed in Japanese Yen), you will get data from the same US report, but it will be converted to Japanese Yen. The specific conversion techniques are as follows: Income Statement and Cash Flow Statement Data from the income statement and cash flow statement of a fundamentals report is converted using the average exchange rate over the fiscal period in question. Balance Sheet Data from the balance sheet of a report is converted using the exchange rate from the end of the fiscal period of the report (i.e. the end of Q1 for a Q1 quarterly report). Other Notes Contest Eligibility Unfortunately, international markets are not yet supported in the backtester, and by extension, they are not yet supported in the contest. We don’t currently have a timeline for a contest that supports international markets. We will provide an update to the community when we have more information to share. Alphalens Alphalens is currently the best tool for analyzing a factor on an international market. To date, we have suggested that you use get_pricing to get daily pricing data and supply it to Alphalens. International pricing data isn't yet available in get_pricing, which motivated us to revisit this suggestion. After poking around a bit, we put together a wrapper function that makes it easy to write a pipeline factor and run it through alphalens without having to write all of the get_pricing boilerplate code. The notebook attached in the comment below includes the wrapper function evaluate_factor. You can use evaluate_factor to evaluate a factor on any domain. Holdouts As described in this post, the most recent year of FactSet Fundamentals data is held out from the community platform, which applies to both US and international data. The same holdout applies to the daily OHLCV data for non-US markets. This means that all research on international factors will need to be conducted on data that is more than a year old. We are hopeful that the holdout will actually help to reduce the risk of overfitting. Documentation The notebook in this post is the best reference material on domains at this time. We plan to add documentation and other learning material around domains in the near term. Multi-Currency Markets Another challenge related to currencies is the fact that some exchanges don't require stocks to be listed in local currency. For example, the London Stock Exchange only has about 75% of its listings denominated in GBP*. The other 25% are primarily listed in EUR or USD. This can make it hard to make cross sectional comparisons. To solve this problem, most people rely on currency conversions to bring price-based fields into the same currency. Currently, there's no way to convert pricing and fundamentals data, so everything is denominated in the listing currency (the default). In addition, the listing currency information isn't yet accessible, so you don't have an easy way to determine the currency of a particular data point. Our long-term plan is to solve this problem and make it easy to conduct analyses across assets listed in different currencies. In the meantime, currency-dependent fundamentals fields are not yet available for assets that are listed in a non-primary currency of their exchange. Pricing data for non-primary currency assets is available, but you should use returns instead of raw prices to build a factor on a non-US domain (probably a good idea anyway!). Working in returns space instead of pricing makes it reasonable to make comparisons across assets listed in different currencies. *Great Britain, Switzerland, and Ireland all have a significant number of assets listed in non-local currency. Other markets have the vast majority, if not all of their listings in local currency. Combining Domains Pipelines can now be defined with new domains, but the idea of 'combining' domains is not yet supported. When working with international equity data, it is a common use case to develop a factor at a regional level (for example, Europe) instead of a country level. We plan to support multi-country domains, but don't yet have a timeline on the delivery of this feature. Known Bugs (We will make an effort to update this list as we discover and fix bugs) - Fixed: Fundamentals data for Sweden, Denmark, and Ireland are not yet available. We expect these to be available soon.* (Sweden and Denmark fundamentals data is now available) - Fixed: Fundamentals data for Ireland is not yet available. - Starting a non-US pipeline on a non-trading day (in the local market) raises an exception. Try building a factor on a new domain and let us know what you think. As always, if you discover any issues that aren't listed above or if you have any questions about the new data, please let us know!
https://www.quantopian.com/posts/global-equity-pricing-and-fundamental-data
CC-MAIN-2020-29
refinedweb
1,314
54.52
_______________________________________________________________________ I believe we should begin building the project using this library and perhaps being part of the GSL project: and with lots of info on wrappers, also docs here: One of the more important functions needed in orbital mechanis is the ability to numerically integrate non-linear differential equations and GSL has the functions to do this. Here\'s sample code from the docs which can easily be modified to numerically integrate the equations of motion of satellites: Your feedback to the list would be appreciated. --- #include <stdio.h> #include <gsl/gsl_errno.h> #include <gsl/gsl_matrix.h> #include <gsl/gsl_odeiv.h> int func (double t, const double y[], double f[], void *params) { double mu = *(double *)params; f[0] = y[1]; f[1] = -y[0] - mu*y[1]*(y[0]*y[0] - 1); return GSL_SUCCESS; } int jac (double t, const double y[], double *dfdy, double dfdt[], void *params) {; } Quoting Myles Standish <ems@...>: > Haisam K. Ido > > May 12m 2003 > > > You have asked, \"Anyone in JPL interested in joining this Open Source > project?\". I would strongly doubt it. We are busy with our own > projects. Perhaps you would be interested in the following website: > > > > And, incidentally, when I looked at your website, >, I found it empty. True. It is only in its planning stages. Thank you for the link. I\'ll add the link above. > > Myles Standish > > ********************************************************* > * Dr E Myles Standish; JPL 301-150; Pasadena, CA 91109 * > * TEL: 818-354-3959 FAX: 818-393-6388 * > * Internet: ems@... [128.149.23.23] * > ********************************************************* > > > Thank you Mr. Standish. Anyone in JPL interested in joining this Open Source project? Quoting Myles Standish <ems@...>: > Haisam K. Ido > > May 12m 2003 > > Thank you for asking about the JPL ephemerides. > > You may use the JPL Planetary and Lunar Ephemerides, and you may put > them on the server,. > > However, you must not change their name, and you must not change their > format. I.e., they must be referred to as, e.g., \"JPL Planetary and > Lunar Ephemerides DE405\" the first time they are mentioned and then > simply \"DE405\" thereafter. Also, they must remain in the original > form of chebychev polynomials. > > > > Myles Standish > > ********************************************************* > * Dr E Myles Standish; JPL 301-150; Pasadena, CA 91109 * > * TEL: 818-354-3959 FAX: 818-393-6388 * > * Internet: ems@... [128.149.23.23] * > ********************************************************* > Paul: Did you get my posting below? It doesn't appear on the archive page! Haisam Ido wrote: > > > > ------------------------------------------------------- > Enterprise Linux Forum Conference & Expo, June 4-6, 2003, Santa Clara > The only event dedicated to issues related to Linux enterprise solutions > > > _______________________________________________ > Ossfd-general mailing list > Ossfd-general@... > In a message dated 5/1/2003 7:20:38 PM Eastern Standard Time, haisam@... writes: > On the list let's discuss our backgrounds. CTO TransOrbital, Inc. <>, constructing a commercial lunar spacecraft. 20 years systems integration and electronics design, including both hardware and software design. Lots of engineering type software written in C, C++, FORTRAN, and National Instruments LabView. My main background is in machine vision systems and interface design, I make no claims to being a guru in control systems. But I can talk the language and code just about anything. Paul Blase
https://sourceforge.net/p/ossfd/mailman/ossfd-general/
CC-MAIN-2017-43
refinedweb
517
66.33
When you create an instance of a class with the new operator, memory gets allocated on the heap. When you create an instance of a struct with the new operator where does the memory get allocated, on the heap or on the stack ? C# – Does using “new” on a struct allocate it on the heap or stack .netc++memory-management When you create an instance of a class with the Related Question - C# – How to create a new object instance from a Type - What and where are the stack and heap - C# – Should ‘using’ directives be inside or outside the namespace - C# – When should I use a struct rather than a class in C# - C# – Proper use of the IDisposable interface - C++ – Does delete on a pointer to a subclass call the base class destructor - C# – Memory allocation: Stack vs Heap - C++ – C++ programmers minimize use of ‘new’ - C# – Why not inherit from List Best Solution Okay, let's see if I can make this any clearer. Firstly, Ash is right: the question is not about where value type variables are allocated. That's a different question - and one to which the answer isn't just "on the stack". It's more complicated than that (and made even more complicated by C# 2). I have an article on the topic and will expand on it if requested, but let's deal with just the newoperator. Secondly, all of this really depends on what level you're talking about. I'm looking at what the compiler does with the source code, in terms of the IL it creates. It's more than possible that the JIT compiler will do clever things in terms of optimising away quite a lot of "logical" allocation. Thirdly, I'm ignoring generics, mostly because I don't actually know the answer, and partly because it would complicate things too much. Finally, all of this is just with the current implementation. The C# spec doesn't specify much of this - it's effectively an implementation detail. There are those who believe that managed code developers really shouldn't care. I'm not sure I'd go that far, but it's worth imagining a world where in fact all local variables live on the heap - which would still conform with the spec. There are two different situations with the newoperator on value types: you can either call a parameterless constructor (e.g. new Guid()) or a parameterful constructor (e.g. new Guid(someString)). These generate significantly different IL. To understand why, you need to compare the C# and CLI specs: according to C#, all value types have a parameterless constructor. According to the CLI spec, no value types have parameterless constructors. (Fetch the constructors of a value type with reflection some time - you won't find a parameterless one.) It makes sense for C# to treat the "initialize a value with zeroes" as a constructor, because it keeps the language consistent - you can think of new(...)as always calling a constructor. It makes sense for the CLI to think of it differently, as there's no real code to call - and certainly no type-specific code. It also makes a difference what you're going to do with the value after you've initialized it. The IL used for is different to the IL used for: In addition, if the value is used as an intermediate value, e.g. an argument to a method call, things are slightly different again. To show all these differences, here's a short test program. It doesn't show the difference between static variables and instance variables: the IL would differ between stfldand stsfld, but that's all. Here's the IL for the class, excluding irrelevant bits (such as nops): As you can see, there are lots of different instructions used for calling the constructor: newobj: Allocates the value on the stack, calls a parameterised constructor. Used for intermediate values, e.g. for assignment to a field or use as a method argument. call instance: Uses an already-allocated storage location (whether on the stack or not). This is used in the code above for assigning to a local variable. If the same local variable is assigned a value several times using several newcalls, it just initializes the data over the top of the old value - it doesn't allocate more stack space each time. initobj: Uses an already-allocated storage location and just wipes the data. This is used for all our parameterless constructor calls, including those which assign to a local variable. For the method call, an intermediate local variable is effectively introduced, and its value wiped by initobj. I hope this shows how complicated the topic is, while shining a bit of light on it at the same time. In some conceptual senses, every call to newallocates space on the stack - but as we've seen, that isn't what really happens even at the IL level. I'd like to highlight one particular case. Take this method: That "logically" has 4 stack allocations - one for the variable, and one for each of the three newcalls - but in fact (for that specific code) the stack is only allocated once, and then the same storage location is reused. EDIT: Just to be clear, this is only true in some cases... in particular, the value of guidwon't be visible if the Guidconstructor throws an exception, which is why the C# compiler is able to reuse the same stack slot. See Eric Lippert's blog post on value type construction for more details and a case where it doesn't apply. I've learned a lot in writing this answer - please ask for clarification if any of it is unclear!
https://itecnote.com/tecnote/c-does-using-new-on-a-struct-allocate-it-on-the-heap-or-stack/
CC-MAIN-2022-40
refinedweb
961
60.14
Two frontends to gs to concatenate pdfs and excise pages therefrom, respectively. #!/bin/bash ## # catpdf -- concatenate pdfs together # # usage -- catpdf INFILES OUTFILE # # notes -- requires ghostscript and userbool.sh # # written -- 6 June, 2011 by Egan McComb # # revised -- 19 December, 2011 by author ## usage() { echo "Usage: $(basename $0) INFILES OUTFILE" >&2 } writepdf() { command gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite "$@" } chkargs() { if (( $# < 3 )) then echo "Error: Too few arguments" >&2 usage exit $ERR_NARGS elif [[ -f "${!#}" ]] then echo "Warning: Output file exists" >&2 echo -n "Continue? [y/N] " >&2 read response if ! userbool.sh $response then echo "Aborting..." >&2 exit 1 else echo "Continuing..." >&2 fi fi for file in "${@:1:$(($#-1))}" do if [[ ! -e "$file" ]] then echo "Error: Invalid file '$file'" >&2 exit $ERR_VARGS fi done } ##----MAIN----## chkargs "$@" writepdf -&2; exit 1; } exit 0 #!/bin/bash ## # excpdf -- remove pages from pdfs with ghostscript # # usage -- excpdf PAGERANGE INFILE OUTFILE # -PAGERANGE is given with ranges # e.g. 3-5:7:9-15 keeps those pages # -Pages must be in numerical order # # notes -- requires catpdf # # written -- 19 December, 2011 by Egan McComb # # revised -- ## usage() { echo "Usage: $(basename $0) PAGERANGE INFILE OUTFILE" >&2 echo -e "\t-PAGERANGE is given with ranges" >&2 echo -e "\t e.g. 3-5:7:9-15 keeps those pages" >&2 echo -e "\t-Pages must be in numerical order" >&2 } trim() { tr ":\-," "\n" } writepdf() { gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -dFirstPage=${subrange[0]} -dLastPage=${subrange[-1]} "$@" } chkargs() { if (( $# != 3 )) then echo "Error: Wrong number of arguments" >&2 usage exit $ERR_NARGS fi chkfile "$2" &&&2 usage exit $ERR_VARGS elif ! trim <<< $1 | sort -nC || [[ ! -z "$(trim <<< $1 | uniq -d)" ]] then echo "Error: Invalid page range collation" >&2 usage exit $ERR_VARGS fi } chkfile() { if [[ ! -f "$1" ]] || ! grep -q "PDF" <<< $(file "$1") then echo "Error: Invalid input file '$1'" >&2 exit $ERR_VARGS fi } range() {&2 usage exit $ERR_VARGS fi&2; exit 1; } done catpdf ${tfiles[@]} "$out" rm ${tfiles[@]} } ##----MAIN----## chkargs "$@" range $1 exit 0 #!:45:02) Offline Hey all, Recently, I worked on a project that required that I convert Microsoft word files (openXML .docx) to another usable format (i.e. html). Although OpenOffice can do this quite well, it did not work well in my particular situation. Instead, I decided to write my own script to take care of the conversion. I thought I would share this in case anyone else ever decides they need it! Written in python, the script takes 2 arguments: (1) input path and (2) output path. The script depends the lxml and zipfile modules and is inspired partially by the docx module. Currently, the script dumps text to html, maintaining only vertical spacing attributes (i.e. linebreaks). I may add additional features later Here it is: import zipfile from lxml import etree import sys if(len(sys.argv) < 3): print("Usage: python [INPUT PATH.docx] [OUTPUT PATH.html]") else: fp_in = sys.argv[1] fp_out = sys.argv[2] # A .docx file is really just a zip file -- load an unpack it docx = zipfile.ZipFile(fp_in) xml = docx.read('word/document.xml') # pass the raw xml content to lxml for parsing xml_string = xml.encode('utf-8') document = etree.fromstring(xml_string) html_out = '<HTML>\n<HEAD></HEAD>\n<TITLE></TITLE>\n<BODY>\n' tag = '' # dump the document text to an html file, preserving basing formatting for element in document.iter(): # grab text and linebreaks tag = element.tag if(tag[ tag.find('}')+1 : len(tag) ] == 't'): html_out += element.text elif(tag[ tag.find('}')+1 : len(tag) ] == 'br'): html_out += '<br>\n' html_out += '</BODY>\n</HTML>' fout = open(fp_out, 'w') fout.write(html_out) fout.close() I wrote this pretty quickly so only minimal error checking is done. Any feedback or modifications are appreciated !! Offline Nice tool. Do you want me to merge into Post your handy self made command line utilities move to Community Contributions ? If you go with the latter, you may want to make a PKGBUILD and uplod to AUR. aur S & M :: forum rules :: Community Ethos Resources for Women, POC, LGBT*, and allies Offline Nice tool. Do you want me to merge into Post your handy self made command line utilities move to Community Contributions ? If you go with the latter, you may want to make a PKGBUILD and uplod to AUR. Oh I did not know about the command line utilities thread! That would be the most logical thread with which to merge it I think. Offline [...That would be the most logical thread with which to merge it I think. Here's (yet another) script for rsync-centric snapshot backups. It started life as a simple way of keeping track of my 3.5TB+ of data, it's now used in various different configs on all my computers and for my father's work servers, 100% Mac compatible. #!/bin/bash ### ~/bash/Arch-backup.sh - Backup script inspired by Apple TimeMachine. ### 19:48 Dec 4, 2011 ### Chris Cummins # ----------------------------------------------------------------------- # CONFIG # ----------------------------------------------------------------------- # LOCATION: Location to backup. [/filepath] # SNAPSHOTS_DIR: Backup directory. [/filepath] # SNAPSHOTS_LIMIT: Number of snapshots to store. # EXCLUDE_LOC: Location of rsync excludes file. # USER: Your username # FILE_MGR: File manager to open directory. # LOCATION="/" BACKUP="/mnt/data/backups" SNAPSHOTS_LIMIT=4 EXCLUDE_LOC="/mnt/data/backups/Exclude List" USER=ellis FILE_MGR=thunar ## rsync performance tuning. # VERBOSE: Set to "yes" for verbose output. # PROGRESS: Set to "yes" for rsync progress. # LOW_PRIORITY: Run the process with low priority [yes/no]. # VERBOSE="no" PROGRESS="no" LOW_PRIORITY="no" ## File handling. # ID_CODE: Sets the snapshot ID as found in 'by-id/' # ID_CODE: Sets the snapshot ID as found in 'by-date/' # ID_CODE="%Y-%m-%d_%H%M%S" DATE_CODE="%a %d %b, %T" ## Exit codes. # EXIT_NOROOT: No root priveledges. # EXIT_NODIR: Unable to create required directory. # EXIT_NOEXEC: Specified excludes list missing. # EXIT_RSYNC: rsync transfer failed. # EXIT_EXISTING: Backup with identical tag already exists. # EXIT_NOROOT=87 EXIT_NODIR=88 EXIT_NOEXEC=89 EXIT_EXISTING=90 EXIT_RSYNC=5 # ----------------------------------------------------------------------- # STAGE 1 # ----------------------------------------------------------------------- # Performs program admin, sets up directories, sets variables. # if [ "$UID" != 0 ] then echo "Arch-backup: [Stage 1] Must be ran as root!" exit $EXIT_NOROOT fi if [ ! -f "$EXCLUDE_LOC" ] then echo "Arch-backup: [Stage 1] Excludes list missing!" echo " '$EXCLUDE_LOC'" exit $EXIT_NOEXEC else echo "Arch-backup: [Stage 1] Using exclude list '$EXCLUDE_LOC'" RSYNC_EXC="--exclude-from=$EXCLUDE_LOC" fi if [ ! -d $BACKUP ] then echo "Arch-backup: [Stage 1] Creating directory:" echo " '$BACKUP'" mkdir -p $BACKUP if (( $? )) then echo "Arch-backup: [Stage 1] Unable to make required directory!" exit $EXIT_NODIR fi fi if [ ! -d "$BACKUP/by-id" ] then echo "Arch-backup: [Stage 1] Creating directory:" echo " '$BACKUP/by-id'" mkdir $BACKUP/by-id if (( $? )) then echo "Arch-backup: [Stage 1] Unable to make required directory!" exit $EXIT_NODIR fi fi if [ ! -d "$BACKUP/by-date" ] then echo "Arch-backup: [Stage 1] Creating directory:" echo " '$BACKUP/by-date'" mkdir $BACKUP/by-date if (( $? )) then echo "Arch-backup: [Stage 1] Unable to make required directory!" exit $EXIT_NODIR fi fi if [ -f "$BACKUP/by-id/.DS_Store" ] then echo "Arch-backup: [Stage 1] Removing Desktop Services Store..." rm "$BACKUP/by-id/.DS-Store" fi # BY_ID: Snapshot directory for by-id/ # BY_DATE: Snapshot directory for by-date/ # NO_OF_SNAPSHOTS: Current number of snapshots in by-id/ # Based on item count of by-id/ # OLDEST_SNAPSHOT: Oldest item in by-id/ by Modified time. # NEWEST_SNAPSHOT: Newest item in by-id/ by Modified time. # BY_ID=$(date +"$ID_CODE") BY_DATE=$(date +"$DATE_CODE") NO_OF_SNAPSHOTS=$(ls -1 "$BACKUP/by-id" | wc -l) OLDEST_SNAPSHOT=$(ls -t "$BACKUP/by-id" | tail -1) NEWEST_SNAPSHOT=$(ls -t1 "$BACKUP/by-id" | head -n1) echo "Arch-backup: Number of backups [ $NO_OF_SNAPSHOTS / $SNAPSHOTS_LIMIT ]" if [ -d "$BACKUP/by-id/$BY_ID" ] then echo "Arch-backup: [Stage 1] Directory with ID already exists!" echo " '$BACKUP/by-id/$BY_ID'" exit $EXIT_EXISTING fi if [ $NO_OF_SNAPSHOTS -gt $SNAPSHOTS_LIMIT \ -o $NO_OF_SNAPSHOTS -eq $SNAPSHOTS_LIMIT ] then echo "Arch-backup: [Stage 1] Snapshot Limit ($SNAPSHOTS_LIMIT) reached, removing:" echo " '$OLDEST_SNAPSHOT'" rm -rf "$BACKUP/by-id/$OLDEST_SNAPSHOT" echo "Arch-backup: [Stage 1] Removing broken symlinks..." find -L "$BACKUP/by-date" -type l -exec rm {} + 2>/dev/null fi if [ -d "$BACKUP/by-id/$NEWEST_SNAPSHOT" ] then echo "Arch-backup: [Stage 1] Using link destination:" echo " '$NEWEST_SNAPSHOT'" RSYNC_LINK="--link-dest=$BACKUP/by-id/$NEWEST_SNAPSHOT" fi echo "Arch-backup: Stage 1 complete, moving onto Stage 2..." # ----------------------------------------------------------------------- # STAGE 2 # ----------------------------------------------------------------------- # rsync of location with newest snapshot. # if [ $VERBOSE == "yes" ] then echo "Arch-backup: [Stage 2] Setting rsync '-v' flag..." RSYNC_V="-v" fi if [ $PROGRESS == "yes" ] then echo "Arch-backup: [Stage 2] Setting rsync '--progress' flag..." RSYNC_P="--progress" fi if [ $LOW_PRIORITY == "yes" ] then echo "Arch-backup: [Stage 2] Setting low program priority..." ionice -c 3 -p $$ renice +12 -p $$ fi echo "Arch-backup: [Stage 2] Beginning rsync of '$LOCATION'..." time rsync \ --delete \ --delete-excluded \ --archive \ --human-readable \ $RSYNC_V \ $RSYNC_P \ "$RSYNC_EXC" \ "$RSYNC_LINK" \ "$LOCATION/" "$BACKUP/In Progress..." if (( $? )) then echo "Arch-backup: [Stage 2] rsync failed!" #Cleanup failed attempt? exit $EXIT_RSYNC fi echo "Arch-backup: Stage 2 complete, moving onto Stage 3..." # ----------------------------------------------------------------------- # STAGE 3 # ----------------------------------------------------------------------- # Clean up new snapshot. # echo "Arch-backup: [Stage 3] Assigning backup ID..." mv "$BACKUP/In Progress..." "$BACKUP/by-id/$BY_ID" echo "Arch-backup: [Stage 3] Touching snapshot..." touch "$BACKUP/by-id/$BY_ID" echo "Arch-backup: [Stage 3] Creating date symlink..." ln -s "$BACKUP/by-id/$BY_ID" "$BACKUP/by-date/$BY_DATE" echo "Arch-backup: [Stage 3] Creating 'Most Recent' symlink..." rm "$BACKUP/Most Recent Backup" ln -s "$BACKUP/by-id/$BY_ID" "$BACKUP/Most Recent Backup" echo "Arch-backup: Backup complete: '$BACKUP/by-id/$BY_ID'" cd "$BACKUP/by-date/$BY_DATE" su $USER -c "$FILE_MGR" & exit 0 Typical / backup excludes file: ### /mnt/data/backups/Exclude List - rsync exclude list for filesystem backups. ### 19:46 Dec 26, 2011 ### Chris Cummins # ----------------------------------------------------------------------- # INCLUDES # ----------------------------------------------------------------------- # + /dev/console + /dev/initctl + /dev/null + /dev/zero # ----------------------------------------------------------------------- # EXCLUDES # ----------------------------------------------------------------------- # Files and directories to exclude from backups. # # Backup point. - /mnt/data/* # System directories. - /dev/* - /proc/* - /sys/* - /tmp/* - lost+found/ - /var/lib/pacman/sync/* # Removeable devices. - /media/* # Config files, virtual filesystems, caches etc. - /home/*/.gvfs - /home/*/.mozilla - /home/*/.netbeans # User files. - /home/*/Desktop - /home/*/Downloads - /home/*/Dropbox - /home/*/Music - /home/*/Pictures - /home/*/Video Sample output: Arch-backup: [Stage 1] Using exclude list '/mnt/data/backups/Exclude List' Arch-backup: Number of backups [ 4 / 4 ] Arch-backup: [Stage 1] Snapshot Limit (4) reached, removing: '2011-12-27_144804' Arch-backup: [Stage 1] Removing broken symlinks... Arch-backup: [Stage 1] Using link destination: '2011-12-27_144849' Arch-backup: Stage 1 complete, moving onto Stage 2... Arch-backup: [Stage 2] Beginning rsync of '/home/ellis/backup'... real 0m0.056s user 0m0.007s sys 0m0.003s Arch-backup: Stage 2 complete, moving onto Stage 3... Arch-backup: [Stage 3] Assigning backup ID... Arch-backup: [Stage 3] Touching snapshot... Arch-backup: [Stage 3] Creating date symlink... Arch-backup: [Stage 3] Creating 'Most Recent' symlink... Arch-backup: Backup complete: '/mnt/data/backups/by-id/2011-12-27_144850' Regards "Paradoxically, learning stuff is information-society enemy number one" Offline Most of my life is spent in terminal apps (mutt, vim, R, and bash of course). I was a bit tired of having a dozen identical and unhelpful icons in my app-switcher and tint2 panel. I'm sure there are other ways of achieving what I did below, but the simplicity and flexibility of this handful of bashrc lines have worked wonderfully for me. Note that this requires xseticon from the AUR, and as written assumes the relevant image files are in /usr/share/pixmaps. Mine were just downloaded from a google image search. ##bashrc excerpts # set arch icon as default for terminal xseticon -id "$WINDOWID" /usr/share/pixmaps/arch.png # update the window title to $PWD while under bash PROMPT_COMMAND='echo -e "\033]0;$PWD\007"' # window naming function wname() { echo -en "\033]0;$@\007"; } # aliases to set icons - the real fun: alias mutt='xseticon -id "$WINDOWID" /usr/share/pixmaps/mutt.png; wname mutt; mutt; xseticon -id "$WINDOWID" /usr/share/pixmaps/arch.png' alias r='xseticon -id "$WINDOWID" /usr/share/pixmaps/r.png; wname R; R --quiet; xseticon -id "$WINDOWID" /usr/share/pixmaps/arch.png' icon_vim() { xseticon -id "$WINDOWID" /usr/share/pixmaps/vim.png wname "vim $@"; vim "$@" xseticon -id "$WINDOWID" /usr/share/pixmaps/arch.png } alias vim='icon_vim ' Thanks to the writers/maintainers of xseticon. It's quite handy. Last edited by Trilby (2011-12-28 21:52:02) "UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" - Richard Stallman Offline Here is the first bash script that I wrote and that I still use daily, I call it "actiontime". I did not know about "cron" or the "at" command at the time I wrote it. #!/bin/bash while [ "$(date +%R)" != "$1" ]; do sleep 1 done usage: actiontime HH:MM && next_script (&) Last edited by xr4y (2012-01-02 21:25:04). … 7#p1026767 Offline karol, thanks for pointing to the nice awk script by falconindy. The advantage of the line above is that it's only one line, and it doesn't create a temporary file, but the disadvantage is that setconf might not be installed already, while awk usually ("always") is. Offline You may want to add your script to the wiki: … omatically setconf PKGBUILD $(makepkg -g 2>/dev/null | pee "head -1 | cut -d= -f1" "cut -d= -f2") ')' seems to give the same output with just one makepkg run, but: 1. It adds a requirement for 'moreutils' 2. It doesn't work with multiple checksums as setconf expects them surrounded in quotes :-( think the meld piece of this came from someone else on these forums, but I can't find where. Thank you, anonymous Arch'er! #!/bin/bash #System update script #uses pacaur to perform pacman and AUR system upgrades #then searches for .pac* files and opens them with meld #system upgrade pacaur -Syu # search for *.pac* files in /etc echo -n ":: Searching for *.pacnew and *.pacsave files..." countnew=$(sudo find /etc -type f -name "*.pacnew" | wc -l ) countsave=$(sudo find /etc -type f -name "*.pacsave" | wc -l ) count=$((countnew+countsave )) echo "$count file(s) found." # if files are found, merge *.pacnew and *.pacsave files with original configurations using meld if [ $count -gt 0 ] ; then pacnew=$(sudo find /etc -type f -name "*.pac*") echo ":: Merging $countnew *.pacnew and $countsave *.pacsave file(s)..." for config in $pacnew; do # Merge with meld gksudo meld ${config%\.*} $config >/dev/null 2>&1 & wait done #interactively delete *.pacnew and *.pacsave files echo ":: Removing files ... " sudo rm -i $pacnew fi echo ":: System upgrade complete." Last edited by rsking84 (2012-01-10 02:48:06) Offline karol, added the script to the wiki, thanks for pointing out that page. Hopefully multiple arrays with checksums will work with setconf in the future (not that it's that common). While I'm at it, here's a python2 script for finding libraries that have multiple definitions in header files by searching through /usr/include with ctags (takes some time to run). Don't know if it's useful or not yet, but here goes: #!/usr/bin/python2 # -*- coding: utf-8 -*- import os # filename -> package name packagecache = {} # package name + definition -> counter definitioncount = {} # package name -> (filename, list of duplicate definitions) dupedefs = {} def pkg(filename): if not filename in packagecache: print("Examining " + filename + "...") packagecache[filename] = os.popen3("pacman -Qo -q " + filename)[1].read().strip() return packagecache[filename] def main(): # Gather all function definitions in /usr/include data = os.popen3("ctags --sort=foldcase -o- -R /usr/include | grep -P '\tf\t'")[1].read() # Find the definitions and count duplicate ones for line in data.split("\n")[:-1]: fields = line.split("\t") name, filename = fields[:2] definition = line.split("/^")[1].split("$/;\"")[0].strip() if definition.endswith(","): # Skip the partial definitions continue id = pkg(filename) + ";" + definition if id not in definitioncount: definitioncount[id] = 0 else: definitioncount[id] += 1 # Gather all the duplicate definitions for id, count in definitioncount.items(): pkgname, definition = id.split(";", 1) if count > 1: if not pkgname in dupedefs: dupedefs[pkgname] = [definition] else: dupedefs[pkgname] += [definition] # Output the duplicate definitions per package for pkgname, deflist in dupedefs.items(): print("Duplicate definitions in %s:" % (pkgname)) for definition in deflist: print("\t" + definition) main() I recommend piping the output to a file, as it's quite abundant. Last edited by trontonic (2012-01-11 17:38:27) Offline Here is a script I wrote that connects to btjunkie.org, extracts the latest video uploads, from most seeded to least, then outputs a readable list with a tinyurl to the torrent. This script also saves the results in btjunkie.txt #!/usr/bin/python # Script Name: btjunkie.py # Script Author: Lucian Adamson # Script License: GPL v3 # Web Site: / # Written in: Python 3.2.2 # 2012-01-04, Lucian Adamson # Version 1.0: Finished, tho there are prob. a few bugs. Will update when # bugs are discovered and/or fixed. Also, I could probably implement # better error control and possibly add command line input. # CRUDE BEGINNING: Write a script to connect to btjunkie.org and return # the latest movie additions import re, urllib.request'+matcher+'</a></th>', text) return tuples def write_data(data): f = open(WRITE_FILE, 'w') f.write(data) f.close() def extract_formatted_info(): newHold={} names=extract_names() print("Parsing data and creating short urls. Depending on TinyURL, speeds will vary.") msg="New torrents on BTJUNKIE:\n\n" count=1 for x in names: (tmp1, tmp2) = x msg+=str(count) + ": " + str(tmp2) + "\n D/L:" + shrink_url("" + tmp1 + "/download.torrent") + "\n" count+=1 write_data(msg) return msg def main(): print(extract_formatted_info()) print("Data was saved to \"" + WRITE_FILE + "\" in current working directory") if __name__ == '__main__': main() like it. I replaced my "alias upgrade='sudo pacman -Syu && pacaur -u'" with it and modified it a bit. My version is included below. What I changed: (1) use the same formatting as the color version of pacaur and pacman to display messages. (2) run find only once instead of three times. (3) only use pacaur to update the AUR, dirrectly call pacman where possible #!/bin/bash #System update script #uses pacman and pacaur to upgrade all packages then searches #for .pacnew and pacsave files and opens them with meld #define colors reset="\e[0m" colorB="\e[1;34m" colorW="\e[1;39m" #custom echo function status () { echo -e "${colorB}:: ${colorW}$1${reset}"; } #system upgrade status "Starting repository upgrade" sudo pacman-color -Syu #AUR upgrade pacaur -u # search for *.pac* files in /etc status "Searching for *.pacnew and *.pacsave files in /etc..." files=$(sudo find /etc -iname "*.pacnew" -or -iname "*.pacsave") count=$(echo $files | wc -w) # if files are found, merge them with the original configurations using meld if [ $count -gt 0 ] ; then status "Merging $count configuration file(s)..." for config in $files; do # Merge with meld gksudo meld ${config%\.*} $config >/dev/null 2>&1 & wait done #interactively delete *.pacnew and *.pacsave files status "Removing files... " sudo rm -i $files else status "No configuration files found" fi status "System upgrade complete." Offline This is a script I wrote convert mp3 to m4r and transfer the converted files over to my jailbroken iPhone. #!/bin/bash # Script Name: iTone.sh # Script Author: Lucian Adamson # Script License: None, do as you please # Website: # Blog: # Description: A script that will convert 1 or more mp3 files to m4r format. # Additionally, this script will also transfer the new m4r files to your # jailbroken iPhone if you so wish. [[ ! $(which faac 2> /dev/null) ]] && echo "$(basename $0): Requires package \"faac\" installed" && exit 1 [[ ! $(which mplayer 2> /dev/null) ]] && echo "$(basename $0): Requires package \"mplayer\" installed" && exit 1 REMOVE=0 VERBOSE=0 TRANSFER=0 HOSTNAME='' PORT=22& 2 ;; esac done shift $((OPTIND-1)) for each in "$@"; do filename=$(basename $each) extension=${filename##*.} filename=${filename%.*} if [ $VERBOSE == 1 ]; then mplayer -vo null -vc null -ao pcm:fast:file=$filename.wav $each faac -b 128 -c 44100 -w $filename.wav else mplayer -vo null -vc null -ao pcm:fast:file=$filename.wav $each &> /dev/null faac -b 128 -c 44100 -w $filename.wav &> /dev/null fi mv -i $filename.m4a $filename.m4r [[ $REMOVE == 1 ]] && rm -rf "$each" && rm -rf "$filename.wav" done if [ $TRANSFER == 1 ]; then scp -P $PORT *.m4r $HOSTNAME:/Library/Ringtones/ fi Offline [[ ! $(which faac 2> /dev/null) ]] && echo ".. could be changed to: command -v faac &>/dev/null || echo "... Also be sure to quote your vars; especially filenames." Last edited by bohoomil (2012-01-18 05:29:20) :: Registered Linux User No. 223384 :: github :: infinality-bundle+fonts: good looking fonts made easy" # curl ifconfig.me does the same. Last edited by jordi (2012-01-20 18:58:10) Offline Well, you don't get a pretty colored country code. Offline # curl ifconfig.me does the same. So?... :: Registered Linux User No. 223384 :: github :: infinality-bundle+fonts: good looking fonts made easy Offline Just in case so didnt know this site: A nice one is this (wikipedia over dns): function whats() { dig +short txt $1.wp.dg.cx } Example: whats archlinux Answer: ..." systemd is like pacman. enjoys eating up stuff. Offline Apropos for the new pacman release, find out how much of a repo is signed: #!/bin/bash # # query a repo by name to see number of packages signed # IFS=$'\n' read -rd '' -a pkgs < <(pacman -Sql "$1") (( ${#pkgs[*]} )) || exit expac -S '%g' "${pkgs[@]/#/$1/}" | awk ' BEGIN { yes = no = 0 } { $1 == "(null)" ? no++ : yes++ } END { printf "Yes: %s [%.f%%]\n", yes, (yes / (yes + no) * 100) printf "No: %s\n", no }' Last edited by falconindy (2012-01-21 21:22:37) Offline
https://bbs.archlinux.org/viewtopic.php?pid=1044506
CC-MAIN-2018-26
refinedweb
3,500
58.99
This example demonstrates how to remove all formatting from a cell or range of cells. You can do this in one of the following ways. Apply the Normal style to a cell or range of cells via the Range.Style property. The Normal style object can be accessed from the Workbook.Styles collection by the style name (Normal) or index (by default, this style is added as the first in the collection of styles and cannot be deleted), or via the StyleCollection.DefaultStyle property. using DevExpress.Spreadsheet; // ... IWorkbook workbook = spreadsheetControl1.Document; Worksheet; Imports DevExpress.Spreadsheet ' ... Dim workbook As IWorkbook = spreadsheetControl1.Document Dim worksheet As
https://documentation.devexpress.com/WindowsForms/15438/Controls-and-Libraries/Spreadsheet/Examples/Formatting/How-to-Clear-Cell-Formatting
CC-MAIN-2017-39
refinedweb
104
60.51
Jay Hickerson Microsoft Corporation September 2005 Summary: Visual Studio 2005 expands typed data access with TableAdapter, a new object that greatly simplifies interacting with data objects on the client machine and sending updates to a database. (19 printed pages) Introduction Overview Generating TableAdapters The TableAdapter Code Creating TableAdapter Instances DataTable Columns Null Support Database Direct Methods Fill and GetData Methods Multiple Query Support The Queries TableAdapter More on Scalar Queries Updating the Database DataObjectAttribute Connection Strings Customizability and Extensibility A Quick Reference of TableAdapter Properties Conclusion In Visual Studio 2003, developers had a number of data-access components. However, using these components together was often tedious and didn't promote good object reuse. In Visual Studio 2005, we have expanded typed data access with a new object called a TableAdapter. With TableAdapters, the experience of interacting with data objects on the client machine and sending updates to a database is greatly simplified. TableAdapters encapsulate the objects necessary to communicate with a database and provide methods to access the data in a type-safe way. TableAdapter objects are not found in the .NET Framework. Unlike typed datasets, which inherit from the System.Data.DataSet class, TableAdapters are entirely generated by Visual Studio using the data model you create with the Data Source Configuration Wizard, or in the Dataset Designer. TableAdapters abstract the database type away from application code. For example, let's say you have developed your application using an Oracle database. Later you decide to port that database to Microsoft SQL Server. In Visual Studio 2002 and Visual Studio 2003, you would have had to change the object types you were using to access the database. OracleDataAdapters might have become SqlDataAdapters and OracleConnection objects would have become SqlConnection objects. Any commands you had created would also have to be changed. With TableAdapters, the "heavy lifting" is now handled for you. Changing your connection string and regenerating the dataset will regenerate the TableAdapter with the same name and interface. Internally the TableAdapter code will be regenerated to use the appropriate SQL objects instead of Oracle objects.. For example, an Oracle database might contain a column of type varchar2. The TableAdapter will map this type to string in the properties and methods exposed. If you later decide to use a database created with Microsoft SQL Server instead, the same field in the SQL Server database will be of type nvarchar but the TableAdapter interface will still use type string for the column. The second part of the abstraction is encapsulation of the database objects found in earlier versions of Visual Studio.. One caveat to all of this is that you may have to modify your SQL statements to match the syntax of the new database. The most common place you will encounter this is with parameter names. In the example above, any parameters used in the original Oracle statements will have to be changed from :ParamName to @ParamName so that they will be recognized by SQL Server. The code for a TableAdapter is generated after the TableAdapter is added to a dataset. A TableAdapter can be added to a dataset in several ways: Using the Data Source Configuration Wizard or dragging from Server Explorer will create a TableAdapter that is configured with defaults. A typical table from a database will include a select statement that looks something like SELECT CustomerID, ContactName, ContactName, Address, City, Region, PostalCode, Phone FROM Customers. Dragging a TableAdapter from the Toolbox or adding one through the context menu on the dataset designer will create a new, unconfigured TableAdapter and launch the TableAdapter Configuration Wizard. The following steps show a basic walkthrough of the TableAdapter Configuration Wizard: In the TableAdapter Configuration Wizard, you are prompted to use an existing connection to a database or create a new one: Once you have created a connection, you will be asked whether you would like to save the connection string in the application configuration file or have it generated in the code. (This option will not be shown in C++ or smart-device projects because they don't support strongly typed settings files. Instead the connection string will always be generated in the code.): You are then prompted to choose the command type for the TableAdapter. The options are to use a SQL statement, create a new stored procedure, or use an existing stored procedure: After choosing which command type to use, you are prompted with the appropriate pages to configure the command type you chose. For the sake of brevity, we won't follow all the paths through the wizard, but if you choose to use a SQL statement, you are presented with the following page: Clicking the Advanced Options button on the SQL Statement page provides options that control how complimentary SQL Statements are generated. You can choose whether to have corresponding Insert, Update, and Delete statements that write to the database generated for your select statement (Generate Insert, Update and Delete statements), whether to have update and delete statements fail if another user has modified the record (Use optimistic concurrency), and whether to refresh a DataTable after an Insert or Update statement by appending a select statement to the end of the command (Refresh the data table): After configuring the SQL command, you are prompted to generate methods for Fill and GetData and given the option to generate DBDirect methods that can be used to insert, update, and delete records directly in the database without using a DataTable (more on these below): The last page of the wizard presents a summary of what will be generated in the TableAdapter code. This page will also display any errors encountered in the TableAdapter creation process: The code for your entire dataset, including the TableAdapters, is stored in your project directory with the following file naming convention: <Dataset Name>.Designer.<Language Extension> For example, if you have a dataset named NorthwindDataSet in your Visual Basic project, the code file will be named NorthwindDataSet.Designer.vb. In addition to the designer code file, there is also a .xsc file which stores user interfaces (UI) preferences you have selected in the Data Sources Window and a .xss file which stores designer information about your objects (such as location and size). There may also be a .vb file where you can place your own code. The .vb file is generated the first time you select the View Code command on the Dataset Designer context menu. This file is used to store dataset validation code and other partial class methods and properties associated with your dataset and DataTables. In Solution Explorer, the files will be nested under your DataSet.xsd file. You can see the files by expanding the DataSet.xsd node. Depending on the profile you use, these files may be hidden. If you do not see the files, on the Solution Explorer toolbar, click Show All Files, and then expand the DataSet.xsd node. Double-clicking any of the files will open them in the appropriate editor. The code in the designer file is regenerated whenever a change is made to the corresponding dataset. Any modifications you make to this file will be overwritten the next time the file is generated. If you wish to modify or extend a TableAdapter, it is best to do so with a partial class in a separate code file that you add to your project. An example of extending a TableAdapter with a partial class is discussed below in Customizability and Extensibility. Once you have opened the designer code file, you will find your TableAdapters placed under a separate namespace. For example, if your dataset is named NorthwindDataSet, then the TableAdapters will be placed in the NorthwindDataSetTableAdapters namespace. This separates TableAdapters from the other objects in the dataset. The dataset itself is placed in the project's root namespace and contains nested classes for each DataTable and row type associated with the dataset. In Windows Forms projects, TableAdapter instances can be generated on a form by dragging the corresponding DataTable or any of its columns from the Data Sources Window onto the form. They can also be generated onto the form by dragging a typed TableAdapter object from the Toolbox. If you are writing an application that does not have a graphical user interface or you prefer not to use the form designer, TableAdapters can also be instantiated from code. Dim CustomersTableAdapter As New NorthwindDataSetTableAdapters.CustomersTableAdapter() NorthwindDataSetTableAdapters.CustomersTableAdapter customersTableAdapter = new NorthwindDataSetTableAdapters.CustomersTableAdapter(); To use the TableAdapter to fill a DataTable: Dim NorthwindDataSet as new NorthwindDataSet() CustomersTableAdapter.Fill(NorthwindDataSet.Customers) NorthwindDataSet northwindDataSet = new NorthwindDataSet(); customersTableAdapter.Fill(northwindDataSet.Customers); Code similar to this is generated in the Form_Load event handler when you drag objects from the Data Sources Window onto the form. When you create a TableAdapter in the designer, a corresponding DataTable is also created. This DataTable matches the schema of the default query. The default query is created by you in the TableAdapter Configuration Wizard, or is automatically created by Visual Studio through the Data Source Configuration Wizard, or by dragging an object from Server Explorer onto the Dataset Designer. When viewing the TableAdapter in the designer, the default query is always the topmost query and the icon next to the query has a checkmark in the upper-left corner: Changing the default query will also change the DataTable associated with the TableAdapter. TableAdapters use the new generic type, Nullable, to provide null support on type-safe parameters. Below is the generated function signature for the Insert function of a simplified OrdersTableAdapter: Public Overloads Overridable Function Insert(ByVal CustomerID As String, _ ByVal EmployeeID As System.Nullable(Of Integer), ByVal OrderDate As _ System.Nullable(Of Date)) As Integer public virtual int Insert(string CustomerID, System.Nullable<int> EmployeeID, System.Nullable<System.DateTime> OrderDate) The EmployeeID and OrderDate columns are both type Nullable because the corresponding columns in the Northwind database allow nulls. You can call the Insert function passing Nothing (null in C#) in the place of type-safe parameters instead of setting a separate property or using another bulky mechanism to indicate that the fields are null. OrdersTableAdapter.Insert("NEW", Nothing, Nothing) ordersTableAdapter.Insert("NEW", null, null); In addition to using Nullable on strongly typed columns, the generated TableAdapter code also checks for the use of null on columns of type strings that do not support null entries in the database. The following generated code is an example of how this check is performed: If (CustomerID Is Nothing) Then Throw New System.ArgumentNullException("CustomerID") Else Me.Adapter.UpdateCommand.Parameters(0).Value = CType(CustomerID,String) End If if ((CustomerID == null)) { throw new System.ArgumentNullException("CustomerID"); } else { this.Adapter.InsertCommand.Parameters[0].Value = ((string)(CustomerID)); } As you can see from the code above, if null is passed for a non-nullable string parameter, an ArgumentNull exception is thrown. You can change the handling of nulls in the TableAdapter by modifying the AllowDBNull attribute on individual columns in the associated DataTable. When this property is set to true, parameters are of type Nullable. When it is set to false, parameters are typed according to the database field they represent. The default setting for AllowDBNull is determined by the origin of the DataTable. For tables in the database, AllowDBNull is set based on whether each column in the database supports null values. For Transact-SQL statements AllowDBNull will be set to false on all columns. For stored procedures, AllowDBNull defaults to true for all columns. One item to note is that queries will still need special handling for null values in the table. For example, if you want to be able to select null items from the Region field of the Customers table, you will need to write a query that looks like this: SELECT * FROM Customers WHERE Region=@Region OR (Region IS NULL AND @Region IS NULL) In addition to the DataTable updating methods available on TableAdapters, you also have the option of generating methods that write to the database directly, without the need to modify a DataTable and send it to the database. This option is controlled through the GenerateDBDirectMethods property on the TableAdapter object in the Dataset Designer. The DBDirect methods generated are Insert, Delete, and an overload of Update that takes one type-safe parameter for each field in the DataTable. When you create a new TableAdapter, you are given two methods for retrieving data from a database into a DataTable. The Fill method takes an existing DataTable as a parameter and fills it. The GetData method returns a new DataTable that has been filled. Fill is a convenient way to populate a DataTable that already exists. For example, if you are using a DataSet instance in your application, then you can populate the DataTable members in your dataset by passing them to fill. When you call the Fill method, the value of the ClearBeforeFill property on the TableAdapter is checked. When the property is set to true, the Clear method on the DataTable is called before the DataTable is filled. When the property is set to false, the Clear method is not called. In the latter case, rows in the DataTable are merged with rows in the database. The default value of ClearBeforeFill is true. GetData is useful when you don't already have a DataTable instance. For example, you might want to implement search functionality on a table in your database. You can add a method to the TableAdapter that returns a new DataTable instance containing only the items that meet your search criteria. Each TableAdapter can have multiple queries associated with it. Within the TableAdapter these queries are stored as an array of command objects and accessed through type-safe method calls on the TableAdapter. By grouping queries with the same schema together, common operations can be encapsulated in one TableAdapter. For example, if you commonly filter a table on several different criteria you can add queries for each of those criteria. You might have the following two queries: SELECT * FROM Customers WHERE CustomerID=@CustomerID SELECT * FROM Customers WHERE Region=@Region Each query accessed by calling a method on the TableAdapter takes appropriate parameters and fills a DataTable. The function signatures for the Fill and GetData methods created for each of these queries are shown below. Public Overloads Overridable Function FillByCustomerID( _ ByVal dataTable As NorthwindDataSet.CustomersDataTable, _ ByVal CustomerID As String) As Integer Public Overloads Overridable Function GetDataByCustomerID( _ ByVal CustomerID As String) As NorthwindDataSet.CustomersDataTable public virtual int FillByCustomerID( NorthwindDataSet.CustomersDataTable dataTable, string CustomerID) public virtual NorthwindDataSet.CustomersDataTable GetDataByCustomerID( string CustomerID) Public Overloads Overridable Function FillByRegion( _ ByVal dataTable As NorthwindDataSet.CustomersDataTable, _ ByVal _Region As String) As Integer Public Overloads Overridable Function GetDataByRegion( _ ByVal_Region As String) As NorthwindDataSet.CustomersDataTable public virtual int FillByRegion( NorthwindDataSet.CustomersDataTable dataTable, string Region) public virtual NorthwindDataSet.CustomersDataTable GetDataByRegion( string Region) The schema of the DataTable associated with a TableAdapter is determined by the schema of the default query. However, you are not limited to creating queries that match the schema of the TableAdapter. For example, you might add a scalar query to get a count of all customers. By choosing the "SELECT which returns a single value" option in the TableAdapter Query Configuration Wizard, and then entering SELECT COUNT(*) FROM Customers for your SQL statement, a new query will be added to your TableAdapter. This query will use the connection object associated with the TableAdapter but return a scalar value instead of table data. In addition to TableAdapters with associated DataTables, there is a special TableAdapter that contains the global queries in your dataset that return single values. The default name of this TableAdapter is QueriesTableAdapter but, like any other TableAdapter, you can rename it. Queries contained in this TableAdapter are accessed in the same way that queries in other TableAdapters are accessed, with one difference: Instead of creating Fill and GetData methods, only one method is generated. This method will have an appropriate return value to match the return value of the query. Dim myQueries As New NorthwindDataSetTableAdapters.QueriesTableAdapter() customerCount = myQueries.CustomerCount() NorthwindDataSetTableAdapters.QueriesTableAdapter myQueries; customerCount = myQueries.CustomerCount().value; TableAdapters are also smart about handling queries with output parameters. For example, if you have a stored procedure called Output that takes one output parameter, the associated TableAdapter method parameters will be passed by reference: Public Overloads Overridable Function Output( _ ByRef p1 As System.Nullable(Of Integer)) public virtual int Output(ref System.Nullable<int> p1) When the method is called, p1 is passed to the stored procedure as a command parameter, the stored procedure is executed, and the value of the modified command parameter is placed back into p1. In code you can use this parameter in the same way you would use any other reference parameter. At times, you may want to retrieve a specific value from a row in a table as though it were a scalar value. You can do this with TableAdapters by creating a scalar query that returns only one column. The return value of the generated function will be an object that you can cast to the correct type. For example, if you would like the phone number of a particular customer, you can add a scalar query to your TableAdapter that looks like this: SELECT Phone FROM Customers WHERE CustomerID=@CustomerID The generated function will return an object (that you can cast to string) containing the phone number from the first row of the table: Dim phone As String Dim customerTableAdapter As New NorthwindDataSetTableAdapters.CustomersTableAdapter() phone = CType(customerTableAdapter.CustomerPhone("BOLID"), String) string phone; NorthwindDataSetTableAdapters.CustomersTableAdapter customersTableAdapter = new WindowsApplication2.NorthwindDataSetTableAdapters.CustomersTableAdapter(); phone = (string)customersTableAdapter.CustomerPhone("BOLID"); In this case there will be only one row returned since CustomerID is a unique field. NOTE: This behavior isn't specific to TableAdapters. The underlying DataAdapter provides this functionality. However, TableAdapters are designed to take advantage of this behavior and generate functions that return a single value. Similarly, you can add Insert, Update, and Delete queries that do not return any data, but act on the same table in the database that your select queries do. The most common way to update a database is by sending the changes contained within one or more DataTable objects to the database. TableAdapters provide several overloads of the Update method to facilitate this. Each of these overloads forward the passed-in parameter to the underlying DataAdapter Update method. Below is an example of calling update by passing a DataTable from a typed dataset: customersTableAdapter.Update(northwindDataSet.Customers) customersTableAdapter.Update(northwindDataSet.Customers); If you use a TableAdapter as an object data source in a Web project, you will see that the ObjectDataSource Wizard detects that the GetData method is available for select operations, and that the Update, Insert, and Delete methods are available for update, insert, and delete operations, respectively. The wizard is able to discover these methods on the TableAdapter because they each have DataObjectAttribute applied to them in the TableAdapter code. For each data method on the TableAdapter, the attribute is applied with the methodType property set to the appropriate value (Fill, Select, Insert, Update, or Delete). Visual Studio 2005 introduces typed settings that can be accessed programmatically. TableAdapters take advantage of this feature to offer connection strings that are stored in the app.config file for your application. By using a connection string stored as a typed setting you can change the connection string in the app.config file and all TableAdapters in your application will connect to the database using the new connection string. Using typed settings has the added benefit of providing defaults if the setting cannot be found in the app.config file. Instead of throwing an exception, the generated settings class returns the value that was set at compile time. For TableAdapters, the connection string you used to develop your application will be used when no other value for the connection string is found. TableAdapters are customizable and extensible in a number of ways through the Dataset Designer and with your own code. Below are some of the common techniques and properties used to customize TableAdapters. There is not a base TableAdapter object in the .NET Framework. Instead, TableAdapters inherit from System.ComponentModel.Component when they are created. However, you can change TableAdapters to inherit from a base class of your choosing by setting the BaseClass property in the Dataset Designer. It is important to note that one of the classes in the inheritance chain must inherit from System.ComponentModel.Component so that the TableAdapter can be dragged onto a Windows Form. The default accessibility of TableAdapters is public. You can restrict access to your TableAdapters from outside components by changing the Modifier property for the TableAdapter in the Dataset Designer. Similarly, you can share the connection that a TableAdapter uses by changing the ConnectionModifier property of the TableAdapter in the Dataset Designer. By default, the connection modifier is set to Friend. It is a good practice to leave this modifier set to the default to prevent unknown objects from using your connection (and possibly your credentials) to access the database. Each TableAdapter class is declared partial in the generated code. Partial classes provide a way to extend a given class by adding methods to the class in multiple code files. For TableAdapters, it's important that you write your partial classes in a separate file from the generated code so that Visual Studio does not overwrite your classes when the TableAdapter code is regenerated. (TableAdapter code is regenerated any time changes are made to a dataset in the Dataset Designer.) To add a partial class to your project, in Solution Explorer, right-click the project node and select Add->Class. You can then add your code to the new class file. The following example shows how you would add an overload to the database direct Delete method that takes a row instead of individual parameters for each field: Namespace NorthwindDataSetTableAdapters Partial Public Class CustomersTableAdapter Public Overloads Function Delete( _ ByVal row as NorthwindDataSet.CustomersRow) As Integer Return Me.Delete(row.CustomerID, row.CompanyName, row.ContactName, _ row.Address, row.City, row._Region, row.PostalCode, row.Phone) End Function End Class End Namespace namespace WindowsApplication3.NorthwindDataSetTableAdapters { public partial class CustomersTableAdapter { public int Delete(NorthwindDataSet.CustomersRow row) { return this.Delete(row.CustomerID, row.CompanyName, row.ContactName, row.ContactTitle, row.Address, row.City, row.Region, row.PostalCode, row.Country, row.Phone, row.Fax); } } } There are two important things to note here. The first is that the class is declared within the TableAdapter namespace. The second is that the class is marked partial. By doing these two things you are telling the compiler that you are extending the existing CustomersTableAdapter class as opposed to creating a new class with the same name in your root namespace. TableAdapters greatly simplify access to database providers and provide a type-safe way to execute database commands. With partial classes and inheritance they can be extended to accomplish almost any task specific to your requirements. In this article you have seen a broad overview of the features of TableAdapters and how they interact with other elements in your project. With these features you can leverage TableAdapter functionality to rapidly create database access objects that are highly reusable.
http://msdn.microsoft.com/en-us/library/ms364060(VS.80).aspx
crawl-002
refinedweb
3,864
52.6
React Hooks are a new feature added in the 16.8 release of the React JavaScript library. Hooks were intended to make it easy to use state and other reusable functionality in functional components. Before hooks, reusable functionality relied on class-based components and often required the user to use higher-order components (HOCs) and render props. While HOCs and render props are great on their own, they quickly become awkward when you try to use each of them multiple times in a single component. Why Care About React Hooks? If class components are working well for you, then you might wonder if learning about hooks is worth your time. I believe they’re worth it for several reasons. First, the React community is quick to adopt new practices and abandon old ones. The React team has heavily promoted hooks as the path toward writing React code that is clearer and easier to understand. Dan Abramov, a popular member of the React dev team, is a very strong proponent of hooks. And where the React dev team leads, the React community usually follows. This tendency to follow trends isn’t unique to the React community, and it doesn’t mean that class components are going away. They currently make up the bulk of most large React applications, and I expect that to remain true for the foreseeable future. But over time, I expect that many interesting libraries and utilities will only work with hooks. Hook-based libraries can make it very easy to work with forms, animations, and external data sources like web sockets. The catch is that you can’t use hooks in class-based components. Library authors are going to have to make a choice, and it appears that hooks will be what they choose as long as the React dev team is promoting hooks as the preferred path forward. As a developer, hooks also offer you the opportunity to write more concise code that is easier to debug and understand; they’re not just a pointless trend that offers no benefit. You definitely shouldn’t abandon class components if you love using them. They’re still great, and they’re still a big improvement over using React.createClass back in the React dark ages of 2013. But if you’re working on React apps as a member of a large team or make use of many external libraries, you’re going to encounter hooks in the wild sooner or later. What React Hooks Do Hooks offer a way to extract stateful logic from a component to make that logic easy to test and reuse. As a result, hooks can be used as an elegant replacement for sometimes-hacky workarounds like nested higher order components, providers, and render props. None of these features alone are bad, but as we touched on earlier, having to use all of them together in a single component can be a source of accidental complexity. Hooks can also be used to split complex components into smaller files and keep related bits of functionality together. For example, in a class component that needs to subscribe to a WebSocket, some of the code will have to go in the componentWillMount lifecycle method, and some will have to go in the componentWillUnmount method. Using a hook, we could keep all of the WebSocket code together in a single hook function instead of splitting it across multiple lifecycle methods. In fact, this is exactly the example we’re going to examine in the next section. Converting a Class Component to a Hook Component Now that we’ve discussed what hooks are and why you’d want to use them, let’s dive into the details of how to convert a class component to a hook component. Note that you shouldn’t feel the need to convert your class components to use hooks without a good reason. We’ll be walking through the conversion process because starting with something you already know will make hooks easier to understand, and at the end of the exercise you’ll end up with two identically functioning components that will help you easily compare and contrast the two approaches. As an example, we’re going to look at a React app that uses a class for a fairly common use case: making a WebSocket connection when the component mounts and closing the WebSocket connection when the component unmounts. The component is used as part of a simulated chat app. You can find the example in a StackBlitz workspace here. The class component we’re interested in is in the WSClass.js file. Let’s take a look at its code: import React from 'react'; import NavBar from './containers/NavBar'; import ChatWindow from './containers/ChatWindow'; import ChatEntry from './containers/ChatEntry'; export default class WsClass extends React.Component { constructor() { this.state = { messages: ["Test message"], newMessageText: "" } } componentDidMount() { this.socket = new WebSocket("wss://echo.websocket.org"); this.socket.onmessage = (message) => { const newMessage = `Message from WebSocket: ${message.data}` this.setState({ messages: this.state.messages.concat([newMessage]) }) } } componentWillUnmount() { this.socket.close(); } onMessageChange = (e) => { this.setState({ newMessageText: e.target.value }); } onMessageSubmit = () => { this.socket.send(this.state.newMessageText); this.setState({ newMessageText: "" }); } render() { return ( <> <NavBar title={"WebSocket Class Component"} /> <ChatWindow messages={this.state.messages} /> <ChatEntry text={this.state.newMessageText} onChange={this.onMessageChange} onSubmit={this.onMessageSubmit}/> </> ); } } As you can see, this is the kind of component you’ll encounter in a typical React app. In the constructor, we set up the component’s initial state. In componentDidMount, we establish a WebSocket connection to wss://echo.websocket.org. As its name implies, this is a socket that echoes back whatever we send to it, which is perfect for our example app. In componentWillUnmount, we call the close method on the WebSocket because we won’t be using it anymore. Closing the connection when we’re finished with it is important because in a real application with a chat window, there’s a good chance we’d be opening and closing the chat. If we don’t close the connection when our component unmounts, it’ll remain connected until the user reloads the page or navigates away to a different page. Over time, our app would use more and more memory until it crashes the user’s browser! We also add an event handler to handle the WebSocket’s onmessage event. This handler will be triggered whenever we receive a message from the socket. Our handler function will receive a MessageEvent as its only parameter. The event object’s data property contains the text of the message. Inside the event handler, we use this.SetState to add the data to our app state’s messages array. Next, in the render method, we lay out the components of our chat application. ChatWindow displays our chat messages, and ChatEntry lets us send chat messages. If you enter a message and click the send button, you’ll see that the WebSocket server echoes it right back: Now that we’ve seen how this app works as a class component, how would we go about creating it as a function component using hooks? If you’d like to jump ahead and it in action, you can find the final result on StackBlitz in the WSHook.js file. Otherwise, keep reading as we walk through the code. Here’s what our simulated hooks-based chat component looks like: import React, {useState, useEffect, useRef } from 'react'; import NavBar from './containers/NavBar'; import ChatWindow from './containers/ChatWindow'; import ChatEntry from './containers/ChatEntry'; export default () => { const [newMessage, setNewMessage] = useState(""); const [messages, setMessages] = useState(["Test message"]); const [state, setState] = useState({ messages: ["Test message"], newMessageText: "" }); const socket = useRef(new WebSocket("wss://echo.websocket.org")) useEffect(() => { socket.current.onmessage = (msg) => { const incomingMessage = `Message from WebSocket: ${msg.data}`; setMessages(messages.concat([incomingMessage])); } }); useEffect(() => () => socket.current.close(), [socket]) const onMessageChange = (e) => {; setNewMessage(e.target.value); } const onMessageSubmit = () => { socket.current.send(newMessage); setNewMessage("") } return ( <> <NavBar title={"WebSocket Hook Component"} /> <ChatWindow messages={messages} /> <ChatEntry text={newMessage} onChange={onMessageChange} onSubmit={onMessageSubmit}/> </> ); } You’ll notice that it looks pretty similar to the class-based component. So similar that you might wonder if it was worth the work to convert it. And in reality, the answer is that in many cases, your class components are perfectly fine as-is, and converting them to use hooks is a waste of time. Additionally, you’ll find that in some cases, hooks make your components more difficult to reason about and understand. We’ll address those issues as we walk through the code to ensure you can make an informed decision about when and where you’ll want to use hooks. We start our component with two uses of the useState hook: one to store the new message that we’ll be sending over the WebSocket, and one that stores the array of messages that we’ll be displaying in our chat window. This isn’t too different from the way we used state in the class component — we just have our new message string and messages array in separate state variables instead of in a single state object. Next, we call the useRef hook to set up our WebSocket. useRef is a handy helper that lets us set up a reference to a JavaScript object that persists between component renders. In a functional React component, the entire function is called every time the component re-renders in response to a state change. Usually this is fine, but sometimes we really need things to stick around – like our WebSocket. Without useRef, our WebSocket would be re-created every time we type a letter into the text box, and soon we’d have hundreds of open WebSocket connections. Next, we have two calls to the useEffect hook. useEffect is the hook you’ll want to use to replace the functionality of the class component lifecycle methods componentDidMount and componentWillUnmount. When you call useEffect, you pass it a function that contains code that should run when the component mounts and returns a function that will be run when the component unmounts. Here’s a brief example to illustrate the concept: useEffect(() => { console.log(“Component is rendering”); return () => { console.log(“Component is being destroyed.”); } }) That’s not too bad: just basic JavaScript, where a function returning a function is common. There’s one catch with useEffect, though: by default, it’ll run every time the component re-renders. However, useEffect can take an array of dependencies as its second argument. When you specify a list of dependencies, the effect will only run if one of those dependencies has changed. This is why we have two useEffect calls: we want to re-bind the WebSocket’s onmessage handler on every render because our handler creates a closure around the messages variable, so failing to re-bind will cause our handler to use stale data, and our chat app won’t work properly. We don’t want to close the WebSocket on every re-render, though, so we put the close handler in its own effect with socket in the array of dependencies we pass as the second argument to the useEffect call. This ensures that the socket will only be closed when the component is destroyed, which is what we want. Using effect hooks in this way can be confusing to reason about, especially if you’ve been working with class components for years. Sometimes, having named lifecycle methods will just be a better fit for the problem you’re trying to solve. Hooks are great for many use cases, but class components aren’t going to disappear. You should keep using them when you feel they’re the right fit for the problem you’re solving and use functional components and hooks where they’ll make your code easier to understand. After we’ve set up our hooks, the rest of the component is largely the same as the class component. Instead of creating instance methods to handle updates, we’ve assigned functions to variables that we can pass into the components that render our chat room. Overall, the functional component with hooks is slightly more concise than the class component, at the expense of being slightly harder to reason about if you’re accustomed to using class components. One big advantage of hooks is that it would be relatively easy to extract the WebSocket functionality out into a custom hook that could easily be reused in other components. To do the same thing with a class component, we’d have to create a higher order component which would take more work to create and would be more awkward to use. Conclusion And we’re done! We’ve successfully walked through the process of converting a stateful class component to a stateful hook component. As we’ve seen, hooks aren’t magic. They don’t do anything that class components couldn’t do. They do help you write code that’s more concise and easier to understand in some situations. It’s entirely possible to keep basing your apps on class components and only create hook components when you need to interact with hooks-only libraries. On the other hand, if you love hooks and want to use them everywhere, don’t hold back! Given the level of enthusiasm for hooks from the React dev team and the wider React community, hooks aren’t likely to disappear any time soon.. Wijmo's UI controls support all versions of React Download the latest version of Wijmo Wijmo's UI controls support all versions of React Download the latest version of WijmoDownload Now! Try SpreadJS's React spreadsheet components Download the latest version of SpreadJS Try SpreadJS's React spreadsheet components Download the latest version of SpreadJSDownload Now!
https://www.grapecity.com/blogs/moving-from-react-components-to-react-hooks
CC-MAIN-2020-29
refinedweb
2,288
54.12
I've booked this on. With the patch, something like below are possible. // unknown input public String evaluate(Object arg) { return arg == null ? null : String.valueOf(arg); } // typed variable public <T> T evaluate(T arg) { return arg; } // typed variable, nested public <T> T evaluate(Map<String, T> arg) { return arg.values().iterator().next(); } Thanks, Navis 2014-07-31 3:37 GMT+09:00 Jason Dere <jdere@hortonworks.com>: > Sounds like you are using the older style UDF class. In that case, yes > you would have to override evaluate() for each type of input. > You could also try overriding the GenericUDF class - that would allow you > to do a single method, though it may be a bit more complicated (can look at > the Hive code for some examples) > > > On Jul 30, 2014, at 7:43 AM, Dan Fan <dfan@appnexus.com> wrote: > > > Hi there > > > > I am writing a hive UDF function. The input could be string, int, double > etc. > > The return is based on the data type. I was trying to use the generic > method, however, hive seems not recognize it. > > Here is the piece of code I have as example. > > > > public <T> T evaluate(final T s, final String column_name, final int > bitmap) throws Exception { > > > > if (s instanceof Double) > > return (T) new Double(-1.0); > > Else if( s instance of Integer) > > Return (T) new Integer(-1) ; > > ….. > > } > > > > Does anyone know if hive supports the generic method ? Or I have to > override the evaluate method for each type of input. > > > > Thanks > > > > Dan > > > > > -- >. >
http://mail-archives.apache.org/mod_mbox/hive-dev/201408.mbox/%3CCACQ46vFO+NhVE+BimRhnNpbPEfBYQqD0SCQjFtz7KF-Dvo+4Fw@mail.gmail.com%3E
CC-MAIN-2017-43
refinedweb
250
74.9
Agenda See also: IRC log saz: hopes everybody read requirements doc ... next step - vote on the call (or via email) about publishing as 1st WD ... 2 or 3 iterations ... then WD note ... questions or comments? ... all clear <niq> +1 saz: please vote now on IRC <shadi> saz: yes <ChrisR> yes to EARL requirements jk: yes <CarlosI> yes <Sandor> sh: yes saz: drop confidence or keep as placeholder? ... changes will be published in new WD (before summer break?) <Zakim> niq, you wanted to say keep confidence. Having a confidence in test-description is not mutually exclusive niq: confidence should be kept saz: kept as is (high medium low)? ... or have different values? integer? float values? niq: should be open to different values saz: would be less interoparablility ... no definition for high, medium, low ... when to use a value for each test cr: should be optional saz: keep values, but property optional? cr: leave open, allow to use more values <JibberJim> I'm happy with open jk: also high, medium, low are not really interoperable saz: people use them, but the meaning could be different <niq> High: this evaluation believes the assertion. Medium: we are uncertain. Low: I'm just putting this in for completeness, but you can probably ignore it jk: imergo only uses error (100% sure) or warning(<100% sure) saz: use cannotTell? <Zakim> niq, you wanted to disagree with jk. confidences are properties of warning nk: need confidence for which warning you need to worry about <niq> no, that would be low where there's overlap jk: cannotTell vs fail/pass with medium confidence saz: confidence is not about warning; developers should be more strikt <Zakim> niq, you wanted to say fail with low confidence =~ pass with low confidence =~ can't tell nk: 'pass' is what we believe, but we are not sure (-> low confidence) ... sometimes it's in the middle saz: sub-classes of cannotTell? ... cmn will read minutes and comment nk: it's not about validity ... each of properties has its own purpose ... pass or fail with confidence to aggregate test cr: proper alt attribute: confidence is relevant saz: there is pass or fail (sure!) ... grey zone in between ... people used pass/fail with confidence together to descibe grey zone nk: extend validity will look clumsy <niq> "has no alt" <-- clear fail <niq> alt="bullet" <-- less certain fail why not cannotTell? nk: should not be merged with validity cr: low confidence -> needs a look by testing person saz: we should also use IRC and phone and mailing list <niq> yep saz: agreement not to drop confidence ... when to use high, medium, low? clear definition? saz: leave that up to developer? <niq> Suggest guidelines, something like - High: this evaluation believes the assertion. Medium: we are uncertain. Low: I'm just putting this in for completeness, but you can probably ignore it saz: values from developer's own namespace cr: ack to nk saz: not clear enough cr: should be optional <CarlosI> I agree jk: cannot be optional - one tool says fail with low confidence - another tool reads report and only says 'fail' nk: that's why we need confidence saz: nk, could you write a Note about confidence when testing against WCAG? nk: not sure if have enough time saz: others can contribute to this ... publish end of June/beginning of July <shadi> ACTION: Nick will prepare a draft proposal for the usage of high - medium - low confidence values in the context of WCAG and bring back to the group later in June for commenting [recorded in] cr: September will not be good ... prefers later jk: beginning of october is ok saz: f2f in first half of october ... thanks
http://www.w3.org/2005/06/07-er-minutes
CC-MAIN-2015-40
refinedweb
611
61.67
System.IO.Lazy Description Caution: - Although this module calls unsafeInterleaveIOfor you, it cannot take the responsibility from you. Using this module is still as unsafe as calling unsafeInterleaveIOmanually. Thus we recommend to wrap the lazy I/O monad into a custom newtypewith a restricted set of operations which is considered safe for interleaving I/O actions. - Operations like System.IO.hCloseare usually not safe within this monad, since they will only executed, if their result is consumed. Since this result is often ()this is quite unusual. It will also often be the case, that not the complete output is read, and thus the closing action is never reached. It is certainly best to call a closing action after you wrote the complete result of the lazy I/O monad somewhere return a :: LazyIO ais very different from liftIO (return a) :: LazyIO a. The first one does not trigger previous IO actions, whereas the second one does. Use it like import qualified System.IO.Lazy as LazyIO LazyIO.run $ do liftIO $ putStr "enter first line:" x <- liftIO getLine liftIO $ putStr "enter second line:" y <- liftIO getLine return x Because only the first line is needed, only the first prompt and the first getLine is executed. We advise to lift strict IO functions into the lazy IO monad. Lifting a function like readFile may lead to unintended interleaving.
http://hackage.haskell.org/package/lazyio-0.0.2/docs/System-IO-Lazy.html
CC-MAIN-2014-41
refinedweb
226
55.84
. i am aiming toward a vim like key binding, not as the default replacement, just as an alternate keymap. let me explain a bit, so you won't think i am forcing going into something just to imitate an old editor. the extensive customization could be made on sublime, requires more and more keybinding, i am just tired to flex my fingers from the ctrl, alt and other combinations everytime. so i thought the vim way to deal with it isn't quit bad. a simple overview on this vim feature: on the vim editor, you can select to go-out of editing mode, into a control mode, in which keys you press are not necessarily typed into screen, but rather are used as controls. so, on control mode, instead of pressing ctrl+f to search you use just the key '/'. if we use if for sublime we can set that instead of ctrl+shift+o to search through all opened files, use just the key 'o' for instance. so i got this far, first a plugin to easily switch between keymaps: import sublime, sublimeplugin class ToggleKeymapCommand(sublimeplugin.TextCommand): def run(self, view, args): mode = args[0] print mode print "switching to " + str(mode) + " keymap" sublime.options().set('keymap', mode) def isEnabled(self, view, args): return True second, under User/Default.sublime-keymap, add: <binding key="f12" command="toggleKeymap Vim"/> and last, create User/**Vim**.sublime-keymap, and just copy into it all the Default keymap (from Default/Default.sublime-keymap), adding the following line: <binding key="f12" command="toggleKeymap Default"/> now you can play with it. go to User/**Vim**.sublime-keymap, adding <binding key="/" command="showPanel find"/> to search with '/'. or replace the following: <binding key="alt+f3" command="findAllUnder"/> with <binding key="*" command="findAllUnder"/> so far so good, but here are the problems:1. i would like to use capslock as toggle key, so i can see the key light on my keyboard to show me on which mode i am, i tried it, and i can do it, but afterward, all the keys i press are on upper case (surprise, surprise...) it would be nice to override it, somehow.2. on vim, you can concatenate control keys on the control mode, so you can use not just 'o' to to open file, for example, but also 'ot' for open file in tab, 'od' for open file in directory, etc. on sublime i can concatenate only ctrl/alt/shift and one more normal key. i would like to be able to do it for any number of normal keys (you can think of it like the snippet mechanism)3. it has its glitches, so i bind the key '' to search-all-under (like alt+f3), it works, but, oops, it works also when i start to type the search terms, so if i try to use '' i get the effect of search-all-under again... i thought of using somehow, but i dont know if there is one for checking if the focus is on the main window or not... any way, feel free to comment, etc. happy subliming
https://forum.sublimetext.com/t/alternate-key-bindings/135/1
CC-MAIN-2017-09
refinedweb
522
68.3
Samsung PS-37S4A Manual Preview of first few manual pages (at low quality). Check before download. Click to enlarge. About Here you can find all about Samsung PS-37S4A like manual and other informations. For example: review. Samsung PS-37S4A manual (user guide) is ready to download for free. On the bottom of page users can write a review. If you own a Samsung PS-37S4A please write about it to help other people. [ Report abuse or wrong photo | Share your Samsung PS-37S4A photo ][ Report abuse or wrong photo | Share your Samsung PS-37S4A photo ] User reviews and opinions Comments posted on are solely the views and opinions of the people posting them and do not necessarily reflect the views or opinions of us. Documents Outputs for external devices. h) COMPONENT1, COMPONENT2 Video (Y/Pb/Pr) and audio (L-AUDIO-R) inputs for Component. c) EXT.1, EXT.2, EXT.3 Inputs or outputs for external devices, such as VCR, DVD, video game device or video disc players. i) ANT IN VHF/UHF (75) 75 Coaxial connector for Aerial/Cable Network. d) PC INPUT (RGB IN / AUDIO) Connect to the video and audio output jack on your PC. j) POWER IN Connect the supplied power cord. e) ONLY FOR SERVICE Connector for service only. f) AV2 (S-VIDEO / L-AUDIO-R) Video and audio inputs for external devices with an S-Video output, such as a camcorder or VCR. Page 8 Infrared Remote Control TURNS THE PDP ON AND OFF WHEN YOU PRESS A BUTTON , APPEARS ALONG WITH SELECTED MODE (TV, VCR, CATV, DVD OR STB) AND REMAINING BATTERIES ON LCD PICTURE EFFECT SELECTION SOUND EFFECT SELECTION TELETEXT CANCEL MAKE THE REMOTE CONTROL WORKS FOR TV, VCR, CATV, DVD PLAYER, STB DIRECT CHANNEL SELECTION AUTOMATIC SWITCH-OFF/ TELETEXT SUB-PAGE PICTURE SIZE/ TELETEXT SIZE SELECTION VOLUME INCREASE NEXT CHANNEL/ TELETEXT NEXT PAGE/ TELEWEB FORWARD TEMPORARY SOUND SWITCH-OFF VOLUME DECREASE EXTERNAL INPUT SELECTION/ TELETEXT PAGE HOLD PREVIOUS CHANNEL/ TELETEXT PREVIOUS PAGE/ TELEWEB BACKWARD INFORMATION DISPLAY/ TELETEXT REVEAL TV MODE SELECTION EXIT TELETEXT EXIT TELEWEB (DEPENDING ON THE MODEL) TELETEXT DISPLAY/ MIX BOTH TELETEXT INFORMATION AND THE NORMAL BROADCAST MENU DISPLAY/ TELETEXT INDEX MOVE TO THE REQUIRED MENU OPTION/ ADJUST AN OPTION VALUE RESPECTIVELY PICTURE STILL TELEWEB DISPLAY (DEPENDING ON THE MODEL) FASTEXT TOPIC SELECTION CONFIRM YOUR CHOICE (STORE OR ENTER) SOUND MODE SELECTION TruSurround XT MODE SELECTION PIP FUNCTIONS; - PIP ACTIVATING OR DEACTIVATING (PIP ON) - SWAPPING THE MAIN AND THE SUB PICTURE (SWAP) - LOCATION SELECTION (POSITION) - SOURCE SELECTION (SOURCE) - SIZE SELECTION (SIZE) - SCAN - SELECTING THE CHANNEL OF SUB PICTURE (P ) REMOTE CONTROL SETUP IF YOUR REMOTE CONTROL IS NOT FUNCTIONING PROPERLY, TAKE OUT THE BATTERIES AND PRESS THE RESET BUTTON FOR ABOUT 2~3 SECONDS. RE-INSERT THE BATTERIES AND TRY USING THE REMOTE CONTROL AGAIN. VCR/DVD FUNCTIONS; - REWIND (REW) - STOP - PLAY/PAUSE - FAST FORWARD (FF) The performance of the remote control may be affected by bright light. Page 9 Inserting the Batteries in the Remote Control You must insert or replace the batteries in the remote control when you: Purchase the PDP Find that the remote control is no longer operating correctly Remove the cover on the rear of the remote control by pressing the symbol ( ) downwards and then pulling firmly to remove it. Insert two R03, UM4, AAA 1.5V or equivalent batteries taking care to respect the polarities: - on the battery against - on the remote control + on the battery against + on the remote control Replace the cover by aligning it with the base of the remote control and pressing it back into place. When the television is initially powered ON, several basic customer settings proceed automatically and subsequently. The following settings are available. If the television is in Standby mode, press the POWER ( on the remote control. Result: ) button The message Plug & Play is displayed, and then the Language menu is automatically displayed a few seconds later. Move Plug & Play Enter Select the appropriate language by pressing the or button. Press the ENTER ( Result: ) button to confirm your choice. The message Antenna Input Check is displayed. Make sure that the antenna is connected to the TV, and then press the ENTER ( ) button. Result: The Country menu is displayed. Antenna Input Check Country Select your country by pressing the or button. Press the ENTER ( Result: ) button to confirm your choice. Country Austria Belgium Croatia Denmark Finland France Germany More Move Enter The Auto Store menu is displayed. ) button. To start the search, press the ENTER ( Result: The search will end automatically. Channels are sorted and stored in an order which reflects their position in the frequency range (with lowest first and highest last). When it has finished, the Clock Set menu is displayed. Auto Store Press ENTER to start channel memory 40 MHz 0% Start Enter Stop To stop the search before it has finished or return to normal viewing, press the MENU ( ) button. Press the ENTER ( ) button to set the clock. Press the or button to move to the hour or minute. Set the hour or minute by pressing the or button. When it has finished, the message Enjoy Your Watching is displayed, and then the channel which has been stored will be activated. Auto Store Channel store in process. Storing. 57 MHz 1% Stop Enter Stop Clock Hour 01 Adjust Move Min 00 Stop Enjoy Your Watching Page 17 Plug & Play Feature (continued) If you want to reset this feature. Setup Time Language AV Setup Digital NR Miscellaneous PC Setup : On : English Press the MENU button. The main menu is displayed. Result: Press the or button to select Setup. The options available in the Setup group are displayed. Result: Press the ENTER ( ) button. Press the or button to select Miscellaneous. Press the ENTER ( ) button. The options available in the Miscellaneous group are Result: displayed. Press the or button to select Plug & Play. Press the ENTER ( ) button. Result: The message Plug & Play is displayed. Select your country (or area) by pressing the or button. If you have selected the Others option but do not wish to scan the PAL frequency range, store the channels manually (see page 19). Press the ENTER ( ) button to confirm your choice Press the or button to select Auto Store. Press the ENTER ( ) button. Result: The Auto Store menu is displayed. ) button to start the search. Auto Store Channel store in process. Storing. 57 MHz 1% Stop Enter Return Press the ENTER ( Result: The search will end automatically. Channels are sorted and stored in an order which reflects their position in the frequency range, (with lowest first and highest last). The programme originally selected is then displayed. To stop the search before it has finished, press the MENU ( ) button. When the channels have been stored, you can: Sort them in the order required (see page 22) Clear a channel (see page 21) Fine-tune channel reception if necessary (see page 31) Assign a name to the stored channels (see page 23) Activate/deactivate the Digital Noise Reduction feature (see page 38) Page 19 Storing Channels Manually You can store up to 100 television channels, including those received via cable networks. When storing channels manually, you can choose: Whether or not to store each of the channels found The programme number of each stored channel which you wish to identify Prog. 9 Channel C -Manual Store Colour System Auto Search 59MHz Adjust Sound System BG Store ? Return Press the or button to select Manual Store. Press the ENTER ( ) button. Manual Store Result: The option available in the Manual Store group are displayed with Prog. is selected. Prog. 9 Colour System Auto Search 59MHz Adjust Sound System BG Store ? Return To assign a programme number to a channel, find the correct number by pressing the or button. If necessary, select the broadcasting standard required. Press the or button to select Colour System and press the or button. The colour standards are displayed in the following order. (depending on the model). Auto - PAL - SECAM or Auto - NT3.58 - NT4.43 - PAL60 Channel C - Move Manual Store Prog. 9 Channel Colour System Auto Search 59MHz Adjust Sound System BG Store ? Return Press the or button to select Sound System and press the or button. The sound standards are displayed in the following order. (depending on the model). BG - DK - I - L C - Move Page 20 Storing Channels Manually (continued) Manual Store Prog. 9 Channel C - Move Colour System Auto Search 59MHz Adjust Sound System BG Store ? Return If you know the number of the channel to be stored, see the following steps. Press the or button to select Channel. Press the or button to select C (Air channel) or S (Cable channel). Press the button. Press the numeric buttons (0~9), or button to select indicate the required number. Add/Delete Prog. 5 ----------------------------------------Added Deleted Deleted Deleted Deleted Adjust Page 22 Sorting the Stored Channels This operation allows you to change the programme numbers of stored channels. This operation may be necessary after using ATM. You can delete those channels you do not want to keep. TV Add/Delete Sort Name LNA Child Lock Edit : Off Press the or button to select Sort. Press the ENTER ( ) button. Result: The Sort menu is displayed. TV Prog. 6 --------------------- Select the channel that you wish to move by pressing the or button. Press the ENTER ( ) button. Select the number of the programme to which the channel is to be moved by pressing the or button. Press the ENTER ( ) button. Result: The channel is moved to its new position and all other channels are shifted accordingly. --------------------- TV Prog. 6 --------------------------------------------- ----Sort Repeat Steps 6 to 7 until you have moved all the channels to the required programme numbers. Page 23 Assigning Names to Channels Channel names will be assigned automatically when channel information is broadcast. These names can be changed, allowing you to assign new names. TV Add/Delete Move Enter Return Sort Name LNA Child Lock Press the or button to select Name. Press the ENTER ( ) button. Result: The Name menu is displayed with the current channel automatically is selected. Name Name --------------------- If necessary, select the channel to be assigned a new name by pressing the or button. Press the ENTER ( ) button. Result: Arrow indications are displayed around the name box. Press the or button to select a letter (A~Z), a number (0~9) or a symbol (-, space). Move on the previous or next letter by pressing the or button. When you have finished entering the name, press the ENTER ( button to confirm the name. ) Name Prog. 6 --------------------Name --------------------- Page 24 Using the LNA (Low Noise Amplifier) Feature This function is very useful in the situation that the TV is used in weak signal. LNA amplifies the TV signal in the weak signal area, but not noise. Press the or button to select LNA. Press the ENTER ( ) button. Result: The options available are listed. Select On or Off by pressing the or button. Press the ENTER ( ) button to confirm. If the picture is noisy with the LNA set to On, select Off. LNA setting is to be made for each channel. : Off Off On Page 25 Activating the Child Lock Activating the child lock This feature allows you to lock the television so that it cannot be switched on via the front panel. It can, however, still be switched on via the remote control. Thus, by keeping the remote control away from unauthorised users, such as children, you can prevent them from watching unsuitable programme. TV Prog. 6 ----------------------------------------Unlocked Unlocked Unlocked Unlocked Unlocked Move Enter Return Press the or button to select Child Lock. Press the ENTER ( ) button. Result: The Child Lock menu is displayed with the current channel automatically is selected. Blue Screen is displayed when the Child Lock is activated. Child Lock Press the or button to select the channel to be locked. Press the ENTER ( ) button. To lock the channel, select Locked by pressing the or button (to unlock the channel, select Unlocked). Press the ENTER ( ) button to confirm. Unlocked Unlocked Unlocked Unlocked Unlocked 4:44 PM Page 26 Displaying Information You can view the channel information and setting status you select by pressing the INFO ( ) button on the remote control. Picture Sound 00 : 00 : Dynamic : Music Changing the Picture Standard You can select the type of picture which best corresponds to your viewing requirements. Return TV Mode Custom Colour Tone Colour Control Film Mode Size DNIe PIP Move Picture : Dynamic : Normal : Off : 16 : 9 : On Press the MENU button. The main menu is displayed. Result: Press the or button to select Picture. The options available in the Picture group are Result: displayed. Press the ENTER ( ) button. The Mode is selected. Result: Press the ENTER ( ) button again. The options available are listed. Result: Select the option by pressing the or button. The following modes are available depending on the Result: input source. Dynamic - Standard - Movie - Custom High - Middle - Low - Custom (PC or DVI Mode). Press the ENTER ( ) button to confirm. TV Mode Custom Colour Tone Colour Control Film Mode Size DNIe PIP Move Picture : Dynamic Dynamic Standard : Normal Movie : Off Custom : 16 : 9 : On You can also set these options simply by pressing the P.MODE ( : Picture Mode) button. Page 27 Adjusting the Picture Settings Your television has several settings which allow you to control picture quality. Picture : Dynamic : Normal : Off : 16 : 9 : On Return Page 33 Selecting the Picture Size You can select the picture size which best corresponds to your viewing requirements. Size Auto Wide 16:9 Panorama Zoom 14:9 4:3 Press the or button until the Size is selected. Press the ENTER ( ) button Press the or button to change the setting. Auto Wide : Expanding and pulling up the picture from 4:3 to 16:9 ratio. 16:9 : Sets the picture to 16:9 wide mode. Panorama : Use this mode for the wide aspect ratio of a panoramic picture. Zoom : Magnify the size of the picture vertically on screen. 14:9 : Magnify the size of the picture more than 4:3. 4:3 : Sets the picture to 4:3 normal mode. PC to DVI Mode TV Auto Wide 16:9 Panorama Zoom 14:9 4:3 Size ) button to confirm. You can select these options by simply pressing the P.SIZE button on the remote control. The picture size can not be changed in the PIP mode. Depending on the input source, the P.SIZE options may vary. Positioning and Sizing the screen using Zoom Resizing the screen using the Zoom enables the positioning and sizing of the screen to up/down direction using the or button as well as the screen size. Move the screen up/down using the or button after selecting the by pressing the or button. Resize the screen vertically using the or button after selecting the by pressing the or button. Screen enlargement operates only in TV/Video/ S-Video/Component1,2 input modes. PC/DVI modes prevents the screen enlargement function. Page 34 Selecting the Film Mode You can automatically sense and process film signals from some sources and adjust the picture for optimum quality. Press the MENU button. Result: The main menu is displayed. Press the or button to select Picture. Result: The options available in the Picture group are displayed. Press the ENTER ( ) button. Picture : Dynamic : Normal : Off Off : 16 : 9 On : On Press the or button to select Film Mode. ) button. Press the ENTER ( Press the or button to change the setting (Off or On). On : Automatically senses and processes film signals from some sources and adjusts the picture for optimum quality. Off : Switches off the Film Mode. Enter Return Not available in the PC, Component (480P, 576P, 720P, 1080i) or DVI modes. Mode discrepancies, such as turning off Film Mode while viewing a film source or turning on Film Mode while viewing non-film sources, may affect the picture quality. Press the or button until the Picture is selected. Result: The options available in the Picture group are displayed. ) button. Picture : Dynamic : Normal : Off On : 16 : 9 : On Off Demo Enter Return Press the or button until the DNIe is selected. Press the ENTER ( ) button. Press the or button to change the setting. On: Switches on the DNIe mode. Off: Switches off the DNIe mode. Demo (Option): The screen before applying DNIe appears on the right and the screen after applying DNIe appears on the left. Press the ENTER ( ) button to confirm. Page 35 Setting the Blue Screen TV Setup Time Language AV Setup Digital NR : On Miscellaneous : English If no signal is being received or the signal is very weak, a blue screen automatically replaces the noisy picture background. If you wish to continue viewing the poor picture, you must set the Blue Screen mode to Off. PC Setup Enter Miscellaneous Melody : Off : Off Plug & Play Blue Screen Press the or button to select Miscellaneous. Press the ENTER ( ) button. Result: The options available in the Miscellaneous group are displayed. Press the or button to select Blue Screen. Press the ENTER ( ) button. Press the or button to change the setting (Off or On). Press the ENTER ( ) button to confirm. Move TV Melody Plug & Play Blue Screen Enter Miscellaneous : Off : Off Off On Blue Screen is displayed while no signal from the external device in the External Mode, regardless of the Blue Screen Setting. Setting the Melody Sound You can hear clear melody sound when the television is powered on or off. TV Melody Miscellaneous Melody Plug & Play Blue Screen : Off Off On : Off Press the or button to select Melody. Press the ENTER ( ) button. Press the or button to change the setting (Off or On). 5:21 PM Page 36 Viewing the Picture In Picture (PIP) You can display a sub picture within the main picture of TV program or external A/V devices. In this way you can watch TV program or monitor the video input from any connected devices while watching TV or other video input. Press the MENU button. The main menu is displayed. Result: Press the or button to select Picture. The options available in the Picture group are displayed. Result: Press the ENTER ( ) button. Select a position by pressing the or button. Press the ENTER ( ) button. Press the or button to select Prog. Press the ENTER ( ) button. Select the channel that you want to view through sub picture by pressing the or button. Press the ENTER ( ) button. PIP PIP Source Swap Size Position Prog. : On : TV : : : P08 P 1 If the sub picture is no signal and the main picture is output from a Component, PC or DVI signal, the sub picture will be blue. If the main picture is output from a Video signal, the sub picture will be black. Easy functions of remote control. Buttons PIP ON SWAP Feature Used to activate or deactivate the PIP function directly. Used to interchange the main picture and the sub picture. When the main picture is in the Video mode and the sub picture is in the TV mode, you might not hear the sound of main picture when pressing the SWAP ( ) button after changing the sub picture channel. At this time, reselect the channel of main picture. POSITION SOURCE Used to move the sub picture counterclockwise. Used to assign a source of sub picture (TV, Ext.1, Ext.2, Ext.3, AV1, AV2, S-Video, Component1, Component2, PC, DVI). Used to select a size of sub picture (Large, Small, Double1, Double2). Used to scan every memorized channel in order. To stop scanning, press it again. Used to select the channel of sub picture. SIZE SCAN P / Page 38 Using the Digital NR (Digital Noise Reduction) Feature TV Time Language AV Setup Digital NR Miscellaneous PC Setup If the signal received by your television is weak, you can activate this feature to help reduce any static and ghosting that may appear on the screen. Press the MENU button. The main menu is displayed. Result: Press the or button until the Setup is selected. The options available in the Setup group are displayed. Result: Press the ENTER ( ) button. Setup : English : On Off On Press the or button until the Digital NR is selected. Press the ENTER ( ) button. Select Off or On by pressing the or button. Press the ENTER ( ) button to confirm. Page 39 Changing the Sound Standard You can select the type of special sound effect to be used when watching a given broadcast. TV Mode Sound : Standard : Off : Off : Off Equalizer Auto Volume SRS TSXT Internal Mute Press the or button to select Sound. Result: The options available in the Sound group are displayed. ) button. TV Mode Equalizer Auto Volume SRS TSXT Internal Mute : Off : Off Sound : Standard Standard Music Movie Speech The Mode is selected. ) button again. Select the option by pressing the or button. The sound effects are displayed in the following order. Standard - Music - Movie - Speech - Custom. : Off Custom You can also set these options simply by pressing the S.MODE ( : Sound Mode) button. Page 40 Adjusting the Sound Settings The sound settings can be adjusted to suit your personal preferences. TV Mode Equalizer Auto Volume SRS TSXT Internal Mute : Off : Off : Off Sound : Standard Press the MENU button. The main menu is displayed. Result: Press the or button to select Sound. The options available in the Sound group are displayed. Result: Press the ENTER ( ) button. Equalizer Press the or button to select Equalizer. Press the ENTER ( ) button. The Equalizer menu is displayed with the current Result: option. Select the option (volume, balance, equalizer) to be adjusted by pressing the or button. Press the or button to reach the required setting. When you are satisfied with the settings, press the MENU ( button to store them. ) 1K 3K 10K Adjust Move Return If you make any changes to the equalizer settings, the sound standard is automatically switched to the Custom. Adjusting the Volume Automatically Each broadcasting station has its own signal conditions, and so it is not easy for you to adjust the volume every time the channel is changed. This feature lets you automatically adjust the volume of the desired channel by lowering the sound output when the modulation signal is high or by raising the sound output when the modulation signal is low. TV Mode Equalizer Auto Volume SRS TSXT Internal Mute Sound : Standard : Off : Off : Off TV Mode Equalizer Auto Volume SRS TSXT Internal Mute : Off Off : Off On : Off Sound : Standard Press the or button to select Auto Volume. Press the ENTER ( ) button. Select the option (Off or On) by pressing the or button. Press the ENTER ( ) button to confirm. Page 41 Setting the TruSurround XT TruSurround XT. Page 53 Using the TeleWeb Menu after Displaying the TeleWeb Bookmarking the current page. TeleWeb On the page you wish to add to the bookmark list press the MENU button. Result: The TeleWeb menu is displayed. Bookmark List Add Bookmark Home Page Current programme. Selected provider. Service is loading. Move Add Press the or button to select Add Bookmark. Press the ENTER ( ) button. Result: The current page is added to the bookmark list. You can register up to six pages. If you add another URL when six bookmarks have been listed, the first URL will be deleted and the new URL will be listed as the sixth. TeleWeb Bookmark List Add Bookmark Home Page Entering the bookmarked URL. Press the MENU button. Result: The TeleWeb menu is displayed with the Bookmark List is selected. ) button. Current programme. Selected provider. Service is loading. Move Enter Exit ------- The stored URLs are listed. Bookmark List SAMSUNG ELECTRONICS TARA SYSTEMS KIA MOTORS MICRONAS OLYMPIC NEWS ECONOMICS Move Delete Exit To erase a URL, select a required URL and press the TeleWeb (red) button. Result: The selected URL is erased and the listed URLs moves up one by one. Select the URL you wish to enter by pressing the or button and press the ENTER ( ) button. Result: The current page moves to the selected URL and the menu is disappeared. Opening the Home Page Bookmark List Add Bookmark Home Page Current programme. Selected provider. Service is loading. Move Enter Press the MENU button. Result: The TeleWeb menu is displayed. ------- Press the or button to select Home Page. Press the ENTER ( ) button. Result: It moves to a home page. Page 54 Easy Accessing the TeleWeb Service via the Remote Control Button TeleWeb Function Activate or deactivate the TeleWeb feature directly. Display the forward page of the TeleWeb Display the backward page of the TeleWeb Select a option. Confirm your choice. Page 55 Connecting to the External Devices MONITOR OUT are used for the equipment with an RGB output, such as video game devices or video disc players. Rear of the TV (Input/Output) or Decoder / Video game device Video disc player Satellite receiver This end can be fitted with: A SCART connector Three RCA connectors (VIDEO + AUDIO-L/R) If you have a second VCR and wish to copy cassettes tape, connect the source VCR to EXT.1(EXT.2, EXT.3, AV1 or AV2) and the target VCR to MONITOR OUT so that you can redirect the signal from EXT.1(EXT.2, EXT.3, AV1 or AV2) to MONITOR OUT. and the VCR to MONITOR OUT so that you can redirect the signal from EXT.1(EXT.2, EXT.3, AV1 or AV2) to MONITOR OUT. When you wish to record a programme, connect the receiver to EXT.1(EXT.2, EXT.3, AV1 or AV2) Page 60 Setting up Your PC Software (Windows only) The Windows display-settings for a typical computer are shown below. But the actual screens on your PC will probably be different, depending upon your particular version of Windows and your particular video card. But even if your actual screens look different, the same, basic set-up information will apply in almost all cases. (If not, contact your computer manufacturer or Samsung Dealer.) On the windows screen, select in the following sequence: Start Settings Control Panel. When the control panel screen appears, click on Display and a display dialog-box will appear. Select the Settings tab in the display dialog-box. The two key variables that apply the television-PC interface are Resolution and Colours. The correct settings for these two variables are: Size (sometimes called Resolution) - 1024 x 768 pixels Colour 32-bit colour Shown at left is a typical screen for Display dialog box. If a vertical-frequency option exists on your display settings dialog box, the correct value is 60 Hz. Otherwise, just click OK and exit the dialog box. Page 61 Input Mode (PC/DVI) Both screen position and size will vary depending on the type of PC monitor and its resolution. The table below shows all of the display modes that are supported: Video Signal Dot x Line 640 x x x x 480 Vertical frequency (Hz) 70.086 85.080 85.080 70.087 85.039 59.940 72.809 75.000 85.008 56.250 60.317 72.188 75.000 85.061 60.000 72.000 75.000 60.004 70.069 75.029 84.997 Horizontal frequency (kHz) 31.469 37.861 37.861 31.469 37.927 31.469 37.861 37.500 43.269 35.156 37.879 48.077 46.875 53.674 29.838 36.072 37.650 48.363 56.476 60.023 68.677 Vertical polarity N N P P P N N N N N/P P P P P Horizontal polarity P P N N N N N N N N/P P P P P IBM PC / AT Compatible 800 x 600 848 x 480 nVidia chipset N N P P N N P P 1024 x 768 (N: Negative / P: Positive) The interlace mode is not supported. The television might operate abnormally if a non-standard video format is selected. The PC text quality for PS-37S4A is optimum in XGA mode (1024 x 768@60Hz). When this television is used as a PC display, 32-bit colour is supported. Your PC display screen might appear different depending on the manufacturer (and your particular version of Windows). Check your PC instruction book for information about connecting your PC to a television. If a vertical and horizontal frequency-select mode exists, select 60Hz (vertical) and 48.3kHz (horizontal). In some cases, abnormal signals (such as stripes) might appear on the screen when the PC power is turned off (or if the PC is disconnected). If so, press the SOURCE button to enter the Video mode. Also, make sure that the PC is connected. When connecting a notebook PC to the television, make sure that the PC screen is displayed through only the television (Otherwise, random signals might appear). When horizontal synchronous signals seem irregular in PC mode, check PC power saving mode or cable connections. The Display Settings Table above complies to the IBM/VESA standards, and based on the analog input. The DVI support mode is regarded as same to the PC support mode. The best timing for the vertical frequency to each mode is 60Hz. Page 62 Pin Configurations SCART Connector (EXT.1/EXT.2/EXT.3) Pin 11 Pins 5, 7, 9, 11, 13, 15 and 16 are used for RGB processing and are only wired on the EXT.1 or EXT.3 connector. Signal Audio output R Audio input R Audio output L Audio chassis return Video chassis return (RGB blue) Audio input L RGB blue input Switching voltage Video chassis return (RGB green) AV-Link (Ext 1) RGB green input Pin Signal Video chassis return (RGB red) RGB red input Blanking signal (RGB switching) Video chassis return Blanking signal ground Video output Video input Screening/chassis return PC Input Connector (15Pin) Pin 15 PC IN Red (R) Green (G) Blue (B) Grounding Grounding (DDC) Red (R) Grounding Green (G) Grounding Blue (B) Grounding Reserved Sync Grounding Grounding Data (DDC) Horizontal sync. Vertical sync. Clock (DDC) DVI Input Connector (24Pin) Pin Signal T.M.D.S. Data2T.M.D.S. Data2+ T.M.D.S. Data2/4 Shield T.M.D.S. Data4T.M.D.S. Data4+ Clock (DDC) Data (DDC) Not Connected T.M.D.S. Data1T.M.D.S. Data1+ T.M.D.S. Data1/3 Shield T.M.D.S. Data3T.M.D.S. Data3+ +5V Power 5V Grounding Hot Plug Detect T.M.D.S. Data0T.M.D.S. Data0+ T.M.D.S. Data0/5 Shield T.M.D.S. Data5T.M.D.S. Data5+ T.M.D.S. Clock Shield T.M.D.S. Clock+ T.M.D.S. Clock- Page 63 Troubleshooting: Before Contacting Service Personnel Before contacting. Check the volume. Check whether the volume MUTE ( has been pressed. ) button on the remote control Normal picture but no sound No picture or black and white picture Sound and picture interference Adjust the colour settings. Check that the broadcast system selected is correct. Try to identify the electrical appliance that is affecting the television, then move it further away. Plug your television into a different mains socket. Check the direction, location and connections of your aerial. This interference often occurs due to the use of an indoor aerial. Replace the remote control batteries. Clean the upper edge of the remote control (transmission window). Check the battery terminals. Blurred or snowy picture, distorted sound Remote control malfunctions Page 64 Programming the Remote Control for Other Components CABLE CODE LIST DVD CODE LIST Anvision GI Hamlin Hitachi Jerrold Macom Magnavox Oak Panasonic Philips Pioneer RCA Regal Regency SA Samsung Scientific Atlanta Sprucer Stargate 2000 Sylvania Texscan Tocom Universal Viewstar Wamer amex Zenith 017, 003, 024, 031 025, 030 038, 039 025, 030 019, 023, 003, 022, 027, 037, 044 019, 021, 023, 028 018, 020, 004, 044 014, 022, 015, 023 042, 043 000, 001, 002, 003, 004, 005, 006, 007 042, 033, 034 019, 021,023, 017, 029, 035, 037, 045 Samsung Samsung JVC PROSCAN, RCA Panasonic LG(Goldstar) Sony Denon Curtis Mathes Page 65 Programming the Remote Control for Other Components (continued) VCR CODE LIST Admiral 020 Aiwa 025 Akai 027, 004, 032 Audio Dynamics 007, 026 Bell&Howell 018 Broksonic 022 Candle 002, 006, 003, 015, 008, 055 Canon 021, 056 Citizen 002, 006, 003, 015, 008, 055 Colortyme 007 Craig 002, 024 Curtis Mathes 002, 017, 007, 008, 021, 025, 056, 064, 066 Daewoo 003, 015, 010, 011, 012, 013, 014, 016 DBX 007, 026 Dimensia 017 Dynatech 034 Emerson 001, 003, 006, 021, 022, 025, 030, 032, 034, 040, 047, 050, 052, 060, 063, 065, 066, 067, 069 Fisher 018, 024, 028, 029, 048, 051, 061 Funai 025 General Electric 017, 002, 021, 005, 056 Go Video 002 LG(Goldstar) 006, 007, 008, 009, 010 Harman Kardon 007 Hitachi 019, 025, 041, 042 Instant Replay 021 JC Penny 002, 007, 018, 019, 021, 026, 037, 041, 054, 056 JVC 007, 008, 018, 021, 026, 037 Kenwood 007, 008, 018, 026, 037 KLH 070 Lioyd 025 Logik 038 LXI 025 Magnavox 021, 036, 056, 059 Marantz 018, 021, 007, 026, 037, 008, 062, 036 Marta 006 MEI 021 Memorex 006, 021, 024, 025 MGA 034 Midland 005 Minolta 019, 041 Mitsubishi 019, 034, 041, 046 Montgomery Ward 020 MTC 002, 025 Multitech 002, 025, 038, 005 NEC Optimus Panasonic Pentax Pentex Research Philco Philips Pioneer Portland PROSCAN Quartz Quasar Radio Shack/Realistic 018, 007, 026, 037, 008, 062, 021, 056, 071, 072 019, 021, 036, 056, 059 021, 036 019, 026, 039, 053 015, 049, 018 021, 056 006, 018, 020, 021, 024, 025, 029, 034, 048, 056 RCA 017, 019, 002, 021, 035, 041, 043, 057, 068 Samsung 000, 001, 002, 003, 004, 005 Sansui 026 Sanyo 018, 024 Scott 003, 047, 052, 067 Sears 018, 019, 006, 024, 028, 029, 041, 048, 051 Sharp 020, 034, 045, 015 Shimom 027, 033, 038, 058 Signature 025 Sony 027, 033, 044 Sylvania 021, 025, 036, 056, 059 Symphonic 025 Tandy 018, 025 Tashika 006 Tatung 037 Teac 025, 037, 068 Technics 021 Teknika 021, 006, 025, 031 TMK 066 Toshiba 003, 019, 029, 051, 052 Totevision 002, 006 Unitech 002 Vector Research 007, 026 Victor 026 Video Concepts 007, 026 Videosonic 002 Wards 002, 003, 006, 019, 020, 021, 024, 025, 034, 038, 041 Yamaha 007, 008, 018, 026, 037 Zenith 023, 027, 033, 073 Total experience On the whole, the image projected on the silver screen was pretty good. It had a rather a cinema-like look. It was clearly not processed the same way TV's process images -- both in the good and the bad. The tonality was nice, but the separation between in-focus and background elements wasn't. Moreover, because of the way an LCD projector works, there was occasional temporal aliasing -- flashes of "rainbow" color as the eye moved rapidly across the screen. A projector/silver screen setup is clearly for the dedicated home theater only. Because of the time the projector takes to warm up, the need to properly darken the room, and the sheer size of the picture, ordinary TV is simply out of the question. And for pure image quality, the projector in the end doesn't match up to comparably priced plasma, LCD, or rear-projection sets. But for a really "cinema-like" experience, it's great -- assuming you have the room to spare. Since I already had a pair of decent speakers and my amplituner was becoming unusable, that was the first thing on my shopping list. After interviewing some audiophile friends of mine and determining that an amplituner they found respectable cost way more than I intended to pay for it, I started doing some research on the net and in magazines. This is the conclusion at which I arrived: Any integrated amplituner from a major brand with the connectors needed to plug in your stuff will do. In practice, this means an entry-level 5.1 home theater amplituner from, say, Yamaha, Sony, or Pioneer. Even the cheap ones are really good nowadays, with S/N ratios and harmonic distortion levels better than on pretty respectable gear, say, 15-20 years ago. There's actually precious little difference in the way the damn things sound until you get up to the 700+ euro price level -- and at that point, the amplituner would really require better speakers to do it justice, at which point we're easily looking at $3000+ for the sound only. What you get by going up from the basic 300-or-so-euro model is nicer finish, more and better connectors, eventually 6.1 and 7.1 sound capability, a snazzier remote, and so on. But the sound is really not that different -- and in any case, the speakers, the space you set them up in, and your ears will matter more than the small differences between them. However, I would recommend staying away from the integrated DVD+amplituner "home theater packages" if you care at all about the "natural" quality of the sound: those things sound just plain weird. If you're considering a "surround" home theater speaker kit, do yourself a favor and get yourself a pair of something like these instead. They'll cost less and sound way better. The 7.2's are discontinued, but I'm sure the 8.2's you can get for under 200 euros are not worse. Oh, and I've no reason to think Wharfedales are better than any of the other major brands; capitalism being what it is, I doubt any of 'em are actually bad and many could be better. The numbers -- 5.1, 6.1, 7.1 -- indicate the number of channels on the amp. That is, you can attach 5, 6, or 7 speakers and a subwoofer (for the low sounds) to it. My not-so-humble opinion is that even 5.1 is overkill most of the time, and 6.1 and 7.1 is just nonsense unless you actually have a room you can dedicate solely to home theater and enough rather nice speakers to make good use of the sound. in which case you probably won't be looking at 300-euro amplituners anyway. Several audiophiles I've both talked with and read are emphatically of the opinion that a solid pair of stereo speakers will whup any boxed 5-6-7.1 set any day of the week -- even in the "three-dimensionality" of the sound. A boxed set will sound better than a TV, and having a center speaker will help cover up some of the most glaring deficiencies in cheap speakers, but a pair of halfway decent (which does not necessarily mean super-expensive) primary speakers will always sound much better. This certainly jibes with my experience of the Sony 5.1 boxed set that I discuss above. My choice: The Yamaha RX-V350. Why? Because I got a demo copy for cheap, it was decently built, had good connectors for the main speakers, and I imagined I liked the sound just a hair better than the corresponding Sony and Pioneer that I also listened to. Of course, a few days later I came across the next model up for just about the same price I paid (another demo copy). But really, I'm pretty sure that I'd have been just as happy with any of them. What I learned: Steer clear of "surround" boxed sets. Instead, go with a dedicated amplituner and a pair of dedicated main speakers. Even inexpensive ones will sound much better than a kit that may actually end up costing more. If you want a significant improvement over that, be prepared to budget several thousand euros -- and while certainly real, the difference between a 500 euro setup and a 10,000 euro setup may be smaller than you think. (Yes, I have listened to both.) If you have to choose between good Another Kind of Picture primary speakers and anything else, get the good primary speakers. Then add a subwoofer (but only if needed -- even pretty small speakers can handle bass better than you might expect nowadays, and adding a poor subwoofer would only add more bass rather than better bass). Consider the other "effects" speakers only if you have good primaries and you have a dedicated room big enough to set them up so that they're not barking in your ear if you're sitting close to them. If your room is big and you'll have people sitting towards the side, the center speaker will help. If so small that the people sitting towards the sides will be sitting just by a rear speaker, the rear speakers may do more harm than good. Whatever you add, buy at roughly the same quality as your main speakers; if they're worse, they'll only muddy up the sound, and if they're better, you won't be making the most of them. My experience: Well, a new cheap Yamaha sounds better than an old dirty NAD, no question about it. The sound is clearer and "breathes more easily," there's noticeably less amplifier hum, and of course everything works -- and it's designed to work with a home theater. The sound also compares surprisingly well to my father's way more expensive Genelec setup -- nope, it's not in the same class, but when I listen to his stuff and then come home to listen to my stuff, the transition is less painful than I'd have expected. No gripes. Picture A TV is a pretty major purchase for a consumer durable. Therefore, I put a good deal of effort into researching them, and just looking at them. I carried a copy of the Two Towers DVD in my satchel and asked to watch it on some models that interested me. It was an interesting quest, and one where my original specifications got turned around almost 180 degrees. I started on my quest because my eye had been caught by some pretty impressive-looking and not horribly expensive LCD TV's in the 30-32 inch category. Some ViewSonics were on display at a store I often visit. The size felt about right for the pretty small space I have, too. I also felt that it would be very important to have HDTV capability for the not-too-distant future. So I originally started scouting for a 30-32 inch LCD TV with a vertical resolution of at least 768 points. I fairly quickly discovered a number of things: The panels are all pretty good, but the image processing varies. While the quality of the picture on LCD TV's looked very impressive at first sight -- sharp, punchy, colorful -- the more I looked at them, the less I liked them. Almost all of them had an unnatural, "plasticky" look to them, much like photos that have been de-noised with too much NeatImage and then had the poop sharpened out of them. I also found both the higlights and the shadows often lacking in detail, and many of them had weird and often unpleasant shifts towards the blue in shadow areas. As a group, LCD TV's, especially the inexpensive ones I had originally looked at, often had the same things wrong with them as inexpensive point-and-shoot digicams: oversharpened, overly colorful, and overly contrasty at the expense of shadow and highlight detail. Clearly, response times were an issue on some of them, too: in pan shots, the background smeared into a blur and "flattened out" the picture where others (and especially any plasma screen) would retain a crisp image even when it was moving very quickly. The exceptions: Panasonic, Sony, and to a degree, LG. When looking at the TV's as a group and picking out the ones that I really like, these three invariably stood out, in this order. The Panasonics in particular had a low-key, natural look about them, with excellent skin tones and especially skin textures -- the people looked like people instead of plastic mannequins. The Sonys were close, and the LG's not far behind especially when looking at DVD's. The Panasonics were clearly the best at making the most of a poor signal. Interestingly, none of these were the ones that initially grabbed my attention -- my eye was rather naturally drawn to the punchiest of the bunch first, and these three weren't there. However, when looking at the ranks of TV's on the walls, the ones that I liked most were never the LCD's. They were the plasma panels. I can't say exactly what it was that I liked more about them, especially compared to the best LCD's, but the image just somehow looked more natural and "film-like" on them. Moreover, the LCD's that I really liked, especially the Panasonics, cost almost as much as plasma screens anyway. So, at the next step I moved my sights up to 37-inch plasma screens: the biggest size I could comfortably fit in the space I had in mind. So, more research followed. I read reviews and studied specs, and looked at pictures of the things: after all, you'll be seeing a TV even when it's off, so being ugly is a major point against it. I originally stuck to my original specification of "HDTV capable" which meant that I eliminated all of the so-called "EDTV" plasma screens, with a resolution of only 480 lines vertically. This meant that I was looking at low-end 37-inchers from Samsung, Sony, Thompson, and a few others -- the high-resolution Panasonic being out of my budget. The one I liked most from the specs and design was the Samsung PS-37S4A. It looks downright cool, with a simple, understated elegance, has a "megapixel" panel (1024 x 1024), decent contrast ratio, all the connectors you could wish for, and the price was right. I had already practically made up my mind to buy it when I marched into the store, trusty Two Towers in hand. But I didn't buy it. The reason was that I just couldn't get the kind of picture out of it that I wanted, even from the DVD let alone broadcast TV. The shadows were completely blocked, and the skins were polished to a plasticky sheen worthy of Ken and Barbie. Which only goes to show that specs can tell you only so much. So I informed the salesman who had kindly set up a DVD player for me that unfortunately that one was out, but he had a really good chance to sell me something else. What he said was, "Fine: let me plug this into that one; yeah, I know it's only EDTV but I think you'll like it, and if you do, I can sell it for the same price as the Samsung." What "it" was, was the Panasonic TH-37PA50 -- the little brother to the HDTV-capable (and good deal more expensive) TH-37PV500 (which they also had on display). As far as I can tell, this model is not sold in the US; however, the same panel appears to be used on the "industrial" series TH-37PWD7UY []. Hell yeah, I liked it. Once I turned down the "everything at 11" store settings, the picture was everything I wanted: subtle shadow and highlight detail, beautiful skin texture, beautiful but low-key color, and a "three-dimensional" look that very few of the TV's had managed to create. Yep, it only has about half the pixel count of, say, the Samsung, but at normal viewing distance of a moving picture, there was no noticeable difference in sharpness. What I don't know, of course, is how it will look with an HDTV signal compared to higher-resolution screens, but I have a feeling it will acquit itself well: even on a DVD with pretty good production values like Two Towers, the weak link in resolution is clearly the signal and not the panel -and a higher-resolution signal won't make the shadows and highlights look any better on the Samsung. The Panasonic TH-37PA50. In my opinion, it had the best picture of any TV in this size class irrespective of pixel count (other than its higher-pixel-count sibling, the TH-37PV500. and that was damn close to a draw). (Picture ripped off the Panasonic website; presumably it's there for this kind of use so I don't think I'm breaking any rules here.) So there we are, my choice for TV was the Panasonic TH-37PA50: just like with cameras, the number of pixels turned out to be less important than the quality of the pixels. However, there was another one on display with a similar price tag and a higher-resolution panel, better connectors, and very similar picture quality: the Sony KE-P37XS1. Unfortunately it was a no-go because of the speakers on the sides: it would have been too wide to fit in the spot I wanted for it. What I learned: There are differences between sets that are very real but not immediately obvious -- and that are often masked by the garish "look at me" settings the sets have in the store. By the numbers, LCD's ought to be better than plasma screens. However, I just didn't like the way they looked -- they looked "artificial" and "digital" in a bad way. Image processing is the bee's knees -- TV's with the same panels looked wildly different depending on what was feeding them. To my eye, Panasonic had the best image processing with Sony a close second and LG another standout; I wasn't too impressed by any of the others. And finally, just like with cameras, the quality of the pixels matters more than the quantity: a great EDTV looked much better than an average HDTV. I wouldn't have bought one otherwise. My experience: The picture on the Panasonic is even better at home than at the store, what with the better lighting and all. It's actually at its best in moderate room lighting: if the room is completely darkened, I have to turn the brightness down a fair bit which means that the shadows go "noisy" -- since plasma cells have a certain minimum discharge level, the TV renders the darkest tones by switching off some pixels, which shows to the eye as noise. If there's some ambient light and the brightness is turned up to match, this isn't visible. I would like the TV to have a few more connectors -- DVI and HDMI would be nice, although the three SCART's and the component connector do get the job done. I'm mostly thinking about the future: the only way to get an HDTV signal into the box is through the component connection, and if I have a Blu-Ray DVD player/recorder and HDTV tuner, what then? But I figure there will be some way. I like the looks and the build of the TV too, and the usability is at least acceptable, although I would like a dedicated control to switch between viewing modes (instead of going through the menu) and find the "wait for the second digit" pause when switching channels via the number pad mildly annoying. But for the picture quality at this price I'm happy to put up with them. Sources -- DVD and set-top Today's a confusing time to buy widgets that plug into your TV. Take two fairly basic things -- a DVD player and set-top box. With the DVD player, you're already confronted with a huge number of options: the different types of connectors, signals, regions, features, and media boggle the mind. There have got to be tens of thousands of possible combinations of features, some of which make a huge amount of difference and others precious little. Here, in a nutshell, is what I learned about the topic. Start with the connectors Make sure the DVD player has the best connector available on your TV -- or, at worst, the second-best. On the other hand, if you insist on the latest-generation connector that your TV doesn't even have, you might be overpaying by hundreds of euros for a feature you won't even need -- and that might cost peanuts by the time you've upgraded your set to take advantage of it. In rough order of preference, here are the connectors to look for: 1. HDMI. This is the new, fancy connector for HDTV signals. The image quality is great and the usability is simple. If your TV has one, it would be good that your DVD has it too. 2. Component video. This is actually a set of three connectors, for red, green, and blue separately. The quality is just as good as with HDMI, but it might need some mildly puzzling configuration to set up properly. 3. S-VHS. This is a distinct step down already: usually doesn't support progressive scan (see below). 4. SCART. This you'll find on any TV and DVD. Dead simple but not the best for quality. If, like me, you end up using component video rather than HDMI to connect the stuff, you'll need a separate audio connector. In order of preference, look for: 1. Optical digital. You need this to support multi-channel sound. 2. Coaxial digital. If you only have two speakers, this is just as good as optical digital. 3. RCA stereo. Being analog, it's more prone to interference, bad connections, and such, but if correctly set up you really have to have a golden ear to tell the difference. You need either HDMI or component video to support progressive scanning -- the single most important feature on a DVD player you want to plug into a nice TV. Progressive scanning A regular TV signal is interlaced. It consists of frames. A TV picture is made up of horizontal lines. In an interlaced signal, every other frame transmits the odd lines, and every other, the even lines. Now, a digitally-driven high-tech TV takes this signal, and interpolates the missing lines for each frame, and draws a solid picture every time. This makes for a less flickery, steadier image. However, a DVD contains MPEG video, which isn't (internally) interlaced: it generates the interlaced signal out of it. In other words, the DVD player grabs half the data in a still that makes up the movie, passes it to the TV, which interpolates the other half and shows it. Then it sends the other half. This is repeated for every frame. Obviously, this is pretty inefficient: information is lost, and artifacts (from the interpolation) are introduced. This is where progressive scanning comes in. A DVD player that supports progressive scanning will pass complete, "intact" frames of video to the TV. This results in a steadier, more detailed and more natural-looking picture. (Yeah, it really does -- the difference between my old interlaced DVD player and my new progressive-scan one is huge.) So much for the theory. In practice, this is what you need to know: Get a TV that supports progressive scanning. It'll have either an HDMI or component video input (or both). Get a DVD player that also supports progressive scanning. If both have HDMI connections, great. If not, component will give a picture that's just as good, but perhaps with a bit more trouble. Make sure to connect the two through the HDMI or component video connectors and set both to use progressive scanning. At least with component video, this isn't necessarily the case by default. If you need to use component video output and you want to use surround sound, make sure the DVD has digital optical audio out. Seriously. If there's one feature on your DVD player that you need to look out for, it's progressive scanning and connectors that support it and match your TV. Other features There's a lot of other features associated with DVD's that don't really mean much. For example, some machines convert a DVD signal to an HDTV signal. Others have fancy image enhancement circuits. These don't really matter: they can't make the signal any better than any ol' progressive scanning DVD player with good-quality connectors and plugged in with cables that aren't absolute rubbish. However, some other features that won't affect the picture will make a significant difference with regards to usability. You may or may not need them. You may want to make note of: Format support. In addition to vanilla store-bought DVD's and CD's, many players can handle CD's, SACD's, CD-R's with JPEG's, DivX's, MP3's, and so on. If there's some particular format you want to be able to play, keep an eye out for it: just because it fits in the tray doesn't necessarily mean it'll play. Multiregion support. Most if not all DVD players are "hackable" to work with DVD's from any "region." Some are multiregion out of the box. However, some are easier than others, and some can't be hacked without a special remote or other weird tricks. If you need to play DVD's bought elsewhere than your region, make sure you can get them to work. If it's a recorder, what happens when you switch it off? It makes a big difference to usability if the signals from the antenna and the video/audio in get passed through even when it's off. General usability. There's a big difference in sheer usability between different makes of DVD players. Some are beastly, others are really nice. Look at the remote. If it looks simple with barely any buttons, the player is probably a breeze to use. If it has a zillion tiny buttons you need a ball-point pen to even push, it's probably hell. Even if you use a different remote. This is a good indicator of how much attention the designers paid to usability. In particular, Sonys are usually excellent, while most of the no-names are pretty horrible. To record or not to record? Back in the day, life was simple. If you wanted to record a TV program for later viewing, you got a VCR and plugged it into your TV with an RF cable that only went one way, or, later, a SCART connector that was just as simple. Assuming you could figure out how to set the clock, you were pretty much all set. Not anymore -- especially if you have digital TV, cable, or satellite decoders to deal with. Your options for recording stuff include: Good ol' VHS or S-VHS VCR, either standalone or in combo with a DVD player. I suggest you forget it. The quality sucks. Get your precious tapes transferred onto DVD and chuck the tape player. DVD recorder, either with or without a hard disk. Digital TV decoder with a hard disk. These hard-disk equipped widgets are known as TiVO. My strong recommendation would be to go with a DVD recorder. I went with one without a hard disk, simply because all the hard disk adds is capacity and a tiny amount of convenience, but it costs rather a lot. Why? A DVD recorder allows you to record both analog and digital signals, from any source. Digital TV decoders are in a state of flux. DVD recorders are mature technology. A DVD recorder is cheaper than a digital TV decoder with a hard disk. You pay less for recording capability, and you get a more versatile, simpler, and longer-lived system. And last but definitely not least, a DVD recorder simplifies your connections and the control of your mess o' stuff a great deal. Even if you rarely record anything, you might still want to look into getting a DVD recorder just for the added convenience of not having to switch between different types of AV sources depending on what you're watching. Choosing a digi-TV decoder Again, a mess of options to choose from. Here I'm even less of an expert with the rest of my stuff. In my opinion, this matters the least. If it's a decent brand, not known to have serious teething issues like the crashes on some Nokias (shame!), and is known to have decent usability, any will do. If it's officially supported by your cable company, great. If it has more comprehensive connectors than the basic RF and SCART, awesome. If it supports both cable and antenna, fabulous -- you won't have to sell it even if you move. I picked the XSat CDTV410 -- a basic box with uncommonly good connectors and both cable and terrestrial support, for very little more than the cheapest cable-only boxes. The real trick is in stringing it together. Plugging it all in Here's the problem: if you did like me and got boxes with lots of good connections, you'll find that there are literally dozens of possible ways to string them together. Some will produce a better picture than others. Some will be much more straightforward in use than others. So it really makes a lot of difference how you connect them. Here's what I did: My antenna connects to my DVD recorder (RF In). The RF out on my DVD recorder connects to the RF In on my digital cable decoder. The RF out on my digital cable decoder connects to the RF in on my TV. Could it be simpler? Yeah, a little -- in particular, timed recording is a bit klugey. However, this is a far cry from having to toggle between different AV inputs at the TV, some of which are progressive and some of which aren't -- not to mention trying to figure out exactly what the DVD is recording when I'm watching something. Point being, it works without much hassle and gives the best quality my widgets can do. The remote The next obvious problem I encountered was that of the remote. Because I had happily shopped across brands, I had a set of wildly different and non-interoperable remotes. I was vaguely aware of the existence of universal remotes, but had no idea about the huge variety among them (nor that some of them cost more than my damn TV). Fortunately, I'm more than passingly familiar with usability design and have a very good idea of how I use the widgets I use, and how I want to use them. This helped me a great deal in finding something that worked -- indeed, something that works much better than I expected a universal remote could work. In the "LCD or hard button" fight, I'm a hard button fundamentalist. I really dislike touch-screens because of the lack of tactile feedback. I learn widgets with my fingers, and if my fingers have nothing to grasp, I find using a widget very very tedious. So I knew I wanted something with enough hard buttons for all the functions I regularly access. However, I also wanted something that's laid out nicely, with the most important buttons at the thumb and less important ones spread around, and all the buttons big enough to easily push. I knew I needed the "learning" capability, since I wanted to be able to control the entire setup as a unit and I expect to use the remote as the system evolves. And I didn't want to spend an insane amount of money for it. A design that "just works" -- the Philips SBC RU 760/00 doesn't draw undue attention to itself, and manages to simplify the insanely complicated task of controlling a raft of different home electronics devices elegantly and (almost) seamlessly. after a period of adaptation, of course. I was surprised to discover that something almost exactly like I wanted actually existed, and wasn't even very expensive. It's the Philips SBC RU 760/00. It has a deceptively low number of nice, chunky buttons, no screen of any kind, is fully programmable (and comes with preset codes for hundreds upon hundreds of devices), and is extremely well-built (the top is aluminum). The only thing I would've done differently is switching between devices -- now it's done sequentially with a single button, while there would've been room for separate buttons for each of the devices. However, what with the Shift button and the learning capability, it was a snap to set it up to control two or even three devices at a time in a single mode: for example, if I'm controlling the DVD player, now using the Shift button with the menu controls allows me to get into the TV's menus, and the audio controls with no counterparts in the DVD player control the amplituner. On the other hand, if I'm in TV mode, the play control buttons control the DVD. And if I'm listening to music, the controller controls my amplituner and my CD player at the same time. In other words, it integrates the mess of different brand widgets I have into a whole almost as seamless as the Bang & Olufsen wonder my boss has at his home. This little marvel is without a doubt the best remote I have ever used, and that includes the B&O one that came with a TV I used to own. Cables and stands and stuff The final item that needed purchasing was a TV stand. I took a tour of the local furniture stores with my wife, and was surprised to find out that apparently people are ready to pay more for a TV stand than I paid for the entire set of stuff I bought. Moreover, most of the stands were either made-to-measure (and very pricey) or rather inconvenient sizes. In the end and after a fair bit of legwork, I found one that was just about right -- fits at least four standard-width AV devices, has nice smoked-glass doors, carries 100 kg of weight, and even suits the color and styling of the TV. It was also a steal at 79 euros at IKEA. Cables are the subject for a major war among audiophiles. Some golden ears believe that high-end cables costing in the hundreds of euros per meter sound better than basic cables costing cents per meter. Fortunately, at this level it hardly matters. As long as the contacts are OK, any cable will work. In fact, for the coaxial digital audio connector I found that a regular ol' RCA-plugged cable worked fine, ohms be damned. (Digital cables either work or they don't; if they do, there's no conceivable way they could affect the sound.) The only cable where I did notice a difference between two I tried was the SCART connecting my DVD to the TV, so if you're in the mood of buying cables, I'd suggest you prioritize that one -- I needed one, to connect the digital tuner to the DVD recorder. You Gets What You Pays For The biggest eye-openers in the process of shopping for this home theater were the fact that resolution really isn't such a huge deal, and the difference that a good universal remote can make. Accepting the former brought a major leap in image quality at this price point, and discovering the latter removed the nagging feeling that it would be best to stick within a single brand for interoperability. It was also interesting to find out how much money it would be possible to spend for things that don't significantly improve the quality or the usability of the system -- made-to-measure stands, remotes with big color LCD touch-screens, exotic cables, and so on. The way technological development has brought what would have been a high-end home theater experience a few years ago within reach of mere mortals was a quite a discovery too. The old rule of "you gets what you pays for" that applies so well to cameras applies just as well to home theater. most of the time anyway. There's no doubt that spending five or ten times more would've gotten something seriously better -- real surround sound, a bigger and sharper screen, better "future-proofing" meaning that it could take full advantage of HDTV and Blu-Ray once they hit the market in a year or two and so on. However, whether it would've been five or ten times better is debatable. I believe that this kind of setup -- a low-end dedicated amplituner, low-end hi-fi speakers, good-quality EDTV plasma screen, good universal remote based on hard buttons -- does provide much more bang for the buck than most alternatives. It retains significant upgrade capacity (e.g. more speakers), and will handle the future just fine. Improving it significantly would easily double or triple the cost and require a much bigger space to put it in. Moreover, it's a great deal better than the default choice at this price point: an inexpensive set of surround speakers, a higher-resolution (possibly bigger) display from a cheaper brand with poorer image processing, a combined DVD player/amplituner. For a novice like me, the temptation is big to buy by the numbers -- more pixels, more channels, more speakers, more connections. Yet it's not the best way. Just like with cameras, with home theater sometimes less is more. Stuff I Looked At Some of the stuff I looked at (and think may be worth looking at) during my adventure: Philips SBC RU 760/00 [ Another Kind of Picture -- it would be hard to improve on the design of this universal remote. Seriously. Add more stuff and it becomes harder to use. Remove stuff and you won't be able to do everything you want. Rearrange it, and it'll be less accessible. Looks and feels good too. Kaxs TV stand by IKEA [ -- I honestly couldn't find one that looked better or fit better, and the cheapest of the ones I did find cost over three times at much. Not everything IKEA sells is a winner, but this one looks like it. Panasonic TH-37PV500 [ -- if money was no object, I would've bought this one. However, while it was better than the one I did buy, it wasn't twice as good (which is almost what it would've cost). Panasonic TH-37PA50 [ -- the Panny I did buy. These two are sort of like the EOS-350D and EOS-20D: the cheaper one is about 90% as good as the more expensive one at about 2/3 the price. Made sense for me. ProCaster AV4330 7.1 amplituner [] -- if I had felt less leery of buying "no-brand," and if the fittings on the front panel had struck me as a little bit better put together, I might have bought this one. It quacks like a bargain and it walks like a bargain, but I'm not confident enough to say that it isn't actually a duck to go and buy one. Wharfedale speakers [] -- not all of them expensive, and some of the cheap ones are emphatically not junk. I'm entirely certain that you could do enormously worse than going with, say, their 8.2 or Pacific Pi-10 speakers. Panasonic TX-32LX50 [ -- the best of the LCD TV's I looked at. Cost it, too. Samsung PS 37S4A [] -- I liked everything about it other than the picture. Sony KE-P37XS1 [ -- If the speakers had been below the screen rather than at the sides, I would probably have bought this one. To my eye, the image quality wasn't quite as good as the Panasonic I did buy, but the connectors and "future-proofing" is better. LG RZ-30LZ50 [] -- I liked this almost as much as the Sony and Panny LCD's. No surprise, though: it costs about as much too. Panasonic DMR-E65 [] (recently discontinued and available for pretty cheap) and Panasonic DMR-ES10 [ -- two very similar reasonably priced DVD recorders, either of which would probably have fit my bill. The E65 is older but has even more connections than the ES10, but the ES10 supports more formats and was multiregion out of the box. The E65 wasn't easily multiregionizable, and the other was out of stock, so I didn't buy either of them in the end. Samsung DVDR-120 [] -- This is the one I bought. Sort of combines the nicest features of both of the Panasonics -- excellent connections and comprehensive features, and very "polite" with the other equipment. Comes with a Another Kind of Picture remote that works with most common brands of TV, but is still a poor substitute for a real universal remote. I like it a lot so far. Unless otherwise indicated, all materials on this site are by Petteri Sulonen. They are licensed under the Creative Commons Attribution License []. I would appreciate it if you dropped me a line if you want to reproduce them. Any trademarks are property of their respective owners; their use is purely editorial and does not constitute an infringement. RR-US006 DSC-HX5v B C1000 DVD-RV60 PNA 4420 DSI XL X5270 WM-GX674 DSC-P100 GSM712 Series Kodak ZD15 114GT AL-800 840 Whistler 1530 Yamaha D24 VGN-N38l-W Paperport 10 LQ-580 Gt A8S-X HAR-D1000 1 8 Ipod Nano VSX-920 F5D7000 Thinkpad G41 CD1401B 21 VF-R30NKX 42PQ30 PRO 9000 UX-5500R Smart-SET Walkman ST-50 Nokia 6234 14950 A DES-1250G Coupe 2001 SCS 140 MM1402 LN-S4051D DSC-T90 PS-42C62H LX300 BLK Series 12000T PRO L20 Amplifier Canvas LP750 CDR770-00 SWM-745 CFD-G505 P4GPL-x- X2580 CPU-01015B MCV903 M9000 PM1000 EMX2000 LH-C6230W Zapper KDC-W534U CS-55P VGN-CR21s-W TC-21S3RC SL-PG590 BV2600UA League SS-325X Electromate 400 AVR 125 MCD2540E-M RTS-E10 Stand MD-1 164MF Fantom-X7 NA-16VG1 BJC-8500 CDL450 Vision 4 RM-PZ1SD Of Rage DCR-TRV285E ASR08T Terra SR-DVM70us-sr-dvm70 KVT-725DVD-B WF203ANW Front DPL580 Copia 9912 XR-1300R EX-Z850 DG834PN CO-tech1808 Review GS120V Cable 2
http://www.ps2netdrivers.net/manual/samsung.ps-37s4a/
CC-MAIN-2014-41
refinedweb
12,430
70.84
Contents - Rejection Notice - Abstract - Rationale - Basic Syntax - Extended Syntax - Semantics - Conclusion Rejection Notice A quick poll during my keynote presentation at PyCon 2007 shows this proposal has no popular support. I therefore reject it. Abstract Python-dev has recently seen a flurry of discussion on adding a switch statement. In this PEP I'm trying to extract my own preferences from the smorgasboard of proposals, discussing alternatives and explaining my choices where I can. I'll also indicate how strongly I feel about alternatives I discuss. This PEP should be seen as an alternative to PEP 275. My views are somewhat different from that PEP's author, but I'm grateful for the work done in that PEP. This PEP introduces canonical names for the many variants that have been discussed for different aspects of the syntax and semantics, such as "alternative 1", "school II", "option 3" and so on. Hopefully these names will help the discussion. Rationale A common programming idiom is to consider an expression and do different things depending on its value. This is usually done with a chain of if/elif tests; I'll refer to this form as the "if/elif chain". There are two main motivations to want to introduce new syntax for this idiom: - It is repetitive: the variable and the test operator, usually '==' or 'in', are repeated in each if/elif branch. - It is inefficient: when an expression matches the last test value (or no test value at all) it is compared to each of the preceding test values. Both of these complaints are relatively mild; there isn't a lot of readability or performance to be gained by writing this differently. Yet, some kind of switch statement is found in many languages and it is not unreasonable to expect that its addition to Python will allow us to write up certain code more cleanly and efficiently than before. There are forms of dispatch that are not suitable for the proposed switch statement; for example, when the number of cases is not statically known, or when it is desirable to place the code for different cases in different classes or files. Basic Syntax I'm considering several variants of the syntax first proposed in PEP 275 here. There are lots of other possibilities, but I don't see that they add anything. I've recently been converted to alternative 1. I should note that all alternatives here have the "implicit break" property: at the end of the suite for a particular case, the control flow jumps to the end of the whole switch statement. There is no way to pass control from one case to another. This in contrast to C, where an explicit 'break' statement is required to prevent falling through to the next case. In all alternatives, the else-suite is optional. It is more Pythonic to use 'else' here rather than introducing a new reserved word, 'default', as in C. Semantics are discussed in the next top-level section. Alternative 1 This is the preferred form in PEP 275: switch EXPR: case EXPR: SUITE case EXPR: SUITE ... else: SUITE The main downside is that the suites where all the action is are indented two levels deep; this can be remedied by indenting the cases "half a level" (e.g. 2 spaces if the general indentation level is 4). Alternative 2 This is Fredrik Lundh's preferred form; it differs by not indenting the cases: switch EXPR: case EXPR: SUITE case EXPR: SUITE .... else: SUITE Some reasons not to choose this include expected difficulties for auto-indenting editors, folding editors, and the like; and confused users. There are no situations currently in Python where a line ending in a colon is followed by an unindented line. Alternative 3 This is the same as alternative 2 but leaves out the colon after the switch: switch EXPR case EXPR: SUITE case EXPR: SUITE .... else: SUITE The hope of this alternative is that it will not upset the auto-indent logic of the average Python-aware text editor less. But it looks strange to me. Alternative 4 This leaves out the 'case' keyword on the basis that it is redundant: switch EXPR: EXPR: SUITE EXPR: SUITE ... else: SUITE Unfortunately now we are forced to indent the case expressions, because otherwise (at least in the absence of an 'else' keyword) the parser would have a hard time distinguishing between an unindented case expression (which continues the switch statement) or an unrelated statement that starts like an expression (such as an assignment or a procedure call). The parser is not smart enough to backtrack once it sees the colon. This is my least favorite alternative. Extended Syntax There is one additional concern that needs to be addressed syntactically. Often two or more values need to be treated the same. In C, this done by writing multiple case labels together without any code between them. The "fall through" semantics then mean that these are all handled by the same code. Since the Python switch will not have fall-through semantics (which have yet to find a champion) we need another solution. Here are some alternatives. Alternative A Use: case EXPR: to match on a single expression; use: case EXPR, EXPR, ...: to match on mulltiple expressions. The is interpreted so that if EXPR is a parenthesized tuple or another expression whose value is a tuple, the switch expression must equal that tuple, not one of its elements. This means that we cannot use a variable to indicate multiple cases. While this is also true in C's switch statement, it is a relatively common occurrence in Python (see for example sre_compile.py). Alternative B Use: case EXPR: to match on a single expression; use: case in EXPR_LIST: to match on multiple expressions. If EXPR_LIST is a single expression, the 'in' forces its interpretation as an iterable (or something supporting __contains__, in a minority semantics alternative). If it is multiple expressions, each of those is considered for a match. Alternative C Use: case EXPR: to match on a single expression; use: case EXPR, EXPR, ...: to match on multiple expressions (as in alternative A); and use: case *EXPR: to match on the elements of an expression whose value is an iterable. The latter two cases can be combined, so that the true syntax is more like this: case [*]EXPR, [*]EXPR, ...: The * notation is similar to the use of prefix * already in use for variable-length parameter lists and for passing computed argument lists, and often proposed for value-unpacking (e.g. a, b, *c = X as an alternative to (a, b), c = X[:2], X[2:]). Alternative D This is a mixture of alternatives B and C; the syntax is like alternative B but instead of the 'in' keyword it uses '*'. This is more limited, but still allows the same flexibility. It uses: case EXPR: to match on a single expression and: case *EXPR: to match on the elements of an iterable. If one wants to specify multiple matches in one case, one can write this: case *(EXPR, EXPR, ...): or perhaps this (although it's a bit strange because the relative priority of '*' and ',' is different than elsewhere): case * EXPR, EXPR, ...: Discussion Alternatives B, C and D are motivated by the desire to specify multiple cases with the same treatment using a variable representing a set (usually a tuple) rather than spelling them out. The motivation for this is usually that if one has several switches over the same set of cases it's a shame to have to spell out all the alternatives each time. An additional motivation is to be able to specify ranges to be matched easily and efficiently, similar to Pascal's "1..1000:" notation. At the same time we want to prevent the kind of mistake that is common in exception handling (and which will be addressed in Python 3000 by changing the syntax of the except clause): writing "case 1, 2:" where "case (1, 2):" was meant, or vice versa. The case could be made that the need is insufficient for the added complexity; C doesn't have a way to express ranges either, and it's used a lot more than Pascal these days. Also, if a dispatch method based on dict lookup is chosen as the semantics, large ranges could be inefficient (consider range(1, sys.maxint)). All in all my preferences are (from most to least favorite) B, A, D', C, where D' is D without the third possibility. Semantics There are several issues to review before we can choose the right semantics. If/Elif Chain vs. Dict-based Dispatch There are several main schools of thought about the switch statement's semantics: - School I wants to define the switch statement in term of an equivalent if/elif chain (possibly with some optimization thrown in). - School II prefers to think of it as a dispatch on a precomputed dict. There are different choices for when the precomputation happens. - There's also school III, which agrees with school I that the definition of a switch statement should be in terms of an equivalent if/elif chain, but concedes to the optimization camp that all expressions involved must be hashable. We need to further separate school I into school Ia and school Ib: - School Ia has a simple position: a switch statement is translated to an equivalent if/elif chain, and that's that. It should not be linked to optimization at all. That is also my main objection against this school: without any hint of optimization, the switch statement isn't attractive enough to warrant new syntax. - School Ib has a more complex position: it agrees with school II that optimization is important, and is willing to concede the compiler certain liberties to allow this. (For example, PEP 275 Solution 1.) In particular, hash() of the switch and case expressions may or may not be called (so it should be side-effect-free); and the case expressions may not be evaluated each time as expected by the if/elif chain behavior, so the case expressions should also be side-effect free. My objection to this (elaborated below) is that if either the hash() or the case expressions aren't side-effect-free, optimized and unoptimized code may behave differently. School II grew out of the realization that optimization of commonly found cases isn't so easy, and that it's better to face this head on. This will become clear below. The differences between school I (mostly school Ib) and school II are threefold: - When optimizing using a dispatch dict, if either the switch expression or the case expressions are unhashable (in which case hash() raises an exception), school Ib requires catching the hash() failure and falling back to an if/elif chain. School II simply lets the exception happen. The problem with catching an exception in hash() as required by school Ib, is that this may hide a genuine bug. A possible way out is to only use a dispatch dict if all case expressions are ints, strings or other built-ins with known good hash behavior, and to only attempt to hash the switch expression if it is also one of those types. Type objects should probably also be supported here. This is the (only) problem that school III addresses. - When optimizing using a dispatch dict, if the hash() function of any expression involved returns an incorrect value, under school Ib, optimized code will not behave the same as unoptimized code. This is a well-known problem with optimization-related bugs, and waste lots of developer time. Under school II, in this situation incorrect results are produced at least consistently, which should make debugging a bit easier. The way out proposed for the previous bullet would also help here. - School Ib doesn't have a good optimization strategy if the case expressions are named constants. The compiler cannot know their values for sure, and it cannot know whether they are truly constant. As a way out, it has been proposed to re-evaluate the expression corresponding to the case once the dict has identified which case should be taken, to verify that the value of the expression didn't change. But strictly speaking, all the case expressions occurring before that case would also have to be checked, in order to preserve the true if/elif chain semantics, thereby completely killing the optimization. Another proposed solution is to have callbacks notifying the dispatch dict of changes in the value of variables or attributes involved in the case expressions. But this is not likely implementable in the general case, and would require many namespaces to bear the burden of supporting such callbacks, which currently don't exist at all. - Finally, there's a difference of opinion regarding the treatment of duplicate cases (i.e. two or more cases with match expressions that evaluates to the same value). School I wants to treat this the same is an if/elif chain would treat it (i.e. the first match wins and the code for the second match is silently unreachable); school II wants this to be an error at the time the dispatch dict is frozen (so dead code doesn't go undiagnosed). School I sees trouble in school II's approach of pre-freezing a dispatch dict because it places a new and unusual burden on programmers to understand exactly what kinds of case values are allowed to be frozen and when the case values will be frozen, or they might be surprised by the switch statement's behavior. School II doesn't believe that school Ia's unoptimized switch is worth the effort, and it sees trouble in school Ib's proposal for optimization, which can cause optimized and unoptimized code to behave differently. In addition, school II sees little value in allowing cases involving unhashable values; after all if the user expects such values, they can just as easily write an if/elif chain. School II also doesn't believe that it's right to allow dead code due to overlapping cases to occur unflagged, when the dict-based dispatch implementation makes it so easy to trap this. However, there are some use cases for overlapping/duplicate cases. Suppose you're switching on some OS-specific constants (e.g. exported by the os module or some module like that). You have a case for each. But on some OS, two different constants have the same value (since on that OS they are implemented the same way -- like O_TEXT and O_BINARY on Unix). If duplicate cases are flagged as errors, your switch wouldn't work at all on that OS. It would be much better if you could arrange the cases so that one case has preference over another. There's also the (more likely) use case where you have a set of cases to be treated the same, but one member of the set must be treated differently. It would be convenient to put the exception in an earlier case and be done with it. (Yes, it seems a shame not to be able to diagnose dead code due to accidental case duplication. Maybe that's less important, and pychecker can deal with it? After all we don't diagnose duplicate method definitions either.) This suggests school IIb: like school II but redundant cases must be resolved by choosing the first match. This is trivial to implement when building the dispatch dict (skip keys already present). (An alternative would be to introduce new syntax to indicate "okay to have overlapping cases" or "ok if this case is dead code" but I find that overkill.) Personally, I'm in school II: I believe that the dict-based dispatch is the one true implementation for switch statements and that we should face the limitiations up front, so that we can reap maximal benefits. I'm leaning towards school IIb -- duplicate cases should be resolved by the ordering of the cases instead of flagged as errors. When to Freeze the Dispatch Dict For the supporters of school II (dict-based dispatch), the next big dividing issue is when to create the dict used for switching. I call this "freezing the dict". The main problem that makes this interesting is the observation that Python doesn't have named compile-time constants. What is conceptually a constant, such as re.IGNORECASE, is a variable to the compiler, and there's nothing to stop crooked code from modifying its value. Option 1 The most limiting option is to freeze the dict in the compiler. This would require that the case expressions are all literals or compile-time expressions involving only literals and operators whose semantics are known to the compiler, since with the current state of Python's dynamic semantics and single-module compilation, there is no hope for the compiler to know with sufficient certainty the values of any variables occurring in such expressions. This is widely though not universally considered too restrictive. Raymond Hettinger is the main advocate of this approach. He proposes a syntax where only a single literal of certain types is allowed as the case expression. It has the advantage of being unambiguous and easy to implement. My main complaint about this is that by disallowing "named constants" we force programmers to give up good habits. Named constants are introduced in most languages to solve the problem of "magic numbers" occurring in the source code. For example, sys.maxint is a lot more readable than 2147483647. Raymond proposes to use string literals instead of named "enums", observing that the string literal's content can be the name that the constant would otherwise have. Thus, we could write "case 'IGNORECASE':" instead of "case re.IGNORECASE:". However, if there is a spelling error in the string literal, the case will silently be ignored, and who knows when the bug is detected. If there is a spelling error in a NAME, however, the error will be caught as soon as it is evaluated. Also, sometimes the constants are externally defined (e.g. when parsing a file format like JPEG) and we can't easily choose appropriate string values. Using an explicit mapping dict sounds like a poor hack. Option 2 The oldest proposal to deal with this is to freeze the dispatch dict the first time the switch is executed. At this point we can assume that all the named "constants" (constant in the programmer's mind, though not to the compiler) used as case expressions are defined -- otherwise an if/elif chain would have little chance of success either. Assuming the switch will be executed many times, doing some extra work the first time pays back quickly by very quick dispatch times later. An objection to this option is that there is no obvious object where the dispatch dict can be stored. It can't be stored on the code object, which is supposed to be immutable; it can't be stored on the function object, since many function objects may be created for the same function (e.g. for nested functions). In practice, I'm sure that something can be found; it could be stored in a section of the code object that's not considered when comparing two code objects or when pickling or marshalling a code object; or all switches could be stored in a dict indexed by weak references to code objects. The solution should also be careful not to leak switch dicts between multiple interpreters. Another objection is that the first-use rule allows obfuscated code like this: def foo(x, y): switch x: case y: print 42 To the untrained eye (not familiar with Python) this code would be equivalent to this: def foo(x, y): if x == y: print 42 but that's not what it does (unless it is always called with the same value as the second argument). This has been addressed by suggesting that the case expressions should not be allowed to reference local variables, but this is somewhat arbitrary. A final objection is that in a multi-threaded application, the first-use rule requires intricate locking in order to guarantee the correct semantics. (The first-use rule suggests a promise that side effects of case expressions are incurred exactly once.) This may be as tricky as the import lock has proved to be, since the lock has to be held while all the case expressions are being evaluated. Option 3 A proposal that has been winning support (including mine) is to freeze a switch's dict when the innermost function containing it is defined. The switch dict is stored on the function object, just as parameter defaults are, and in fact the case expressions are evaluated at the same time and in the same scope as the parameter defaults (i.e. in the scope containing the function definition). This option has the advantage of avoiding many of the finesses needed to make option 2 work: there's no need for locking, no worry about immutable code objects or multiple interpreters. It also provides a clear explanation for why locals can't be referenced in case expressions. This option works just as well for situations where one would typically use a switch; case expressions involving imported or global named constants work exactly the same way as in option 2, as long as they are imported or defined before the function definition is encountered. A downside however is that the dispatch dict for a switch inside a nested function must be recomputed each time the nested function is defined. For certain "functional" styles of programming this may make switch unattractive in nested functions. (Unless all case expressions are compile-time constants; then the compiler is of course free to optimize away the swich freezing code and make the dispatch table part of the code object.) Another downside is that under this option, there's no clear moment when the dispatch dict is frozen for a switch that doesn't occur inside a function. There are a few pragmatic choices for how to treat a switch outside a function: - Disallow it. - Translate it into an if/elif chain. - Allow only compile-time constant expressions. - Compute the dispatch dict each time the switch is reached. - Like (b) but tests that all expressions evaluated are hashable. Of these, (a) seems too restrictive: it's uniformly worse than (c); and (d) has poor performance for little or no benefits compared to (b). It doesn't make sense to have a performance-critical inner loop at the module level, as all local variable references are slow there; hence (b) is my (weak) favorite. Perhaps I should favor (e), which attempts to prevent atypical use of a switch; examples that work interactively but not in a function are annoying. In the end I don't think this issue is all that important (except it must be resolved somehow) and am willing to leave it up to whoever ends up implementing it. When a switch occurs in a class but not in a function, we can freeze the dispatch dict at the same time the temporary function object representing the class body is created. This means the case expressions can reference module globals but not class variables. Alternatively, if we choose (b) above, we could choose this implementation inside a class definition as well. Option 4 There are a number of proposals to add a construct to the language that makes the concept of a value pre-computed at function definition time generally available, without tying it either to parameter default values or case expressions. Some keywords proposed include 'const', 'static', 'only' or 'cached'. The associated syntax and semantics vary. These proposals are out of scope for this PEP, except to suggest that if such a proposal is accepted, there are two ways for the switch to benefit: we could require case expressions to be either compile-time constants or pre-computed values; or we could make pre-computed values the default (and only) evaluation mode for case expressions. The latter would be my preference, since I don't see a use for more dynamic case expressions that isn't addressed adequately by writing an explicit if/elif chain. Conclusion It is too early to decide. I'd like to see at least one completed proposal for pre-computed values before deciding. In the mean time, Python is fine without a switch statement, and perhaps those who claim it would be a mistake to add one are right.
http://docs.activestate.com/activepython/3.6/peps/pep-3103.html
CC-MAIN-2018-17
refinedweb
4,098
58.82
You can subscribe to this list here. Showing 2 results of 2 Hello, Following the CLISP port of cl-pdf and cl-typesetting by Klaus Weidner, there have been some heated discussion about LOOP and CLISP on the cl-typesetting-devel mailing list. There are 3 cases where CLISP does not work like all the other implementation tested. ;;; case 1 (loop for i in () maximize i) CLISP => nil Others => 0 In this case the standard says that it is unspecified so it's ok. But it is also written that variables containing numbers must be initialized to 0 (or 0.0 if it's a float) ;;; case 2 (loop for i from 0 below 2 for j from 0 below 2 finally (return (list i j))) others => (2 1) clisp => (2 2) For this: In 6.1.2.1 Iteration Control: "When iteration control clauses are used in a loop, the corresponding termination tests in the loop body are evaluated before any other loop body code is executed." For me "any other" applies to other clauses as well. The reference to do and do* is unfortunate because they have only one termination form. ;;; case 3 (loop for r in '(42) finally (return r)) all => 42 (loop for r on '(42) finally (return r)) others => nil clisp => (42) "6.1.2.1.2 The for-as-in-list subclause" says: "The variable var is bound to the successive elements of the list in form1 before each iteration." So (loop for r in '(42) finally (return r)) => 42 is ok. In "6.1.2.1.3 The for-as-on-list subclause" you have The variable var is bound to the successive tails of the list in form1. There is no "before" here so nil is the tail but it's not a list (atom nil) => t so the clause terminates. (loop for r on '(42) finally (return r)) should return nil. Don't forget that nil is an atom (atom nil) => t (loop for r on '(42 . 43) do finally (return r)) should return 43 So here clisp seems wrong. I agree that these parts of the standard are not fully or clearly specified but in any case I think it would be better if we could avoid to put a few #+clisp in the code. ;-) So would you agree to make CLISP more compliant or at least more like the other Lisp implementations ? Thanks Marc Kaz, > ,(getf `(:display (the display ,display) > :event-key (the keyword ,event-key) > :event-code (the card8 (logand #x7f (read-card8 0))) > :send-event-p (logbitp 7 (read-card8 0)) > ,@',(mapcar #'(lambda (form) > (clx-macroexpand form env)) > get-code)) > variable))) > > (this is the correct form from CLOCC/GUI/CLX) > LW expands it differently from CLISP. > Kaz, could you please look at this? did you have an opportunity to look at this? -- Sam Steingold () running w2k <> <> <> <> <> Lottery is a tax on statistics ignorants. MS is a tax on computer-idiots.
http://sourceforge.net/p/clisp/mailman/clisp-devel/?viewmonth=200404&viewday=25
CC-MAIN-2014-23
refinedweb
493
69.11
Featured Replies in this Discussion Write a GUI program that has 3 text fields. Get two values in the two text fields and display sum in the third field. ANd Use a button to do this... (assuming ure just starting out) It is just my first time.. we have a project in school.. can you suggest any simple but useful program that i can submit? thank you for helping.. i really need this.. GUI is just a shell wrapping around your real work. In other words, it is more like a presentation. If you have done any program that displays results on monitor, you can add GUI to it, such as replace how you enter inputs or display the output. i have done some GUI examples but i do not know how to arrange the buttons i made... can you help me on how to code in making specific location of the buttons i made? i really need it for my project.. thank you so much for your reply.. Try BorderLayout. I'll try to give you a step-by-step walkthrough: - Create a JFrame - public class MyFrame extends JFrame { ... } - Get the content pane - JPanel contentPane = (JPanel) this.getContentPane() - Set layout manager to BorderLayout - contentPane.setLayout(new BorderLayout()) - Add a text area in the centre of the frame - contentPane.add(new JTextArea("my text"), BorderLayout.CENTER) - Add a button to the top of the frame - contentPane.add(new JButton("my button"), BorderLayout.NORTH) - Make the frame visible so you can actually see all of this - this.setVisible(true) Some slightly more difficult things to try: - Create the JButton and JTextArea outside of the method call so that you can reference them with a variable later - private JButton myButton = new JButton("my button"); - Create a listener for the button, that changes the text in the JTextArea when it is clicked - myButton.addActionListener(this), and put code into an actionPerformed(...)method
https://www.daniweb.com/software-development/java/threads/382130/help-for-java-programs-that-includes-gui
CC-MAIN-2015-32
refinedweb
319
67.76
wxPython Hello world behaviour in Windows This seems very strange but the most simplest examples of wxPython has strange behaviour on 2 different systems running Windows but is ok on Linux. The problem, while I move the applications windows it looks like it leaves behind copies of itself, you get this effect sometimes when a window hangs or if windows slows down. It is nothing I am doing wrong, I just want to know how to do it right. I am using python23 and 2.5.2.8 so there should be no problem. from wxPython.wx import * class MyApp(wxApp): def OnInit(self): frame = wxFrame(NULL, -1, "Hello from wxPython") frame.Show(true) self.SetTopWindow(frame) return true app = MyApp(0) app.MainLoop()
http://forums.devshed.com/python-programming-11/wxpython-stronge-beyond-belief-198403.html
CC-MAIN-2018-26
refinedweb
124
72.26
os.path.abspath(__file__) returns wrong path after chdir. So I don't think abspath of module can be trivially and reliably derived from existing values. $ cat foo.py import os print(os.path.abspath(__file__)) os.chdir('work') print(os.path.abspath(__file__)) $ python foo.py /home/inada-n/foo.py /home/inada-n/work/foo.py On Sun, Sep 29, 2013 at 9:21 AM, Nick Coghlan <ncoghlan at gmail.com> wrote: > Note that any remaining occurrences of non-absolute values in __file__ are > generally considered bugs in the import system. However, we tend not to fix > them in maintenance releases, since converting relative paths to absolute > paths runs a risk of breaking user code. > > We're definitely *not* going to further pollute the module namespace with > values that can be trivially and reliably derived from existing values. > > Cheers, > Nick. > > _______________________________________________ > Python-ideas mailing list > Python-ideas at python.org > > > -- INADA Naoki <songofacandy at gmail.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
https://mail.python.org/pipermail/python-ideas/2013-September/023487.html
CC-MAIN-2016-36
refinedweb
166
50.73
... I am pleased to announce that DevExpress Universal v18.2.4, which we just published, has added full support for the new Visual Studio 2019 Preview 1 release. Although this is excellent news, do please note that we know of three issues (so far!) with VS2019 that you may run into. For more information, see here: Also please be warned that you will undoubtedly see this notification from VS2019: “Visual Studio has detected one or more extensions that are at risk of not functioning in a future VS update. Click 'Learn more' for next steps.”. This is nothing to be worried about - we know about it, and it will be fixed in the near future. Anyway, in further news, this release will also provide support for the current preview version of .NET Core 3. (Another warning: not only is .NET Core 3 in preview, but also our support for it is. Here Be Dragons!) To show off this support we have uploaded modified versions of both the “Outlook Inspired” (WPF & WinForms) and “Stock Market” (WinForms only) demos to GitHub. WinForms:: Each repository has a README with instructions on how to compile and run these demos, as well as how to create your own DevExpress-based projects that target the .NET Core 3 framework. I warn you as well that if you are using XAF and XPO, they do not at the time of writing support .NET Core 3. The team continues to research what's needed to provide such support in XPO and XAF WinForms/SPA apps. The coverage of various features in those frameworks strongly depends on the supported .NET APIs provided by .NET Core 3, and when they're introduced (for instance, at the moment, no ASP.NET WebForms, pruned WCF support, etc.). XPO already supports the .NET Standard 2.0 specification and can be used in .NET Core 2 apps, so we do not expect that our .NET Core 3 tests will take long. I, for one, am excited about these new enhancements, especially the .NET Core 3 support. Of course, along with everything else we do, we’re ready to hear about what you think of it all. Your feedback prompts and encourages us to improve our controls. So, what do you think? Will you be using .NET Core 3? What about VS2019? Let us know. Great news! You are awesome. great work, looking forward to full .NET Core 3 support for you WinForms controls P.S. your links above both point to the same page "" Both point to T701724 I am definitely interested in both .Net Core 3 and VS 2019... Hopefully you will be able to add support for Razor Components (server side Blazor) which will be part of ASP.NET Core 3.0. Yes, Blazor will be cool!!! Agreed - Blazor is going to be huge! A nice set of basic Razor Components would be fantastic. You *must* invest in Blazor. Please make DevExtreme widgets full compatible with Blazor. Blazor is definitely the next big thing in web development. We are currently focusing on Blazor apps for mobile and Intranet apps. Blazor will be released as Razor Components with .NET Core 3.0 Just my 2 ct. Regards Sven Great!!! +1 for Blazor support. We have plans to use it in our future apps I really like the DeveExpres components, but the expectation is great to know when we will have Devexpress Razor Componets / Blazor? What is the forecast for the launch? yeah, i like it too! Hey, i just tried to compile our Application on .NET Core with your Preview, got following error: error MC3074: The tag 'ReportDesigner' does not exist in XML namespace 'schemas.devexpress.com/.../userdesigner&. Line 16 Position 10. Is the reportdesigner not part of the package? Please or to post comments.
https://community.devexpress.com/blogs/ctodx/archive/2018/12/20/visual-studio-2019-and-net-core-3-support.aspx
CC-MAIN-2019-22
refinedweb
632
78.45
Source code: Lib/xml/dom/pulldom.py The xml.dom.pulldom module provides a “pull parser” which can also be asked to produce DOM-accessible fragments of the document where necessary. The basic concept involves pulling “events” from a stream of incoming XML and processing them. In contrast to SAX which also employs an event-driven processing model together with callbacks, the user of a pull parser is responsible for explicitly pulling events from the stream, looping over those events until either processing is finished or an error condition occurs. Warning The xml.dom.pulldom module is not secure against maliciously constructed data. If you need to parse untrusted or unauthenticated data see XML vulnerabilities. Example: from xml.dom import pulldom doc = pulldom.parse('sales_items.xml') for event, node in doc: if event == pulldom.START_ELEMENT and node.tagName == 'item': if int(node.getAttribute('price')) > 50: doc.expandNode(node) print(node.toxml()) event is a constant and can be one of: node is a object of type xml.dom.minidom.Document, xml.dom.minidom.Element or xml.dom.minidom.Text. Since the document is treated as a “flat” stream of events, the document “tree” is implicitly traversed and the desired elements are found regardless of their depth in the tree. In other words, one does not need to consider hierarchical issues such as recursive searching of the document nodes, although if the context of elements were important, one would either need to maintain some context-related state (i.e. remembering where one is in the document at any given point) or to make use of the DOMEventStream.expandNode() method and switch to DOM-related processing.())
http://docs.python.org/3.2/library/xml.dom.pulldom.html
CC-MAIN-2013-48
refinedweb
274
50.94
Create and execute a new child process #include <spawn.h> pid_t spawn( const char * path, int fd_count, const int fd_map[ ], const struct inheritance * inherit, char * const argv[ ], char * const envp[ ] ); If you set fdmap[X] to SPAWN_FDCLOSED instead of to a valid file descriptor, the file descriptor X is closed in the child process. For more information, see Mapping file descriptors, below. variable=value that's used to define an environment variable. If the value of envp is NULL, then the child process inherits the environment of the parent process. libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The spawn() function creates and executes a new child process, named in path. As shown below, these functions eventually call spawn(), which in turn sends a message to the process manager, procnto: The child process inherits the following attributes of the parent process or thread (as appropriate): The child process has several differences from the parent process or thread (as appropriate): The child process also has these differences from the parent process if you haven't set the SPAWN_EXEC flag: path path. Similarly, if the set-group ID mode bit is set, the effective group ID of the child process is set to the group ID of path.(). As described above, you can use the fd_count and fd_map arguments to specify which file descriptors you want the child process to inherit. int fd_map = { 1, 3, 5 };then the mapping is as follows: and file descriptors 0, 2, 4, and 6 (and higher) are().
https://www.qnx.com/developers/docs/7.1/com.qnx.doc.neutrino.lib_ref/topic/s/spawn.html
CC-MAIN-2022-40
refinedweb
262
66.78
dont think that you should make usee of include paths which directly point into a subdir of "include" like "include/GL". Instead the path to the subdir should be specified in any of the respective include statment. This will shorten commandline parameters for the compiler, reduce search time for includes somewhat and make the code easier to understand. The appended patch was created against X4.1.0 sources. I've checked that compilation is working for the whole glx subdir when this patch got applied. Regards Alex. PS: i am not subscribed to the (CC'ed) XFree86 list. i have _no_ write access to CVS of XFree86 or DRI. --- xc/lib/GL/Imakefile.orig Tue Apr 3 04:29:32 2001 +++ xc/lib/GL/Imakefile Wed Nov 28 22:36:25 2001 @@ -105,7 +105,6 @@ INCLUDES = -I$(INCLUDESRC) \ -I$(XINCLUDESRC) \ -I$(EXTINCSRC) \ - -I$(INCLUDESRC)/GL \ -I$(GLXLIBSRC)/glx \ -I$(MESASRCDIR)/src \ -I$(MESASRCDIR)/src/X86 \ --- xc/lib/GL/glxclient.h.orig Tue Apr 10 18:07:49 2001 +++ xc/lib/GL/glxclient.h Wed Nov 28 22:37:05 2001 @@ -50,8 +50,8 @@ #include <GL/glx.h> #include <string.h> #include <stdlib.h> -#include "glxint.h" -#include "glxproto.h" +#include "GL/glxint.h" +#include "GL/glxproto.h" #include "glapitable.h" #ifdef NEED_GL_FUNCS_WRAPPED #include "indirect.h" $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ WOW $89 for 10,000 targeted user $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ The Cheapest Source of Traffic on the Internet... ANYWHERE... INCREDIBLE... Targeted Traffic to your website... No Pop-up's... Yes, YOU CAN flood your site on the lead up to Christmas by buying one of our breath taking traffic packages. We truly believe that if you ain't selling now you ain't gonna be here in 3 months time. This is the time when most online spending takes place. Be part of it... Don't hang around as this offer will only last until 30th Nov 2001... ************************************ Free! Website Health Check... ************************************ Service FREE to all first time customers. We can check your website and advice on issues that will affect your website's ability to be seen by others. If your website is not right, you will not sell. There are millions of websites out there, make sure you make a difference. - The best price for your targeted traffic... *************************************** *$89 for 10,000 targeted users *************************************** Check it out! We have taken a further $10 off the cheapest and best traffic on the web. Valid until 30th November... With prices like this you an afford to promote your products and services over the potentially profitable Christmas season. Due to the wrong reasons, there are more and more people buying online as they no longer feel safe going away from home. Reach these client base today and place your first traffic order @ 10,000 targeted users for $89. Watch your site get flooded with potential customers for the tiny amount of less than a cent per user! And if they don't come you get your money back. That simple!!! We can also offer other marketing solutions on request. Trafficwow.net is creating the Internet Marketing tools of the future! We suggest you come visit us @ and get the best traffic prices on the Internet... We support credit card and PayPal transactions. We Are Fully Compliant to Senate bill1618,Title 3, section 301. <> This message is sent in compliance of the new e-mail bill: Section 301.per section 301, paragraph (a)(2)(c) of S 1618. <> Further Transmission to you by the sender of this e-mail may be stopped at no cost to you by sending a reply e-mail it this address with the word REMOVE in the subject line. marketing@... I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/dri/mailman/dri-devel/?viewmonth=200111&viewday=28
CC-MAIN-2017-04
refinedweb
650
76.93
Created on 2007-04-02 05:23 by Rhamphoryncus, last changed 2007-04-24 10:06 by zseil. This issue is now closed. warnings.warn() gets the filename using the globals' __file__ attribute. When using eval or exec this is often not the context in which the current line was compiled from. The line number is correct, but will be applied to whatever file the globals are from, leading to an unrelated line being printed below the warning. The attached patch makes it use caller.f_code.co_filename instead. This also seems to remove the need to normalize .pyc/.pyo files, as well as not needing to use sys.argv[0] for __main__ modules. It also cleans up warnings.warn() and adds three unit tests. The test_stackoverflow_message test I added causes warnings.py to use sys.__warningregistry__ rather than test_warnings.__warningregistry__. This circumvents this self-declared hack of deleting test_warnings.__warningregistry__ to allow regrtest -R to run. This version of the patch uses a more general solution to allowing regrtest -R to run. Probably doesn't qualify as a hack anymore... File Added: python2.6-warningfilename2.diff sys._getframe(sys.maxint) produces an OverflowError on 64bit GCC. Patch changed to use sys._getframe(10**9) instead. File Added: python2.6-warningfilename3.diff I've rewritten the unit tests I added to follow the style of walter.doerwald's changes to test_warnings.py (commited to SVN during the same period in which I posted my patch). The guard_warnings_registry() context manager is now in test_support. File Added: python2.6-warningfilename4.diff I'm not sure why all the code was in there to begin with it was ancient history. Hopefully Guido remembers. Guido, do you see any particular issue with this approach (ie, removing all the gyrations in warnings and using f_code.co_filename? There is a difference in behaviour. If a module modifies its __file__, the old code would pick up the modified version. This new code will find the real file. I'm not sure if that's a bug or a feature though. :-) Similarly with __name__. Adam, I like adding the warning_guard using a with statement. Although I don't think comment should be completely removed. Could you add a comment about why the guard is necessary (ie, the repeated calls/-R part)? Have you tested with -R and with -u all? ie: ./python -E -tt ./Lib/test/regrtest.py -R 4:3: and ./python -E -tt ./Lib/test/regrtest.py -u all (two separate runs). In the last test case you are replacing (the old spam7), why does "sys" become "no context"? The main reason for using __file__ instead of the filename baked into the code object is that sometimes .pyc files are moved after being compiled; the code object is not updated, but __file__ has the correct path as it reflects how the module was found during import. Also, code objects may have relative pathnames. Neal: Comment added. regrtest -R and -u all report no problems specific to this patch. A clean build of trunk does show problems without it, with test_structmembers, test_tcl, test_telnetlib, test_unicode_file, and test_uuid. At this point I've seen so many weird failures that I think I've started repressing them. ;) When no stack frame is available warn() falls back to sys as the module. Since I'm trying to avoid claiming a warning came from a different file, I made it appear to come from <no context> instead. Looking at it now though I wonder if I should go a step further, not having a __warningregistry__ or module at all for such warnings. Guido: I see what you mean about stale co_filename attributes. I'm not sure what the best fix is now. I'm not exactly enthused at having to learn how the import mechanisms work. ;) File Added: python2.6-warningfilename5.diff I "forgot", test_bsddb3 has problems too, but again they're not related to my patch. Unfortunately, test_bsddb3 has lots of problems. :-( They vary by platform and version of bsddb used. Given the current code works, I'm not sure it's worth spending time cleaning this up. Especially since I plan to move this code to C. It might be worthwhile to separate out the warning guard. I can't remember if the __warningregistry__ was mucked with in multiple places. If it's only used in the one place, I'm not sure that's worth it either. Learning import is a test of manhood. :-) test_struct is the only thing that touches __warningregistry__ besides test_warning and (in my other patch) test_py3kwarnings. Looks like it would benefit from it, and could use some decorators too.. def check_float_coerce(format, number): ... check_float_coerce = with_warning_restore(deprecated_err) *cough* Paraphrases of religious quotes relating to the creation of decorators aside, I think that'd be better written as a contextmanager. By the time I'm done I'm going to end up with two dozen patch sets covering 80% of the python codebase x_x. Bug #1180193 has a patch with a fix for stale co_filename attributes:
http://bugs.python.org/issue1692664
CC-MAIN-2014-41
refinedweb
838
69.28
Both Visual Studio Online and Team Foundation Server 2015 make it easy to achieve the Continuous Integration Automation. You can see the quick video which shows Continuous Integration workflow and a DevOps walkthrough using Visual Studio 2015 For the purpose of this blog I am going to walk you through and example of using Visual Studio Online ‘VSO’ with an existing Git repository and then look at some best practices for setting testing and deployments. Preliminary requirements Setup Visual Studio online via DreamSpark Visual Studio Online is the fastest and easiest way yet to plan, build, and ship software across a variety of platforms. Get up and running in minutes on our cloud infrastructure without having to install or configure a single server. Using Visual Studio Online and Git - Create the Team Project and Initialize the Remote Git Repo - Open the Project in Visual Studio, Clone the Git Repo and Create the Solution - Create the Build Definition - Enable Continuous Integration, Trigger a Build, and Deploy the Build Artifacts - Deploying the build artefacts to our web application host server Getting Started 1. Create the Team Project and Initialize the Remote Git Repo Create a new team project by Logging onto VSO, going to the home page, and click on the New.. link. Enter a project name and description. Choose a process template. Select Git version control, and click on the Create Project button. The project is created. Click on the Navigate to project button. The team project home page is displayed. We now need to initialize the Git repo. Navigate to the CODE page, and click on Create a ReadMe file. The repo is initialized and a Master branch created. For simplicity I will be setting up the continuous integration on this branch. Below shows the initialized master branch, complete with README.md file. 2. Open the Project in Visual Studio, Clone the Git Repo and Create the Solution Next we want to open the project in Visual Studio and clone the repo to create a local copy. Navigate to the team project’s Home page, and click on the Open in Visual Studio link. Visual Studio opens with a connection established to the team project. On the Team Explorer window, enter a path for the local repo, and click on the Clone button. Now click on the New… link to create a new solution. Select the ASP.NET Web Application project template, enter a project name, and click on OK. Choose the ASP.NET 5 Preview Web Application template and click on OK. Now add a unit test project by right-clicking on the solution in the solution explorer, selecting the Add New Project option, and choosing the Unit Test Project template. I have named my test project CITest.Tests. Your solution should now look like this. The UnitTest1 test class is generated for us, with a single test method, TestMethod1. TestMethod1 will pass as it has no implementation. Add a second test method, TestMethod2, with an Assert.Fail statement. This 2nd method will fail and so will indicate that the CI test runner has been successful in finding and running the tests. 1: using System; 2: using Microsoft.VisualStudio.TestTools.UnitTesting; 3: 4: namespace CITest.Tests 5: { 6: [TestClass] 7: public class UnitTest1 8: { 9: [TestMethod] 10: public void TestMethod1() 11: { 12: } 13: 14: [TestMethod] 15: public void TestMethod2() 16: { 17: Assert.Fail("failing a test"); 18: } 19: } 20: } Save the change, and build the solution. We now want to commit the solution to the local repo and push from the local to the remote. To do this, select the Changes page in the Team Explorer window, add a commit comment, and select the Commit and Push option. The master branch of the remote Git repo now contains a solution, comprising of a web application and a test project. 3. Create a Build Definition We now want to create a VSO build definition. Navigate to the team project’s BUILD page, and click on the + button to create a new build definition. Select the Visual Studio template and click on OK. The Visual Studio build definition template has 4 build steps – - Visual Studio Build – builds the solution - Visual Studio Test – runs tests - Index Sources & Publish Symbols – indexes the source code and publishes symbols to .pdb files - Publish Build Artifacts – publishes build artifacts (dlls, pdbs, and xml documentation files) For now accept the defaults by clicking on the Save link and choosing a name for the build definition. We now want to test the build definition. Click on the Queue build… link. Click on the OK button to accept the build defaults. We are taken the build explorer. The build is queued and once running we will see the build output. The build has failed on the Build Solution step, with the following error message – The Dnx Runtime package needs to be installed. The reason for the error is that we’re using the hosted build pool and so we need to install the DNX runtime that our solution targets prior to building the solution. Return to the Visual Studio and add a new file to the solution items folder. Name the file Prebuild.ps1, and copy the following powershell script into the file. 1: DownloadString(''))} 2: 3: # load up the global.json so we can find the DNX version 4: $globalJson = Get-Content -Path $PSScriptRoot\global.json -Raw -ErrorAction Ignore | ConvertFrom-Json -ErrorAction Ignore 5: 6: if($globalJson) 7: { 8: $dnxVersion = $globalJson.sdk.version 9: } 10: else 11: { 12: Write-Warning "Unable to locate global.json to determine using 'latest'" 13: $dnxVersion = "latest" 14: } 15: 16: # install DNX 17: # only installs the default (x86, clr) runtime of the framework. 18: # If you need additional architectures or runtimes you should add additional calls 19: # ex: & $env:USERPROFILE\.dnx\bin\dnvm install $dnxVersion -r coreclr 20: & $env:USERPROFILE\.dnx\bin\dnvm install $dnxVersion -Persistent 21: 22: # run DNU restore on all project.json files in the src folder including 2>1 to redirect stderr to stdout for badly behaved tools 23: Get-ChildItem -Path $PSScriptRoot\src -Filter project.json -Recurse | ForEach-Object { & dnu restore $_.FullName 2>1 } The script bootstraps DNVM, determines the target DNX version from the solution’s global.json file, installs DNX, and then restores the project dependencies included in all the solution’s project.json files. With the Prebuild.ps1 file added, your solution should now look like this. Commit the changes to the local repo and push them to the remote. We now need to add a Powershell build step to our build definition. Return to VSO, and edit the build definition. Click on the + add build step… link and add a new PowerShell build step. Drag the Powershell script task to the top of the build steps list, so that it it is the 1st step to run. Click on the Script filename ellipses and select the Prebuild.ps1 file. Click on Save and then Queue build… to test the build definition. This time all build steps succeed. However, if we look more closely at the output from the Test step, we see a warning – No results found to publish. But we added 2 test methods to the solution? The clue is in the second “Executing” statement which shows that the vstest.console was executed for 2 test files – CITest.Tests.dll, which is good. And Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll, which is bad. We need to modify the Test build step to exclude the UnitTestFramework.dll file. Edit the build definition, select the Test step, and change the Test Assembly path from **\$(BuildConfiguration)\*test*.dll;-:**\obj\** to **\$(BuildConfiguration)\*tests.dll;-:**\obj\**. Click on Save and then click on Queue Build… The build now fails. But this is what we want to happen. TestMethod2 contains an Assert.Fail() statement, and so we are forcing the Test build step to fail as shown below. We have successfully failed (not something I get to say often), hence proving that the tests are being correctly run. 4. Enable Continuous Integration, Trigger a Build, and Deploy the Build Artifacts We have a working Pre-build step that downloads the target DNX framework, a working Build step that builds the solution, and a working Test step that fails due to TestMethod2. We will now set-up the continuous integration, and then make a change to the UnitTest1 class in order to remove (fix) TestMethod2. We will then commit and push the change, which should hopefully trigger a successful build thanks to the continuous integration. Edit the build definition, and navigate to the Triggers tab. Check the Continuous Integration (CI) check-box and click on Save. Edit the UnitTest1.cs file in Visual Studio, and delete the TestMethod2 method. Commit and push the changes. Return to VSO and navigate to the BUILD page. In the Queued list we should now see a build waiting to be processed, which it will be in due course. All build steps should now succeed. The target DNX version is installed onto the build host. The solution is built. The tests are run. The symbol files are generated. And finally, the build artefacts are published. So we have a new build that has been tested. 5. Deploying the build artefacts to our web application host server If we are hosting our web application on Windows Azure, we can add an Azure Web Application Deployment step to our build definition and in so doing have the build artifacts automatically deployed to Azure when our application is successfully built and tested. Alternatively, we can manually download the build artefacts and then copy to our chosen hosting server. To do this, navigate to the completed build queue, and open the build. then click on the Artifacts tab and click on the Download link. A .zip file will be downloaded containing the artifacts. Test, Test, Test So we now have the site built using continuous deployment, now lets look at how we can do testing. Perquisites Prerequisites for executing build definitions is to have your build agent ready, here are steps to setup your build agent, you can find more details in this blog . Creating a build definition and select “Visual Studio” template. Selecting Visual Studio template will automatically add a Build Task and Unit Test Task. Please fill in the parameters needed for each of the tasks. Build task is straight-forward, it just takes the solution that has to be built and the configuration parameters. As I had mentioned earlier this solution contains product code, unit test code and also automated selenium tests that we want to run as part of build validation. Final step is to add the required parameters needed for the Unit Test task – Test Assembly and Test Filter criteria. One key thing you notice below in this task is we take the unit test dll and enumerate all tests in it and run the tests automatically. You can include a test filter criteria and filter on traits defined in test cases if you want to execute specific tests. Another important point, unit tests in Visual Studio Test Task always run on build machine and do not require any deployment/additional setup. See figure 3 below. Using Visual Studio Online for Test Management - Setting up machines for application deployment and running tests - Configuring for application deployment and testing - Deploying the Web Site using Powershell - Copy Test Code to the Test Machines - Deploy Visual Studio Test Agent - Run Tests on the remote Machines - Queue the build, execute tests and test run analysis - Configuring for Continuous Integration Getting Started 1. Setting up machines for application deployment and running tests Once the Build is done and the Unit tests have passed, the next step is to deploy the application (website) and run functional tests. Prerequisites for this are: - Already provisioned and configured Windows Server 2012 R2 with IIS to deploy the web site or a Microsoft Azure Website. - A set of machines with all browsers (Chrome, Firefox and IE) installed to automatically run Selenium tests on these machines. Please make sure Powershell Remote is enabled on all the machines. Once the machines are ready, go to the Test Hub->Machine page to create the required machine configuration as shown in the below screen shots. Enter machine group name and enter the FQDN/IP Address of the IIS/Web Server machine that is setup earlier. You might also need to enter the admin username and password for the machine for all further configurations. The application under test environment, should always be a test environment not a production environment as we are doing integration tests targeting the build. For Test Environment, give a name to the test environment and add all IP address of the lab machines that were setup already with the browsers. As I had mentioned earlier test automation system is capable of executing all tests in a distributed way and can scale up to any number of machines (we will have another blog). At the end of this step, in machines hub you should have one application under test environment and a test environment, in the example we are using them as “Application Under Test” and “Test Machines” respectively as the names of the machine groups. 2. Configuring for application deployment and testing In this section, we will show you how to add deployment task for deploying the application to the web server and remote test execution tasks to execute integration tests on remote machines. We will use the same build definition and enhance it to add the following steps for continuous integration: 3. Deploying the Web Site using Powershell We first need to copy the all the website files to the destination. Click on “Add build step” and add “Windows Machine File Copy” task and fill in the required details for copying the files. Then add “Run Powershell on Target Machine Tasks” to the definition for deploying/configuring the Application environment. Chose “Application under Test” as the machine group that we had setup earlier for deploying the web application to the web server. Choose powershell script for deploying the website (if you do not have deployment web project, create it). Please make sure to include this script in the solution/project. This task executes powershell script on the remote machine for setting up the web site and any additional steps needed for the website. 4. Copy Test Code to the Test Machines As Selenium UI tests which are we are going to use as integration tests are also built as part of the build task, add “Copy Files” task to the definition to copy all the test files to the test machine group “Test Machines” which was configured earlier. You can chose any test destination directory in the below example it is “C:\Tests” 5. Deploy Visual Studio Test Agent To execute on remote machines, you first deploy and configure the test agent. To do that, all you need is a task where you supply the remote machine information. Setting up lab machines is as easy just adding a single task to the work flow. This task will deploy “Test Agent” to all the machines and configures them automatically for the automation run. If the agent is already available and configured on the machines, this task will be a no-op. Unlike older versions of Visual Studio – now you don’t need to go manually copy and setup the test controller and test agents on all the lab machines. This is a significant improvement as all the tasks can be done remotely and easily. 6. Run Tests on the remote Machines Now that the entire lab setup is complete, last task is to add “Run Visual Studio Tests using Test Agent” task to actually run the tests. In this task specify the Test Assembly information and a test filter criteria to execute the tests. As part of build verification we want to run only P0 Selenium Tests, so we will filter the assemblies using SeleniumTests*.dll as the test assembly. You can include a runsettings file with your tests and any test run parameters as input. In the example below, we are passing the deployment location of the app to the tests using the $(addurl) variable. Once all tasks are added and configured save the build definition. 7. Queue the build, execute tests and test run analysis Now that the entire set of tasks are configured, you can verify the run by queuing the build definition. Before queuing the build make sure that the build machine and test machine pool is setup. Once the build definition execution is complete, you will get a great build summary with all the required information needed for you to take the next steps. As per the scenario, we have completed the build, executed unit tests and also ran Selenium Integration Tests on remote machines targeting different browsers. Build Summary has the following information: - A summary of steps that have passed and color coded on the left and details in the right side panel. - You can click on each task to see detailed logs. - From the tests results, you can see that all unit tests passed and there were failures in the integration tests. Next step is to drill down and understand the failures. You can simply click on the Test Results link in the build summary to navigate to the test run results. Based on the feedback, we have created a great Test Run summary page with a set of default charts and also mechanism to drill down into the results. Default summary page has the following built-in charts readily available for you - Overall Tests Pass/Fail, Tests by priority, configuration, failure type etc. If you want to drill deeper on the tests, you can click on the “Test Results” tab, you will get to see each and every test – test title, configuration, owner, machine where it was executed etc. For each failed test, you can click on the “Update Analysis” to analyze the test. In the below summary you notice that IE Selenium tests are failing. You can directly click on “Create Bug” link at the top to file bugs, it will automatically take all test related information metadata from the results and include it in the bug – it is so convenient. 8. Configuring for Continuous Integration Now that the tests are all investigated and bugs filed, you can configure the above build definition for Continuous Integration to run build, unit tests and key integration tests automatically for every subsequent check-in. You can navigate to the build definition and click on Triggers. You have two ways to configure: - Select “Continuous Integration” to execute the workflow for all batched check-ins - Select a specific schedule for validating the quality after all changes are done. You can also choose both as shown below, the daily scheduled drop can be used as daily build for other subsequent validations and consuming it for partner requirements. Using the above definition, you are now setup for “Continuous Integration” of the product to automatically build, run unit tests and also key integration tests for validating the builds. All the tasks shown above can be used in Release Management workflow as well to enable Continuous Delivery scenarios. Summary To summarize what all we have achieved in this walk through: - Created a simple build definition with build, unit testing and automated tests - Simplified experience in configuring machines and test agents - Improvements in build summary and test runs analysis - Configuring the build definition as a “continuous integration” for all check-ins You write "Using Visual Studio Online and GitHub". What does GitHub have to do with any of this? Don't you mean: "Using Visual Studio Online and Git"? How do we create machine group with IP Address and WinRM Port 5985. Could not connect using winrm without trustedhosts in visual studio online "Download string not recognized"…. how do I fix that? Prefix “(New-Object System.Net.WebClient)” to the DownloadString() method. It should look like this. (New-Object System.Net.WebClient).DownloadString(“”) If the powershell script still does not work, use the script found at this website. @James Hancock, I had the same issue. I just followed the link between the brackets and copied the powershell script that I found there. If you execute that script before the Prebuild.ps1 script, you're good to go. (don't forget to delete the DownloadString() line) The code for the Prebuild.ps1 script does not currently compile on Visual Studio Team Services. I used the following script: # bootstrap DNVM into this session. # load up the global.json so we can find the DNX version $globalJson = Get-Content -Path $PSScriptRoot\global.json -Raw -ErrorAction Ignore | ConvertFrom-Json -ErrorAction Ignore if($globalJson) { $dnxVersion = $globalJson.sdk.version } else { Write-Warning “Unable to locate global.json to determine using ‘latest'” $dnxVersion = “latest” } # install DNX # only installs the default (x86, clr) runtime of the framework. # If you need additional architectures or runtimes you should add additional calls # ex: & $PSScriptRoot\src -Filter project.json -Recurse | ForEach-Object { & dnu restore $_.FullName 2>1 } As found here:
https://blogs.msdn.microsoft.com/uk_faculty_connection/2015/09/07/continuous-integration-and-testing-using-visual-studio-online/
CC-MAIN-2017-51
refinedweb
3,523
63.29
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project. Hi Jonathan, On 1 Feb 2015, at 15:10, Jonathan Wakely wrote: > On 01/02/15 15:08 +0000, Jonathan Wakely wrote: >> I failed to CC gcc-patches on this patch ... >> >> On 29/01/15 13:02 +0000, Jonathan Wakely wrote: >>> Jakub pointed out that we have some attributes that don't use the >>> reserved namespace, e.g. __attribute__ ((always_inline)). >>> >>> This is a 4.9/5 regression and the fix was pre-approved by Jakub so >>> I've committed it to trunk. >>> >>> When we're back in stage1 I'll fix the TODO comments in the new tests >>> (see PR64857) and will also rename testsuite/17_intro/headers/c++200x >>> to .../c++2011. >>> > > The new test fails on darwin (PR64883) and --enable-threads=single > targets (PR64885). > > This is a workaround for 64883. Tested x86_64-linux, committed to > trunk. > <patch.txt> the following additional tweaks provide further work-arounds. ... checked on darwin12 and darwin14. I have a fixincludes patch for next stage #1. Iain Attachment: darwin-additional-attr-fixes.txt Description: Text document
https://gcc.gnu.org/legacy-ml/gcc-patches/2015-02/msg00122.html
CC-MAIN-2021-04
refinedweb
189
69.48
. Basic plotting Matplotlib can plot just about anything you can imagine! For this blog I’ll be using only a very simple plot to illustrate how it can be done in Excel. There are examples of hundreds of other types of plots on the matplotlib website that can all be used in exactly the same way as this example in Excel. To start off we’ll write a simple function that takes two columns of data (our x and y values), calculates the exponentially weighted moving average (EWMA) of the y values, and then plot them together as a line plot. Note that our function could take a pandas dataframe or series quite easily, but just to keep things as simple as possible I’ll stick to plain numpy arrays. To see how to use pandas datatypes with PyXLL see the pandas examples on github:. from pyxll import xl_func from pandas.stats.moments import ewma import matplotlib.pyplot as plt @xl_func("numpy_column<float> xs, " "numpy_column<float> ys, " "int span: string") def mpl_plot_ewma(xs, ys, span): # calculate the moving average ewma_ys = ewma(ys, span=span) # plot the data plt.plot(xs, ys, alpha=0.4, label="Raw") plt.plot(xs, ewma_ys, label="EWMA") plt.legend() # show the plot plt.show() return "Done!" To add this code to Excel save it to a Python file and add it to the pyxll.cfg file (see for details). Calling this function from Excel brings up a matplotlib window with the expected plot. However, Excel won’t respond to any user input until after the window is closed as the plt.show() call blocks until the window is closed. The unsmoothed data is generated with the Excel formula =SIN(B9)+SIN(B9*10)/3+SIN(B9*100)/7. This could just as easily be data retrieved from a database or the output from another calculation. Non-blocking plotting Matplotlib has several backends which enables it to be used with different UI toolkits. Qt is a popular UI toolkit with Python bindings, one of which is PySide. Matplotlib supports this as a backend, and we can use it to show plots in Excel without using the blocking call plt.show(). This means we can show the plot and continue to use Excel while the plot window is open. In order to make a Qt application work inside Excel it needs to be polled periodically from the main windows loop. This means it will respond to user inputs without blocking the Excel process, or stopping Excel from receiving user input. Using the windows ‘timer’ module is an easy way to do this. Using the timer module has the advantage that it keeps all the UI code in the same thread as Excel’s main window loop, which keeps things simple. from PySide import QtCore, QtGui import timer def get_qt_app(): """ returns the global QtGui.QApplication instance and starts the event loop if necessary. """ app = QtCore.QCoreApplication.instance() if app is None: # create a new application app = QtGui.QApplication([]) # use timer to process events periodically processing_events = {} def qt_timer_callback(timer_id, time): if timer_id in processing_events: return processing_events[timer_id] = True try: app = QtCore.QCoreApplication.instance() if app is not None: app.processEvents(QtCore.QEventLoop.AllEvents, 300) finally: del processing_events[timer_id] timer.set_timer(100, qt_timer_callback) return app This can be used to embed any Qt windows and dialogs in Excel, not just matplotlib windows. Now all that’s left is to update the plotting function to plot to a Qt window instead of using pyplot.show(). Also we can give each plot a name so that when the data in Excel changes and our plotting function gets called again it re-plots to the same window instead of creating a new one each time. from matplotlib.figure import Figure from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas from matplotlib.backends.backend_qt4agg import NavigationToolbar2QT as NavigationToolbar # dict to keep track of any plot windows _plot_windows = {} @xl_func("string figname, " "numpy_column<float> xs, " "numpy_column<float> ys, " "int span: string") def mpl_plot_ewma(figname, xs, ys, span): """ Show a matplotlib line plot of xs vs ys and ewma(ys, span) in an interactive window. :param figname: name to use for this plot's window :param xs: list of x values as a column :param ys: list of y values as a column :param span: ewma span """ # create the figure and axes for the plot fig = Figure(figsize=(600, 600), dpi=72, facecolor=(1, 1, 1), edgecolor=(0, 0, 0)) ax = fig.add_subplot(111) # calculate the moving average ewma_ys = ewma(ys, span=span) # plot the data ax.plot(xs, ys, alpha=0.4, label="Raw") ax.plot(xs, ewma_ys, label="EWMA") ax.legend() # Get the Qt app. # Note: no need to 'exec' this as it will be polled in the main windows loop. app = get_qt_app() # generate the canvas to display the plot canvas = FigureCanvas(fig) # Get or create the Qt windows to show the chart in. if figname in _plot_windows: # get the existing window from the global dict and # clear any previous widgets window = _plot_windows[figname] layout = window.layout() if layout: for i in reversed(range(layout.count())): layout.itemAt(i).widget().setParent(None) else: # create a new window for this plot and store it for next time window = QtGui.QWidget() window.resize(800, 600) window.setWindowTitle(figname) _plot_windows[figname] = window # create the navigation toolbar toolbar = NavigationToolbar(canvas, window) # add the canvas and toolbar to the window layout = window.layout() or QtGui.QVBoxLayout() layout.addWidget(canvas) layout.addWidget(toolbar) window.setLayout(layout) # showing the window won't block window.show() return "[Plotted '%s']" % figname When the function’s called it brings up the plot in a new window and control returns immediately to Excel. The plot window can be interacted with and Excel still responds to user input in the usual way. When the data in the spreadsheet changes the plot function is called again and it redraws the plot in the same window. Next steps The code above could be refined and the code for creating, fetching and clearing the windows could be refactored into some reusable utility code. It was presented in a single function for clarity. Plotting to a separate window from Excel is sometimes useful, especially as the interactive controls can be used and may be incorporated into other Qt dialogs. However, sometimes it’s nicer to be able to present a graph in Excel as a control in the Excel grid in the same way the native Excel charts work. This is possible using PyXLL and matplotlib and will be the subject of the next blog! All the code from this blog is available on github. Additional Resources: - Download a FREE 30 day trial of PyXLL - See PyXLL FAQs - Download the Enthought Canopy Python distribution, which includes the Matplotlib, Pandas, NumPy, IPython, PySide, and Qt libraries referenced in the blog pos - Visit the Matplotlib website for plot examples It’s look like matlab. And very useful for who will make graph on excel.
http://blog.enthought.com/python/plotting-in-excel-with-pyxll-and-matplotlib/
CC-MAIN-2019-04
refinedweb
1,160
55.64
Opened 10 years ago Closed 4 years ago #11027 closed Bug (duplicate) Storage backends should know about the max_length attribute of the FileFields Description Currently the Storage Backends may return a filename which is longer than the FileField actually supports. This of course breaks it. If the Storage knew about the maxmimum length it is allowed to returned, it could make sure, that the filenames aren't longer. Change History (20) comment:1 Changed 10 years ago by comment:2 Changed 10 years ago by comment:3 Changed 9 years ago by comment:4 Changed 9 years ago by comment:5 Changed 9 years ago by When fixing this ticket, we should keep in mind, that the storage location (maybe a larger directory structure) needs to be considered too. If my file field has a max_length of 45, and my upload_to/storage location is "uploads/users/images/gallery/photos/", the filename has a max length of 8 (the directory name has 37 chars). comment:6 Changed 9 years ago by After a short discussion with jezdez, this should be considered as DDN. comment:7 Changed 8 years ago by comment:8 Changed 8 years ago by comment:9 Changed 8 years ago by comment:10 Changed 8 years ago by comment:11 Changed 8 years ago by Marking this as a bug, since #15247 makes it clear that's what it is. Discussed briefly with jezdez on IRC and concluded that the fix proposed here probably is the right one, adding a max_length argument to the get_valid_name method of storage classes. Unfortunately, this is backwards incompatible, and it won't just break code that was relying on buggy behavior, it will break all custom storage backends. So it requires a deprecation path, and during the deprecation period that will require introspecting the get_valid_name method to see whether it accepts the valid_name argument (or trying with, catching the error, and then trying without). Either one of those is kind of ugly, but the alternative is to introduce a new method, like "get_valid_name_with_max_length". That makes the deprecation-period code a bit nicer, but then we're stuck with the longer method name for good. Probably better to take the short-term hit. comment:12 Changed 8 years ago by comment:13 Changed 8 years ago by comment:14 Changed 8 years ago by Hey, it is not very difficult problem. I wrote a solution and use it in my projects. It is not perfect solution, but it is much better then unhandled exception. I hope you take it in django. def filefield_maxlength_validator(value): """" Check if absolute file path can fit in database table and filename length can fit in filesystem. """ FILESYSTEM_FILENAME_MAXLENGTH = 255 # ext4, for example RESERVE = 50 # We need reserve for generating thumbnails and for suffix if filename already exists, example - filename_85x85_1.jpg # For the initial file saving value.path contains not the same path, which will be used to save file filename = value.name filepath = os.path.join( settings.MEDIA_ROOT, value.field.generate_filename(value.instance, filename) ) bytes_filename = len(filename.encode('utf-8')) # filename length in bytes bytes_filepath = len(filepath.encode('utf-8')) # path length in bytes # File path length should fit in table cell and filename lenth should fit in in filesystem if bytes_filepath > value.field.max_length or bytes_filename > FILESYSTEM_FILENAME_MAXLENGTH - RESERVE: if os.path.isfile(value.path): os.remove(value.path) raise exceptions.ValidationError(_(u"File name too long.")) return value FileField.default_validators = FileField.default_validators[:] + [filefield_maxlength_validator] comment:15 Changed 8 years ago by Milestone 1.3 deleted comment:16 Changed 7 years ago by comment:17 Changed 7 years ago by Overriding get_valid_name is probably not a workable solution. Currently, it claims only to return a filename "that's suitable for use in the target storage system." This just means a short cleaning of whitespace characters, hyphens, etc. So making it actually return a valid filename (in terms of length) would be a change in mission. Additionally, we actually need to validate the length of the entire storage path (which get_valid_name doesn't have access to). And finally, even if we got a valid name, get_available_name can increase the length of that name during the saving process, leaving us exactly where we started. get_available_name needs to take a max_length argument, or it needs to be able to guarantee that it won't increase the length of the filename. Since the latter isn't possible as far as I can see, I expect to do the former. If get_available_name *can't* make a name within the given max length, it will raise a ValueError. FileField validation would then need to call get_available_name *after* generating a filename and instead of (or after) a simple check of the length of the filename. Any ValueError raised can be re-raised as a ValidationError. Since get_available_name is part of the public API, this would mean either doing introspection, wrapping every call to get_available_name in try/except, or adding a new method called (say) get_valid_storage_path, which would handle max_length and the available_name checks, then deprecate get_available_name. Personally, I would lean toward the latter option, since I find the whole name/filename/path/storage path naming mess confusing. comment:18 Changed 7 years ago by I kind of like the idea that get_valid_name() just converts the name to be something valid for the storage backend. Then, get_available_name has the final control of how to truncate the name, and to make sure the name is within the name length limit. The contract is altered from "gimme an available file name as close to the requested name" to "gimme an available file name as close to the requested name, max length N". It seems the final name is determined by .save() - if the file name clashes due to race conditions, then .save() can alter the file name. This brings another point to the discussion: if final file name is determined by .save(), then shouldn't .save() also know the file name length restriction? As for implementation, I like the try-except approach most. comment:19 Changed 7 years ago by Yes, .save() (and ._save()) would also need to know about the max_length restriction. 1.2 is feature-frozen, moving this feature request off the milestone.
https://code.djangoproject.com/ticket/11027
CC-MAIN-2019-22
refinedweb
1,036
62.58
Since bug 183794 has been implemented there is an asymetry between the way people register new actions (annotation) and the way they reference them (layer). For the sake of usability, it is desirable to provide some annotation based way of creating references. Proposal is part of Let's keep things simple and add org.openide.awt.ActionReference annotation. Need to see a proposed patch incl. nontrivial uses to really evaluate, especially with subtle parts like name() and id(). Some of the issues brought up in are handled by a separate @ActionID & @ActionRegistration and use of package annotations. [JG01] While a generic @ActionReference is useful for extensibility to unforeseen domains such as the Projects tab context menu, the current proposal forces people to remember, and type, magic strings like "Toolbars" (capitalized differently from "JToolBar" by the way!). If you are opposed to having @ActionMenuReference etc. then at least introduce some constants into ActionReference, e.g.: /** Main menu bar. Path will be a menu code name, possibly with submenus. */ String MENU = "Menu/"; /** Main tool bar set. Path will be a toolbar code name. */ String TOOLBARS = "Toolbars/"; so you can write e.g. @ActionReference(path=ActionReference.TOOLBARS + "Edit", position=123) I would suggest a special annotation for Shortcuts, e.g. import static java.awt.event.KeyEvent.*; import static ActionShortcutReference.Stroke.*; import static ActionShortcutReference.Modifiers.*; @ActionShortcutReference({@Stroke(key=VK_X, modifiers=CTRL), @Stroke(key=VK_WINDOWS)}) since the syntax is rather complex and it is easy to make typos. But if generic @ActionReference is to be used for this as well then there must be a SHORTCUTS constant whose Javadoc carefully describes the required name() syntax, {@link}ing to appropriate specifications such as KeyEvent and Utilities.stringToKeys. The processor for a generic annotation could also hardcode syntax checks for certain well-known locations. For example, require and try to parse name() if in Shortcuts; or verify that paths under Toolbars specify no subfolder; or require a position attribute for menu items. This is not as nice as a custom annotation but better than nothing. [JG02] Would be useful to have position-valued attributes separatorBefore() and separatorAfter(), which would require position(). These are very commonly needed for menu registrations (also possible for toolbar registrations), and specifying separators in a layer is cumbersome. Builder is on: First set of changes too: Re. JG01 The problem without multiple difference annotations is that I don't see easy way to incorporate them into @ActionReferences(value={}). I understand that it is necessary to provide some guidance when typing the path. Rather than the tricks mentioned in JG01, I'd like to use getCompletions and provide either hardcoded or real hints. (In reply to comment #3) > The problem without multiple difference annotations is that I don't > see easy way to incorporate them into @ActionReferences(value={}). You wouldn't need to; you would have multiple top-level annotations. You only need @Somethings if you would otherwise need to have more than one @Something. (Would be nicer if JSR 175 were amended so you could repeat a given annotation, if the annotation were meta-annotated to allow this.) > use getCompletions and provide either hardcoded or real hints Better than nothing. BTW String.isEmpty is easier to read than .equals(""). Is there no ActionReference.name() as in wiki, needed for shortcuts? This is why I wanted to see some representative uses in various modules. ElementType.PACKAGE was forgotten in @Target on @ActionReference, and the processor fails to handle this case. Delete empty @return. Next round. I am dealing with the wizards. I have problems with properly positioning the action. I've added toInteger() method but I am not sure how to implement it: Separators are also not handled. Simplest thing is to leave them in XML file. Still todo. Re. "ElementType.PACKAGE was forgotten". It does not make much sense to put just one @ActionReference on package, usually you want to put there more. So I left out the element intentionally. Do not use Proxy in createActionReference. This is too complicated. Simply define a small struct class with the desired getters for use from FreeMarker. Regarding toPosition - you would I think need to duplicate code from CreatedModifiedFiles.orderLayerEntry. Needs to find the surrounding FileObject's and pick a position between them. Ability to add just a single @ActionReference on a package would be consistent with other package annotations which permit both the single and multiple forms. Not a major issue to omit it. Version which imho misses only better getCompletions in the annotation processor. Shows sample usage, properly generates separators (in layer), positions the actions fine:!? Please notice also one additional contract between openide.awt and apisupport.project: It is not working properly yet as the IDE's code completion support needs a bit of time to settle down. Regardless of that I'd like to integrate tomorrow. We'll deal with the proper behaviour of code completion in default branch. (In reply to comment #9) > Please notice also one additional contract between openide.awt and > apisupport.project: This looks like a lot more trouble than it's worth. If this level of code completion were really necessary (and I don't think it is, since new actions are usually placed by wizard) it would be better to look up the available layer positions from the classpath (probably processor path) inside ActionProcessor. The project owner may be null, etc. Also please do not use Thread.CCL to find a module class. It is unreliable and will not work at all in OSGI mode. I still think specifying separators in the annotation is valuable. This is a very common requirement. I also think that defining constants for well-known action locations, and providing warnings or errors for definite violations of semantics (such as malformed shortcut names), is a higher priority than code completion. (In reply to comment #8) >!? Because the code is needlessly complex and hard to track. Would be simpler and clearer to define a small struct. As nobody questioned the fast status of the review, I am going to integrate now. I believe the API is minimalistic, but useful as it is right now. We can certainly work on improvements (like the separator stuff; but it is not clear how to do it best. Should we mimic layer based solution or should we rather use some group="...." attribute which seems to be standard in the other IDEs? Anyway that is a matter for separate issue). I plan to improve the live getCompletions from layers with Dušan Bálek. I believe it is going to be beneficial for the IDE support for Processor.getCompletions in general. If there are unavoidable issues with that approach, we can indeed easily turn it off. Re. "hard to track code" - but the code is tested and I prefer consistency to easier to implement dual concepts. Merged as core-main#d1d13a81525d (In reply to comment #12) > We can certainly work on improvements (like the separator stuff Yes, it can be compatibly added. My concern is about modules which migrate to the annotation before then, and newly added actions using wizard; it may be harder to retroactively deal with their separators. This matters mainly if we release the current annotation in 6.10. > Should we mimic layer based solution or should we rather use > some group="...." attribute which seems to be standard in the other IDEs? I thought about group="..." as well. But I cannot think of a way to do this at all compatibly. (In reply to comment #13) > it may be > harder to retroactively deal with their separators. This matters mainly if we > release the current annotation in 6.10. I reported it as bug 189848
https://netbeans.org/bugzilla/show_bug.cgi?id=189558
CC-MAIN-2017-26
refinedweb
1,277
58.58
I am getting an “undeclared identifier” error with the global variable addition to the Fahrenheit/Celsius exercise. I have looked at my code and it looks like it reads in the book. Any ideas? Thanks in advance. [code]#include <stdio.h> #include <stdlib.h> //Declare a global variable float lastTemperature; float fahrenheitFromCelsius(float cel) { lastTemperature = cel; float fahr = cel * 1.8 + 32.0; printf("%f Celsius is %f Fahrenheit\n", cel, fahr); return fahr; } [/code] Here is the second half of the code, as it is getting cut off in the forum tool: [code]int main(int argc, const char * argv[]) { float freezeInC = 0; float freezeInF = fahrenheitFromCelsius(freezeInC); printf("Water freezes at %f degrees Fahrenheit\n", freezeInF); printf("The last temperature converted was %f\n", lastTemperature); return EXIT_SUCCESS; }[/code]
https://forums.bignerdranch.com/t/global-variable-lasttemperature-undeclared-identifier/4996
CC-MAIN-2018-43
refinedweb
129
52.39
So, thankyou Liam, I have been tagged, whatever that really means. The task being to write 5 things about myself that others may not know. So, who to "tag" next. It has to be 5 of my friends and colleagues here at Microsoft UK who I know have blogs. (Thankfully, most of my friends are not technical apart from those I work with) Graham Tyler Martin Kearn Mark Bower Paul Holdaway Jessica Gruber I have enourmous respect for these guys! We all work for Microsoft in the UK and passionate about the products we work with. The SharePoint Product team has recently launched their own blog at This is a must read with articles initially from Kurt Delbene, Corporate VP of the Office Servers group, and Jeff Teper, General Manager of SharePoint Portal, Search and Content Management. We are promised many more entries from the senior SharePoint development team in the near future. A great add-on for SharePoint and it is free. Retention Server from 80-20 Software has just been formally released!. My previous post supplied a webpart to customise the Search Results. Thanks to some feedback, we discovered that the search results for Documents stored in a Portal Area was not working correctly, so I have updated the ZIP with the new code that seems to work. However, you may need to configure this one! The issue was found because the SPS index does not carry information, other than URLs, as to whether the result came from a WSS Site or a Portal Area. When the Document was in a Portal Area, the code would malform the URL for the Document Details page. So, the fix I have put in is to see whether the URL has "/sites/" after the Portal URL, i.e.. If it does not, then it assumes that the document is for a Portal Area. If it does, then it assumes it is a WSS Document. Now, I realise that not everyone keeps the default "/sites/" container for their WSS sites so, this is configurable. If your WS Sites container was changed at install/configuration time, then update the new custom property for the webpart as shown below! I was recently asked how to change the search results for SharePoint Portal Server 2003 so that when a user selected a document from the search results, instead of actually opening the document which is the current default behaviour of SPS, that the user should be directed to the documents' details page within the relevant WSS site. This is because typically some users actually want to view the documents details before deciding to open it. This also gives the user the ability to check out the document before opening it, etc.. I said previously that I would post the sample web parts that show the ICellProvider, ICellConsumer, IRowProvider, IRowConsumer, IFilterProvider and IFilterConsumer in action so here they are. Just click below for the appropriate Visual Studio Porject files. Please note that these samples are written in VB.net There is a bit of setup that you will have to go through that I will try to explain here. With the CABS installed, you should be already to go! CellConnections Sample Go to the site you want to use and then select to add web parts. From the Virtual Server Gallery, arrange the web parts Demo21: Cell Provider, Demo22: Cell Consumer (Summary), Demo23: Cell Consumer (Fields) and Demo24: Cell Connections (Script) onto the page and arrange as below: Wire them up and then when you click on the Cell Provider List and then CellReady, you will see the other parts change appropriately. RowConnections Sample Go to the site you want to use and then select to add web parts. From the Virtual Server Gallery, arrange the web parts Demo41: Filter Provider, Demo31: Row Provider, Demo32: Row Consumer (Summary), Demo33: Row Consumer (Video), Demo34: Row Consumer (Details) and Demo35: Row Consumer (Box Artwork) onto the page and arrange as below: Wire them up and then when you click on the button on the Row Provider part, the others should respond appropriately. For the fully built CAB and source files, the Visual Studio Project directory, please go here. I would be really interested to hear if this works for you and if you make any modifications to it, what they are so I can get a sense of what you think is missing from the out-of-the-box results. Recently, I presented at the UK Office Developers Conference at Heathrow. One of the subjects I spoke about was enhancing and extending the Microsoft Office SharePoint Portal Server 2003 Search and I did say that I would post some of that information, as well as the samples I used (including the web part connection samples), up on my blog. So, this is the first of a few posts and is about explaining the format of the Search Results and the functions that you as a developer can use to override it to fix it up to look like YOU want. The next post will be one of the custom Search Results web parts that I demonstrated that uses one of the override functions to add a new button (This will be the built version of a previous post here) So... The Search Results is split into a number of Rows and Columns and looks like: You can see that each major functional area of the Search Results can be referred to uniquely. Now, you can control what each element looks like by using the following functions: protected virtual string GenerateHtmlForItemIcon ( System.Data.DataRow objectDataRow, int iIndexOfItemInDataSet, int iIndexOfItemInGroup, string strElemIDPrefix) Generates the HTML that displays the icon for the specified DataRow object in the search result set. Generates the HTML for the specified column of data for the specified DataRow object in the search result set. Generates the HTML for the specified column and row of data for the specified DataRow object in the search result set. Generates the SQL Full-Text Search Syntax query that produces the current result set. • strKeyword: List of keywords specified for this query. The next post will demonstrate how to use the GenerateHtmlOneRowForOneItem function in C# Made some progress on SPUM2003 and I have now decided to release version X1.1 for you guys to look at. Please be aware of the following: Whats in over version X1.0 Stuff that hasn't yet made it: Stuff that is odd: If you find any bugs with user management in WSS, then please leave information on how to reproduce as feedback to this post please! Also, any suggestions on how you would like to see the interface changed to make it more intuitive. Download version X1.1 Dan has just managed to get a couple of new articles published on MSDN that are concerned with branding SharePoint Sites. This is a must read for anyone who is serious about customising the look'n'feel of SharePoint. Please go and check it out. Just an update on this tool for people interested and also because I am off to New York for a few days :-) I got laughed at by a good friend of mine due to its boring name so I had a think and now call the tool SPUM2003 Along the lines of SPIN/SPOUT for SharePoint. Well, that is my rationale for choosing the name. I have done some more work and now the code operates on WSS site security by Adding, Deleting and Editing users and then displays their details, Alerts, Groups, Cross-Site groups. I have started to make the UI look a bit better and also in the process of enabling Site Group editing/management. Haven't got the SPS Area security bit working but it will shortly. So, I am in the process of writing a SharePoint User Manager Windows Application in order to help out in this area! Below is a screenshot of the utility to-date (Note: This is version X1.0 and as such has restricted functionality) Note: The SharePoint Object model is not "remotable" so it has to be run on the SharePoint server itself. In the Server textbox, you can control which portal is opened. i.e. If you leave the default, it will attempt to scan the default port (typically 80) and render out the sites from there. If you had a portal on a different port that you want to examine, just enter "<server name>:<port>". If I had a portal on port 100 on my development machine, I would enter "SPSNIGELBRI:100". Then select the "Go" button. Do not prefix with "HTTP://" This should render out something like: So, you can see all of the site collections on your portal. Now, just drill through them and you should see what users have what roles on each of the sites and also receive an indication if security for the selected site is inherited from the parent or not. This utility is just really an exercise into the SPUser object and displaying what information is available for each user. Please leave me any comments on the utility to date in terms of functionality that you think would be useful to see! Currently, for the next version, the following is intended to be working: or Download version X1.1 Please check back soon for an updated version! Overview This post tries to describe the process of writing a .NET server-side control for Microsoft Windows SharePoint Services There are a number of ways to write add-in code for SharePoint Technologies, the most notable being web parts. However, these do not always help in the customisation of a site. Sometimes, we need to be able to change the SharePoint look'n'feel and web parts do not integrate seamlessly in an area we want to control.. If we consider that SharePoint is an ASP.NET application, we are able to use other .NET techniques to deliver the look and customisation we want. One of those methods is via compiled controls. Most commonly used would be user controls but we can also write server controls Mini-Navigator In this post, I will be describing how I wrote a server control that I call a Mini-Navigator. See below: This is a server control written in VB.NET. It initially outputs the logged on user information, this is because when I am sometimes testing, it is really easy for me to forget with IE session is representing which user. Then, it should give you the current site, the parent site to you and the children site beneath you. So, it renders out 1 level above and 1 level below and displays the appropriate icon for the site. Hovering on the site will display the sites' Title/Description information and clicking on the site will navigate the current window to it. If you wanted to open the site in a new window, remember to hold the <Shift> key when clicking on the site. In the above example, if I clicked on the site "bridport", you would get: So that’s what the control looks like, so how can we put this into a SharePoint site? Step 1 - Building the control This is just using one of the default templates that comes with Visual Studio .NET 2003. The control that I will be building here will be going into the Global Assembly Cache (GAC) on my SharePoint server so I need to strongly name it. So, open Visual Studio .NET 2003, then "File/New/Project". In your language of choice, you should see a template called "Windows Control Library". This is the template I will be using. The control uses the WSS Object Model (OM), se we need to add a reference to "Windows SharePoint Services" Note: This assumes you are developing on a machine that has WSS installed already. If not, you will need to obtain the appropriate DLLs and add them. You will need to set up your own appropriate namespace. My code looks like: MiniNavigatorSC.VB Imports System.ComponentModel Imports System.Web.UI Imports Microsoft.SharePoint Imports Microsoft.SharePoint.Utilities Imports Microsoft.SharePoint.WebControls <DefaultProperty("Text"), ToolboxData("<{0}:MiniNavigator runat=server></{0}:MiniNavigator>")> Public Class MiniNavigator) Dim oSite As SPWeb = SPControl.GetContextWeb(Context) Dim oSubWebs As SPWebCollection = oSite.Webs Dim oSubWeb As SPWeb Dim sOutput As String = "" Dim sImg As String = "" 'Output the CurrentUser so that the user is aware of who they are logged in as. sOutput = "<table><tr><td colspan=2" sOutput += oSite.CurrentUser.LoginName.ToString + "</TD></TR>" 'If at a site collection, the ParentWeb object will be nothing If (oSite.IsRootWeb) Then 'Site Collection level. The parent maybe a portal sOutput += "<tr><td valign='top'><img src='/_layouts/images/opx16.gif' alt=''></td><td width=90%" + oSite.PortalName.ToString + "</a></td></tr>" Else Dim sSiteName As String = "" If (oSite.ParentWeb.Name.ToString.Length <> 0) Then sSiteName = oSite.ParentWeb.Name.ToString Else sSiteName = Right(oSite.ParentWeb.Url.ToString, oSite.ParentWeb.Url.ToString.Length - oSite.ParentWeb.Url.ToString.LastIndexOf("/") - 1) End If sOutput += "<tr><td><img src='/_layouts/images/" + getGif(oSite.WebTemplate.ToString, oSite.Configuration) + ".gif' alt=''></td><td width=90%<a href='" + oSite.ParentWeb.Url.ToString + "'>" + SPEncode.HtmlDecode(sSiteName) + "</a></td></tr>" 'The current site sOutput += "<tr><td><img src='/_layouts/images/" + getGif(oSite.WebTemplate.ToString, oSite.Configuration) + ".gif' alt=''></td><td class='ms-selectednav' width=90%" + oSite.Title.ToString + "</td></tr>" 'The child sites For Each oSubWeb In oSubWebs sOutput += "<tr><td valign='top'><img src='/_layouts/images/" + getGif(oSubWeb.WebTemplate.ToString, oSubWeb.Configuration) + ".gif' alt=''></td><td width=90%<a href='" + oSubWeb.Url.ToString + "'>" + oSubWeb.Name.ToString + "</a></td></tr>" Next oSubWeb sOutput += "</table>" output.Write(sOutput) End Sub '+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Private Function getGif(ByVal sType As String, ByVal iConfig As Integer) Select Case sType Case "MPS" Return "MTGICON" Case Else Select Case iConfig Case 0 'Team Site Return "STSICON" Case 1 'Blank Site Return "MSCNTVWL" Case 2 'Document Workspace Return "DOCICON" Case Else 'Set as a team site End Select End Select End Function End Class AssemblyInfo.VB Imports System Imports System.Reflection Imports System.Runtime.InteropServices ' General Information about an assembly is controlled through the following ' set of attributes. Change these attribute values to modify the information ' associated with an assembly. ' Review the values of the assembly attributes <Assembly: AssemblyTitle("Mini-Navigator")> <Assembly: AssemblyDescription("This is a server control that can be added to a SharePoint site to display the parent and children site information")> <Assembly: AssemblyCompany("Bridport")> <Assembly: AssemblyProduct("")> <Assembly: AssemblyCopyright("")> <Assembly: AssemblyTrademark("")> <Assembly: CLSCompliant(True)> 'The following GUID is for the ID of the typelib if this project is exposed to COM <Assembly: Guid("ED83C45C-4EBC-4a51-89AE-E95EB38C4AB1")> ' Version information for an assembly consists of the following four values: ' Major Version Minor Version Build Number Revision ' You can specify all the values or you can default the Build and Revision Numbers ' by using the '*' as shown below: <Assembly: AssemblyVersion("1.0.0.0")> <Assembly: AssemblyKeyFile("../../MiniNav.snk")> Click here to download the project. The project contains the files previously mentioned plus the MiniNavigatorSCCS.cs which is the C# equivalent code for all you C#'ers out there! Note: you will have to add your own KeyFile using SN -K called MiniNav.SNK and located in the project directory at the same level as the .VB files Now we have the control built and it is ready to deploy to the SharePoint Server So, what does the code do? Nothing hard or tricky really, just gets the site context from: Dim oSite As SPWeb = SPControl.GetContextWeb(Context) Then this context is used to get the Current User information from the WSS OM property oSite.CurrentUser.LoginName.ToString We then just work through the object model to get the parent and children site information. To get the correct icons for the workspace, we use: oSubWeb.WebTemplate and oSubWeb.Configuration I do make an assumption here that you are using the default site definitions. If you have modified these, you may need to change this piece of code Step 2 - Deploy the control As I may want to use the control on any site on my SharePoint server, over a number of portals, I copy the built DLL, MiniNavigator.DLL, to the server GAC. The control sits on a WSS default page, so we have to edit the sites' default.ASPX. There are a number of ways to update this file using many editors such as FrontPage2003 but… I normally edit the page by first mapping a web folder to the site (REALLY IMPORTANT NOTE: Make sure you keep a copy of the original DEFAULT.ASPX file before you edit it just in case!!!!) I would then drag'n'drop the default.aspx file to my desktop and open it using NotePad and add the control reference thus: The line we are interested in is: …. <%@ Register Tagprefix="NMB" Namespace="MiniNavigator" Assembly="MiniNavigator, Version=1.0.0.0, Culture=neutral, PublicKeyToken=7dd16f83bffd70b1" %> With the reference in, we now need to place the control at the appropriate location within the ASPX file. As, this control is to be displayed on the navigation area of the site, I put it as shown below: Again, the lines I added are: … <TR> <TD style="padding-left:0px;padding-right:0px"> <img width=1px src='/_layouts/images/blank.gif' ID='100' alt='Icon' border=0> <NMB:MiniNavServerControl </TD> </TR> NOTE: You may find it easier to use FrontPage2003 to actually do this manipulation as it is easier to locate the actual HTML where you want to drop the control Now, save the file and drag it back to the WSS web folder where you originally dragged the file from, overwriting the original file. (This will create a custom version of the file and it is handled slightly differently by SharePoint from default sites without customisation. These files are more commonly called "one-off" or "unghosted") The final part of the deployment is to update the appropriate web.config file to allow the control to render. So, open the web.config and add the reference for the control into the <SafeControls> section: The line is <SafeControl Assembly="MiniNavigator, Version=1.0.0.0, Culture=neutral, PublicKeyToken=7dd16f83bffd70b1" Namespace="MiniNavigator" TypeName="*" Safe="True" /> Now do an IISRESET and view the site! This article intends how to use the override functionality to deliver up your own rendered search results. Typical Problem I see lots of people asking questions on how to customise the Microsoft Office SharePoint Portal Server 2003 search results page with the typical question being “How can I edit the Search.ASPX file?” When trying to modify the Search Results rendered by SPS2003 there are some concepts and information that you should be aware of: To be able to view or manipulate these web parts, you need to enter the following command into your browser of choice (replacing <SPSserver> with your SPS server name and you must have the necessary permissions at the portal level) http://<SPSserver>/search.aspx?mode=edit&PageView=Shared This will expose the Modify Shared Page link at the top right of the page Now by selecting “Design this page”, you will see the 2 default web parts that are deployed at portal provisioning time. Displaying your own Search Results To be able to change the way the search results are rendered to the user, you have to override the default Search Results web part. Some sketchy information can be found at: This shows you the SearchClass that you can you programmatically override. Sample This sample is quite non-invasive into the Search Results. It just adds a new button as you can see below: By selecting “Item parent” at 1, will display the parents’ site listing: (This is the only site collection in this instance of a portal) By selecting “Item parent” at 2, will display the document library that the documents live in; Show me the code On lots of sites, I can normally find descriptive information on what I need to do but I always think that the best way to demonstrate a concept or approach is through the use of some sample code that you can take and modify for yourself. Below is some code I wrote in C# that shows how to override the default search results and add a new button at the Actions level for a search result hit. The button enables the user to go to the parent level for particular items such as Documents (will go to the Documents Document Library), List Items (will go to the List Items List) and Sites (will go to the Sites parent Site) using System; using System.ComponentModel; using System.Web.UI; using System.Web.UI.WebControls; using System.Xml.Serialization; using Microsoft.SharePoint; using Microsoft.SharePoint.Utilities; using Microsoft.SharePoint.WebPartPages; using System.Data; using System.Text; namespace SearchResultExtension { // <summary> // This webpart has been developed to enable users to be able to navigate // easier to a particular items parent container. // Items that are supported by this functionality are: // . Documents within SharePoint // . Lists within SharePoint // . SharePoint Sites // . Files trawled from NTFS (file:// prefix'd) // </summary> [DefaultProperty("Text"), ToolboxData("<{0}:Override runat=server></{0}:Override>"), XmlRoot(Namespace="SearchResultExtension")] public class Override : Microsoft.SharePoint.Portal.WebControls.SearchResults { // Use the following constants to manage what pages are rendered for which result type const string c_DSP_SiteCollection = "/SiteDirectory/Lists/Sites/AllItems.aspx"; const string c_DSP_SubWeb = "/_layouts/1033/mngsubwebs.aspx?view=sites"; const string c_DSP_Lists = "/_layouts/1033/viewlsts.aspx"; // // The following constant defines the default button name const string c_ButtonDefault = "Item parent"; private string _button; public Override() { _button = c_ButtonDefault; //Initialise private variables } [Category("Custom Properties")] //Create a custom category on the property sheet. [DefaultValue(c_ButtonDefault)] // Assign the default value. [WebPartStorage(Storage.Personal)] // Property is available in both Personalisation and Customisation mode. [FriendlyNameAttribute("Items' parent button display name.")] // The caption that appears in the property sheet [Description("Type the parent button name.")] // The tool tip that appears when pausing the mouse pointer [Browsable(true)] // Display the property in the property pane. [XmlElement(ElementName="Button")] // The accessor for this property. public string Button get { return _button; } set _button = value; } //******************************************************************************** //Description: // This webpart is built to override the standard default out of the box Search // Results webpart functionality. The part adds a new option on the search // results page to be able to navigate to specific items parent folder. //Author: //Date: // ??th October 2004 //Version: // V1.0.0.0 //Modifications: // Who When Why // NB **/**/** Initial webpart creation. ) base.GenerateHtmlOneRowForOneItem(oneDataRow, sbRenderRowHtml, rowID, strStyleClass, iIndexOfItemInDataSet, iIndexOfItemInGroup); if (rowID == 3) //This is the row where the actions are! string sHref = GetRowValue(oneDataRow, "DAV:href"); //Get the URL for the item string sFCC = GetRowValue(oneDataRow, "DAV:contentclass"); //See what sort of item it is if (sFCC.StartsWith("STS_")) //It is a SharePoint internal item { sHref = sHref.Substring(0, sHref.LastIndexOf("/")); //Format the URL to remove the filename switch (sFCC) //We need to do different things to different items to render the parent { case "STS_Site": //Site Collection sHref = sHref.Substring(0, sHref.LastIndexOf("/")) + c_DSP_SiteCollection; break; case "STS_ListItem_300": //Site collection but to be treated differently case "STS_Web": //WSS Subweb sHref = sHref + c_DSP_SubWeb; case "STS_ListItem_DocumentLibrary": //List item within a Document Library case "STS_List_DocumentLibrary": //WSS Document Library string sListDisp = GetRowValue(oneDataRow, "DAV:displayname"); //Get the displayname sHref = sHref.Substring(0, sHref.LastIndexOf("/" + sListDisp + "/")) + c_DSP_Lists; case "STS_List_Announcements": //WSS List sHref = sHref.Substring(0, sHref.LastIndexOf("/Lists/")) + c_DSP_Lists; //Need to cut off /Lists also case "STS_Document": //A SharePoint specific file. Do not touch return; default: //Anything else, ignore return; } if (sHref.Length > 7) //Bigger than "https://" i.e. Make sure we have something to replace with)); } } else if (sHref.StartsWith("file://")) //on an external file)); private string GetRowValue(DataRow dbRow, string strUri) DataColumnCollection rowColums = dbRow.Table.Columns; if (rowColums.Contains(strUri)) return dbRow[strUri].ToString(); else return ""; } } How the code works The class is set to Override and class inherits from Microsoft.SharePoint.Portal.WebControls.SearchResults. Beneath the class definition, a number of constants are exposed. These constants can be used to direct what pages are displayed for particular items. For example, a WSS List can be returned in the search results and the parent container for this item is really just the site where it resides. However, SharePoint has a number of pages that render just the lists of the sites that presents a much more intuitive container. const string c_DSP_SiteCollection = "/SiteDirectory/Lists/Sites/AllItems.aspx"; This is used when a site collection item is returned by the search. It directs the browser to the portals SiteDirectory listing view const string c_DSP_SubWeb = "/_layouts/1033/mngsubwebs.aspx?view=sites"; This is used when a sub web is returned in the search results. Note that this site is not at the site collection level. The view that is used is for the parent site of the subweb and the subweb collection for the parent site is displayed. const string c_DSP_Lists = "/_layouts/1033/viewlsts.aspx"; This is used when a list is found in the search results and points the browser to the list view for the parent site. Then, the code looks to set up the custom property for the button name. The default string is maintained by the logical const string c_ButtonDefault = "Items parent";. The custom button property is then set up by the following code snippet. This section controls the page where the control is displayed (Category property) and the button description, FriendlyNameAttribute: private string _button; public Override() _button = c_ButtonDefault; [Category("Custom Properties")] [DefaultValue(c_ButtonDefault)] [WebPartStorage(Storage.Personal)] [FriendlyNameAttribute("Items' parent button display name.")] [Description("Type the parent button name.")] [Browsable(true)] [XmlElement(ElementName="Button")] public string Button get { return _button; set _button = value; The code then defines the override function that is the main driver of the web part: ) This is the routine that is called by default by the SharePoint process on a result list of search items so that they can be rendered by the Search Results web part. The data row, oneDataRow contains the data that needs to be formatted. The search page is built up from the hits that are returned, to a maximum of 40 items per page by default, and this data is organised by oneDataRow. For each of the hits, the GenerateHtmlOneRowForOneItem is called. When this is called, the string builder, sbRenderRowHtml, is appended to with the current row information from oneDataRow. So, we allow the base call to execute that would add the standard buttons such as Add to My Links | Alert Me | Item details. We then perform some tests to see if we are working on a known object type and should therefore be adding the Item Parent button. The first test is just to see if we are actually on the correct row for the buttons. This is known as row 3 and contained within the integer rowID. If the row is anything but 3, we exit from the code and allow SharePoint to continue as normal. If the rowID is 3, then we pull the items’ HRef property from oneDataRow. The HRef property is passed in and contains the full URL to the item that is currently being rendered. We must remember that all items known to SharePoint are URL addressable. We extract the property from the data row by passing it, along with the property we are looking for, DAV:href, to another routine called GetRowValue The routine passes back the property asked for, DAV:href, and it is assigned to the string sHref. We also then use GetRowValue but request the value for DAV:contentclass. The contentclass of the object tell us whether or not we should be handling the item at all as we only want to act on known object types. To see if we may need to add the button, we perform a test on the first 4 characters of the content class. If the item begins with STS_ then it is a SharePoint object and we may need to update with our new button. If the contentclass does not begin with STS_ then we may still need to add our button as long as the sHref string begins with file://. If neither of these conditions is met, then the routine is exited and normal SharePoint functionality continues. Next we truncate sHref from the last “\”. This is in effect dropping the filename or object name from the string so that we have a more representative URL for the parent object. Now we perform a switch test on the contentclass and we are looking for: Then, a check if made on the length of the formatted URL in sHref just to ensure that it contains a valid address. At this stage, we have a formatted string that we want to insert into the string builder that will be used for rendering the results to the user. This string is stored in sbRenderRowHtml. To be able to perform string manipulations on it, we coerce the string builder type into a string called sSrcResPage Another string is created called sAnchor, and this will contain the formatted HTML codes for our new button. The string is built up using normal HTML code and the sHref that has been built up as the target of the anchor tag. UrlEncodeAsUrl is a standard routine available in the SharePoint OM to replace particular characters with their printable form. this._button uses the button name that we have configured for it. A search is then performed on the string version of the page being built stored in sSrcResPage for the particular </A> string which denotes the end of the last button. This will return to us the last position in the string for the current item, of the last button. We can then add our newly formatted button stored in sAnchor after it. Sample DWP file You can use this sample to build a DWP for the web part. (This assumes a strongly named assembly so you would have to update with your own projects details.) <?xml version="1.0"?> <WebPart xmlns=""> <Description>This webpart displays the ability to goto particular items parent container</Description> <Assembly>SearchOverride, Version=1.0.0.0, Culture=neutral, PublicKeyToken=…………….</Assembly> <TypeName>SearchResultExtension.Override</TypeName> <Title>Search Results Override</Title> <FrameType>None</FrameType> <ResultListID xmlns="urn:schemas-microsoft-com:sharepoint:DataResultBase">sch</ResultListID> <MaxMatchingItemsNumber xmlns="urn:schemas-microsoft-com:sharepoint:DataResultBase">40</MaxMatchingItemsNumber> <Button xmlns="SearchResultExtension">Items parent</Button> </WebPart> Happy overriding! Microsoft Office Portal Server 2003 and Microsoft Windows SharePoint Services use a number of different databases for different tasks. These databases are: Trademarks | Privacy Statement
http://blogs.msdn.com/nigelbridport/
crawl-002
refinedweb
5,097
53.51
Namespaces in F# April 2, 2017 Leave a comment The main purpose of namespaces in F# is to group related types in one container and avoid name clashes of types. Namespaces are declared using the “namespace” keyword like in many other popular languages. A significant difference between C#/Java and F# is that F# namespaces can only contain type declarations. They cannot hold any values, i.e. anything that’s declared using the “let” keyword. They are only containers for types, i.e. elements that are declared using the “type” keyword. A single F# source file (.fs) can contain multiple namespaces. A namespace starts with the namespace declaration and ends either at the end of the source file or at the next namespace declaration. Here’s an example with two namespaces where each namespace has a single type: namespace com.mycompany.domains.book type Book = { title: string; numberOfPages:int; author: string } with member this.takesLongTimeToRead = this.numberOfPages > 500 namespace com.mycompany.domains.address type Address = { street: string; city: string; number: int; } If we try to add a value in a namespace then we’ll get a compiler error: namespace com.mycompany.domains.address let f = printf "Hello" Namespaces cannot contain values. Consider using a module to hold your value declarations. It’s of course possible to use types from different namespaces. The F# equivalent of a using/import statement is “open”: namespace com.mycompany.domains.order open com.mycompany.domains.address type Order (product:string, value: int, address:Address) = member this.Product = product member this.Value = value member this.Address = address Here’s an example of building a Book type and printing its title and whether it takes a long time to read from the Main entry point: namespace com.mycompany.entry open System open com.mycompany.domains.book module Main = [<EntryPoint>] let main args = let myBook = {title = "F# for beginners"; numberOfPages = 600; author = "John Smith"} let longTimeOrNot = myBook.takesLongTimeToRead printfn "Book title: %s, takes long to read: %b" myBook.title longTimeOrNot let keepConsoleWindowOpen = Console.ReadKey() 0 The ‘keepConsoleWindowOpen” value is only there to keep the console window open so that we can see the output. Here’s the output: Book title: F# for beginners, takes long to read: true View all F# related articles here.
https://dotnetcodr.com/2017/04/02/namespaces-in-f/
CC-MAIN-2022-40
refinedweb
376
51.24
Slashdot Log In New DNS Software to Address Security Holes Ben Galliart writes "The Internet Software Consortium released on Monday another patchlevel to their ever popular BIND software package. The ISC has recommended that everyone using BIND upgrade to this latest version (BIND v8.2.2 patchlevel 3) due to security holes existing in previous versions. If you are using a version previous to BIND 8.2.1 then pay special attention to the ISC configuration hints on a new required TTL setting which should be added to every zone file. More information on the TTL setting is also available in RFC 2038. On a side note, those who enjoy the bleeding edge should read the ISC future plans page which now has information on the thread-safe/multi-processor ready BIND version 9 (major rewrite) going beta in January. " This discussion has been archived. No new comments can be posted. New DNS Software to Address Security Holes | Log In/Create an Account | Top | 80 comments | Search Discussion The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way. Daemon security in general (Score:4) Each daemon is starting to add its own security (Cyrus IMAP has several options) and they aren't inter-compatible. If there were a common library they were based on, it could be improved upon by all parties involved. Hate to point out one of the greatest benefits of open source -- shared library code that you can modify -- and also one we are bad at actually doing. - Michael T. Babcock <homepage [linuxsupportline.com]> Waaah! I want the alpha release, now! (Score:5) Sure, it's not "prime time", yet! So? Give a bunch of computer phreaks the source, and they'll patch more bugs in a day than the entire BIND development team can find in a week. That's not to put the BIND team down, but a closed testbed network (typical for this kind of work) is not going to find bugs that'll crop up in the real world. Re:Daemon security in general (Score:2) This is going on in the OpenBSD community, but don't forget that because of stupid and worthless crypto-export laws and IP 'patents', developers and users in the US may not be able to use it. Sucks if you're a developer or user in the US, but it doesn't seem to be holding OBSD people back any... Your Working Boy, NAMED-XFER has a bug in 8.2.2-P3 - fix enclosed. (Score:4) - Strabo The following change should correct the problem. Index: src/bin/named-xfer/named-xfer.c =============================================== RCS file: retrieving revision 8.88 retrieving revision 8.89 diff -c -r8.88 -r8.89 *** named-xfer.c 1999/11/08 23:01:39 8.88 --- named-xfer.c 1999/11/09 20:36:54 8.89 *************** *** 2195,2201 **** zp->z_origin, zp_finish.z_serial); } soa_cnt++; ! if ((methode == ISIXFR) || (soa_cnt > 2)) { return (result); } } else { --- 2195,2201 ---- zp->z_origin, zp_finish.z_serial); } soa_cnt++; ! if ((methode == ISIXFR) || (soa_cnt >= 2)) { return (result); } } else { Negative Caching Info (Score:3) Here's a useful chunk from the RFC: . What's now mandatory-- . HTH. Yours Truly, Dan Kaminsky DoxPara Research What about the problems with the protocol itself? (Score:3) If you need to point-and-click to administer a machine, Bad, but not suprising. (Score:1) This is very bad, but I don't think we should be suprised. One of the *BSD project leaders said after the 4.9.? remote exploit that he doesn't trust the BIND code. It's possible to run the name server in a chroot jail as a non-root user. That won't solve the DoS problems, but at least you won't get "owned". Hmm... I had a URL with instructions but now I can't find it... Anyone have a link? But is it Free? (Score:1) Dents (Score:1) Re:Daemon security in general (Score:2) Now there -is- a definition for IPSec authentication, but for the machine as a whole. It doesn't work at a finer level than that. Then, there are X.509 certificates, and SSL. Plenty of libraries, there, and certificates are easy enough to roll. Problem there is that it would require more extensive work on the part of the maintainers. It -would- be possible, though. The question I'll ask is this - for the purpose of authentication, would it be sufficient to authenticate the machine, or would it be essential to authenticate the application? If the former, then simply install IPSec or SKIP on all the machines that use daemons, and use IPSec/SKIP connections between those machines. If the latter, get OpenSSL and modify the application to work through it. No help from NSI (Score:3) non-buggy software exports? (Score:1) Elaboration (Score:3) I'd like to see something along the lines of an authenticaed ident server as a necessary part of this protocol. It wouldn't be a daemon running (like ident) to identify callers, but the daemon (like BIND) connecting to the remote daemon would identify who it is (SSH2 style) and what machine it is running on (also SSH2 style). Why identify both the machine and the daemon? Because a daemon could be loaded up by a (bad) user and run on a different port, linked against the authentication library and attempt to send bad data "out there" to other machines. In this way, the daemon itself would have identity information either created at compile-time (and linked in via header) or some other method. Of course, the daemon would have to be unreadable by anyone but root (or its own username), but that's ok, right? Sending data over a secure connection works just fine if you don't want people snooping, but authenticating a daemon requires more than that. In the case of DNS, one of the big factors is authenticating that the DNS server you've connected to is indeed who it claims to be. XNTP3 does this as well (in a basic form) if you want to create time peers that authenticate off each other. RFC ideas anyone? - Michael T. Babcock <homepage [linuxsupportline.com]> A quick tidbit about NCACHE (Score:2) 1. The minimum TTL for that zone 2. The default TTL for records without one specified 3. The TTL of negative responses #1 was never used, #2 is only relevant on the master server (since TTL is explicit during a zone transfer), and #3 is now the ONLY meaning for the SOA Minimum field. What this means is that to fulfill #2 (default TTL), you will now have to add the $TTL directive to all zone files you are master of, and modify the SOA Minimum field to something more appropriate for the NCACHE TTL. Your zone file would look something like this: ; Example of $TTL and SOA $ORIGIN whatever.org. $TTL 86400 ; Default TTL (1 day) @ IN SOA ns1.whatever.org. postmaster.whatever.org. ( 1999111001 ; Serial 10800 ; Refresh (3 hours) 3600 ; Retry (1 hour) 604800 ; Expire (1 week) 1800 ) ; NCACHE TTL (30 minutes) These values are, of course, not set in stone - just an example. However, most people set their SOA Minimum field to somewhere around 1 day, give or take, to reduce load on queries to their server. This value is likely to be too high to cache NEGATIVE answers, so should probably be adjusted. As for the $TTL directive, just add it before the SOA record, usually with whatever was the previous value for the SOA Minimum. If it is not designated, BIND will issue a warning, but will use the SOA minimum instead. Annoying to see all the warnings if you have a large number of zones, but it doesn't cause problems other than the logging: Nov 10 12:15:12 thanatos named[14344]: Zone "whatever.org" (file whatever.org.db): No default TTL set using SOA minimum instead This information only pertains to BIND 8.2.0 and above (including 8.2.2-P3, obviously). - Strabo Re:Negative Caching Info (Score:2) This question isn't directed at you, but I'm curious...outside of posts to slashdot, or writing in general, does anyone actually use the term "grok" in conversation? I have yet to hear one person say it and I work with some serious geeks. Re:Waaah! I want the alpha release, now! (Score:1) Please forgive ISC for choosing a smaller, dedicated development group over a mass of chaotic hobbyist whiners. Don't take that as a flame. It's sarcasm. Let them do the kickass job they always do and release a version when they're ready. Re:Daemon security in general (Score:2) "We hope you find fun and laughter in the new millenium" - Top half of fastfood gamepiece Grok (Score:1) Mishandled (Score:2) Drop the daemon, come out with your hands up! (Score:3) This is the FBI. We've recently gotten word of a new kind of internet protocol called 'dns'. We require that you immediately install backdoors into the protocol so we can secretly monitor all dns communication. It is not relevant that information in dns servers are publically available. This will be done at once. We will not provide assistance to you to do this. Thank you for your continued compliance. Sorry.. given the proxmimity to the wiretapping article.. *somebody* had to do it! =) -- Re:Negative Caching Info (Score:1) Sure, I hear/use it all of the time. It is not a geek thing per se, but a sci-fi thing. Not all geeks are sci-fi fans, and not all sci-fi fans have read Stranger in a Strange Land, hence, the paucity of grok usage in your vicinity. You grok? grok is getting rare (Score:1) Re:Negative Caching Info (Score:1) Slashdot is great. You ask a question and it gets answered...and quickly too. I thought I was going to get flamed for asking a "stupid" question, so thanks to everyone who responded kindly. Re:Daemon security in general (Score:2) It's possible, but it's not trivial to do well. What you need to do is embed some kind of certificate within the application, in such a way that it is not practical for a third-party to extract it and embed it into another application. Because the source for the application is going to be open, it will always be possible for a third-party to monitor what goes into and out from any decryption algorithm. This leaves two possibilities for protection: Both these methods have strengths and weaknesses. They both rely on obscuring the security, by putting it in plain sight, which is great for Open Source code, but both can be defeated, so they're not perfect. Re:Negative Caching Info (Score:2) I'm curious...outside of posts to slashdot, or writing in general, does anyone actually use the term "grok" in conversation? I have yet to hear one person say it and I work with some serious geeks. Oh yeah. I snarfed Win95 when it came out, and I've been glorkonzed ever since. Who could grok the sucker? ====== "Rex unto my cleeb, and thou shalt have everlasting blort." - Zorp 3:16 Re:Daemon security in general (Score:2) That was the one of the two issues that came to mind after I thought about it. 1. With the source being open of course, someone with enough skill would be able to hack a fake daemon and replace the true one. This would still knock out most of the lil scrpt kiddies who like to play until someone came up with an easy to use root kit. 2. How do you handle authentication without making it too complicated to implement? This would certainly be easy for trusted sites but if that were the case you would use IPSEC and vpn the remote sites with you anyway. guess it was just wishful thinking on my part "We hope you find fun and laughter in the new millenium" - Top half of fastfood gamepiece more Y2K zealots.. haha (Score:1) Either that or they're really scamming some companys big time fixing this 'bug.'. I prefer the simple 'increment the number' way personally. WILL NEVER WORK! (Score:2) Just went to the LISA conf yesterday for a tutorial on BIND given by Vixie himself. He spent a bit on the logistical/technical issues DNSSec. Just a quick run down. I have nameservers that can get well over a 1000 requests per second (not that they actually answer them all:). I'm not going to encrypt every response, it's just not that important. It is far more practical to use this as an internal security measure but for the internet - don't make me laugh. Mirror (Score:1) Re:Daemon security in general (Score:3) You then pass the combined values over to the client, who XORs it with the same chunk of the same piece of compiled code. What's revealed is that fragment of the certificate. This is, of course, a very trivial method. To get something more tamper-proof, you'd want to extract discontinuous pieces of the code & certificate, using some complex function to determine which bytes you wanted. This would give you something that was sufficiently tamper-proof to resist script kiddies and novice crackers, whilst remaining very simple to implement. The pseudo-code would look something like: recv(client, random_gen_seed); seed_generator(random_gen_seed); for (i=0;i++;i offset = my_random_fn(i); value = *(lib_ptr + offset) xor *(cert_ptr + offset); *buffer++ = value; } xmit(client, buffer); From here, you can add further complexity, to your heart's content, but this should offer you enough security to block trivial cracks. what about microsoft dns? (Score:1) PATCH downloadable (Score:1) Possible to run BIND on arbitrary address/port? (Score:1) I'd like to run two dns servers on the same machine, one for queries coming from the internet and the other for queries coming from the lan. Yeah, I know. The "right" way to do it is to run the internal dns somewhere besides on the firewall, but I've got a scarcity of Unix boxen (read ONE). Wondered if it was possible to configure BIND to bind to a specific address or port? Perhaps with tcpserver? CERT informed, PATCH is out... (Score:1) "ISC has discovered (or has been notified of) six bugs which can result in vulnerabilities of varying levels of severity in BIND as distributed by ISC. CERT has been notified of all of these issues." Also, "In addition to fetching the bind-src tarball, you will need to fetch and apply the following patch. If you do not apply this patch, your zone transfers may fail." - This is from:- - Strabo Re:Bad, but not suprising. (Score:1) [psionic.com] Re:Daemon security in general (Score:2) If a key is generated at compile-time, it can be stolen ./configure make key .oOo.oOo.oO... src/keyfile.h completed make all make install ... tada The idea I had was to have these keys actually signed by an external program (that uses the same authentication toolkit library) so that you can say, "yes, I trust these keys to be from the daemons at the root servers" and if you geet root server replies from anyone else, you ignore or negative cache that they're bad - Michael T. Babcock <homepage [linuxsupportline.com]> Re:Daemon security in general (Score:1) Here is a possible protocol for an open-source program to authenticate itself. This protocol does not work, because if you have the source code for the hashing function and a copy of the "real" binary, you can spoof the authenticator. Is there a solution? If there is a kernel or library function that hashes the binary code of the calling process, it is possible. The hashing function could be open-source, but the compiled function would have to include an embedded constant known only to the authenticator. I don't know how feasible that is. The idea is if you hacked the source of the hash function and recompiled, you would lose the embedded constant. Of course if you hack the original program, when you call the hash function it will return the wrong value. Just some random thoughts. Nathan Whitehead Re:Daemon security in general (Score:2) This is not a good idea in general.. as it slows inovation and really removes all the benifits of open source. The better approach is to simply desgn the protocolls so that it dose not matter if you hack the daemon. SSH is a perfect example. If I replace you sshd I can not obtain your password.. only your public key, which dose me no good and I could obtain anyway if I had access to replace the daemon. Also, remember that the person running the blessed binary controls it's enviroment and jsut because I'm taling to a blessed binary dose not mean that it can not be tricked into doing something nasty. Example: write a fake X server and a fake libc network interface which interacts with your blessed netrek client to make it do ubernasty things, but if the protocoll had been designed with appropriate constraints this woud not matter. Generally, it is much harder or imossible to design a program to be secure in a hostile eniroment.. so just do all you sensitive stuf in a frendly enviroment. Jeff Re:No help from NSI (Score:2) According to this story [cnet.com], MAPS threatened to put NSI on the RBL list because of the unsolicited email NSI was/(is?) sending to its domain registrants. I think you've got your answer as to why NSI doesn't help with BIND. Buncha greedy bastards. -- A host is a host from coast to coast... Mmm... Vixie... (Score:1) Wouldn't it be nice if... (Score:1) Re:Mishandled (Score:2) Re:But is it Free? (Score:1) Re:WILL NEVER WORK! (Score:1) Your parent zone. The root key signs the > Meaningful encryption would push the 512 byte limit of dns udp packet to a much smaller payload, making the use of tcp for common dns activities necessary thereby tremendously reducing dns performance. DNSSEC doesn't include encryption, only authentication. Yes, the 512 byte UDP limit is an issue, but there is a proposed solution (EDNS). > Meaningful encryption would cause CPU load - period. Imagine the com root servers having to encrypt every answer. This would probably end up requiring an Origin 2000 and remember that 8.2.2 isn't able to take advantage of SMP. Again, there is no encryption. Digital signatures are precomputed, so there's no additional CPU power required on root servers at runtime. Caching servers, which will verify DNSSEC signatures on incoming data, will require more CPU power. There's no reason to encrypt DNS data - it's all public anyway. Authentication is far more important, so that connections can be made to a verifiably correct site. Are you sure you listened to Paul Vixie's presentation? Re:Does anyone here *actually* know anything? (Score:1) It doesn't take a genius to look up IETF working group names. BTW, there is only one 'Internet protocol' in the TCP/IP suite, I don't know of any 'supporting Internet protocols.' Perhaps you could enlighten us. Recommendation: Don't be so smug. If you don't like it, move on. Exporting bind... (Score:1) Securing BIND/DNS (Score:1) grok (Score:1) #define X(x,y) x##y Re:more Y2K zealots.. haha (Score:1). Yeah, we use that scheme here, and I like it a lot. But you can actually use two digits for the increment, which gets around the problem you mention. That's what we do - YYYYMMDDRR. But then again, I've never seen RR get above 05. Dave Re:WILL NEVER WORK! (Score:2) I envisioned (I thought clearly) something more along the "web of trust [linuxsupportline.com]" lines. Smaller ISPs have their keys signed by their larger ISPs (arranged somehow -- not too hard) and larger ISPs can do the same between each other for the sake of most protocols. This would be easier than what DNSSEC is going to require in some circumstances. Since I envisioned a generic library for any type of daemon (with several options, of course, but one underlying security model), many of these systems don't have to be signed by many people at all -- security is desired, so those who want it arrange it. I'm not sure what you're trying to say here. That if I encrypt data it grows? This isn't very true (except for the need for headers, etc.). If the stream is encrypted before being packaged (UDP, TCP), the encryption negotiation would be a packet or two every hour or so and the actual encrypted communications would be the same size, just encrypted. The only added data would be hashes for authentication. You'd want to precompress (to a small degree) of course, seeing as compression is less CPU intensive (in some cases!) than encryption. You end up encrypting less data then and the hash is tacked on to that. Again, I don't care if BIND can (currently) make use of SMP or not; my ideal would be it taking advantage of a library which itself could be SMP capable. I don't buy the CPU intesity argument though because with the exception of high end routers, most machines aren't processing enough data of the type discussed here for the encryption to be significant. I may be wrong For instance, consider a system where a new session key is negotiated by two time servers every hour. The encryption needs to be such that it can't be broken in under an hour or two (a week would be a nice goal here). Simple DES would be sufficient for most cases (although not necessarily best). Of course, the whole point here is that I had not intended to fully flesh out how such a system would run. I do not consider myself fully capable of doing such without heavily referring to how others have already done it (ahem, patents). Mind you, in the case of DNS and your mention of 1000 requests per second, I don't buy that convenience is more important than security in the long run. Computers are becoming much much faster every year. Put a pair of SMP Athlon 700's on a network to handle DNS and caching for a company where previously there were quad P-Pro 200's. Makes for a significant upgrade at about the same cost as the original investment but with significant head room, especially to handle the amount of encryption I'm talking about. My concept of a good system for this would be to have multiple cyphers chosen based on the amount of data being sent to a given location (whether a "stream" can/will be held open to them) and the length of time needed for security. I don't see a DNS packet needing to be authentic for more than an hour or so SMTP and IMAP would need much stronger hashes to make sure E-mail was authentic (especially large companies who go to trial I think it's "doable" and I'd love to see someone like NAI, SSH or Bruce S. fiddle with it seriously. - Michael T. Babcock <homepage [linuxsupportline.com]>
http://slashdot.org/articles/99/11/09/1651211.shtml
crawl-002
refinedweb
3,934
72.56
On Wed, May 14, 2003 at 03:31:08PM +0200, Eric Piel wrote: > Hello, > There is a compile error if not compiling for SMP: > +#ifdef CONFIG_SMP > if (!tasklist_lock.write_lock) > +#endif > read_lock(&tasklist_lock); Yuck. The right way to do this is read_trylock(&tasklist_lock); The observant will have noted: #define write_trylock(lock) ({preempt_disable();_raw_write_trylock(lock) ? \ 1 : ({preempt_enable(); 0;});}) /* Where's read_trylock? */ in include/linux/spinlock.h but that doen't justify _not writing it_ when you need it. -- "It's not Hollywood. War is real, war is primarily not about defeat or victory, it is about death. I've seen thousands and thousands of dead bodies. Do you think I want to have an academic debate on this subject?" -- Robert FiskReceived on Wed May 14 07:16:54 2003 This archive was generated by hypermail 2.1.8 : 2005-08-02 09:20:14 EST
http://www.gelato.unsw.edu.au/archives/linux-ia64/0305/5466.html
CC-MAIN-2020-16
refinedweb
143
68.87
In-Depth The Visual Studio 11 Developer Previews provide a first look at what's coming for .NET developers for a variety of products: Visual Studio 11, the Microsoft .NET Framework 4.5 and Windows 8 "Metro style" development. The preview is presumably complete, but, equally presumably, buggy. Regardless, all the available previews (see "Different Strokes,") provide an opportunity to review where Microsoft wants to take developers in the future. We'll start with Metro. Before installing any of this preview software, remember: No matter how good the uninstallation process is, you should always install beta/CTP/preview software -- really, any software without a "Go Live" license -- on a computer with a hard disk you're willing to reformat. The preview that lets you play with Windows 8 and Metro-style applications is probably the most limited of the Visual Studio 11 tryouts. I followed John Papa's advice in his online Papa's Perspective column (bit.ly/teUjXR) and used Oracle Virtual Box to boot a new virtual machine (VM) from an ISO image of the preview downloaded from the Microsoft site; it was quick, painless and free. What you get is the Express version of Visual Studio 11 with templates for creating Metro-style applications -- but that's all you get. To try out what's new in Visual Studio 11 and the .NET Framework 4.5 in other .NET development environments, you'll need to download the Visual Studio 11 Developer Preview (it includes the .NET Framework 4.5). You can install this preview on any computer or VM running Windows 7, Windows 8 or Windows Server 2008 R2. You should be able to run both the Express and full versions of Visual Studio 11 together on a computer running Windows 8. It's worth mentioning that if you just want to try out Visual Studio 11 without upgrading to the .NET Framework 4.5, you can do that, too. While the Visual Studio 11 Developer Preview downloads and installs the .NET 4.5 Framework, you can compile your applications to any version of .NET. This will let you explore the new features of Visual Studio 11 without committing to upgrading your applications. Also available on the Microsoft site are previews for Visual Studio Team Foundation Server (TFS), Agents that execute tests on multiple computers, the Team Foundation plug-in for the open source Eclipse editor (Team Explorer Everywhere), and the Remote Debugger for debugging on computers that don't have Visual Studio installed. Page Inspector helps you figure out what part of your .aspx file generated the HTML in your page. Finally, a preview for ASP.NET MVC 4 is available, but like the other previews just mentioned it's outside the scope of this article. -- P.V. Windows 8 and Metro-Style Applications Up here in Canada, we're used to two "official languages." Traditionally, that's been the approach in .NET, too -- a Visual Studio/.NET Framework review will focus on support for Visual Basic and Visual C#, with some token comments about C++ and F#. But as the Express preview makes clear, Microsoft is now promoting another new "official" language: JavaScript. While the number of F# developers remains small and the C++ community doesn't see much growth, there are a lot of HTML+JavaScript developers out there. The question is, How many of those developers want to create Windows applications with those tools? Given the point of the Express preview, shown in Figure 1, is creating Metro-style applications, it's worth clarifying what that means. Metro changes the way users get to their applications by applying a kind of radical simplicity. To get access to their applications, users have traditionally integrated three tools: Start menu, task bar and desktop. Thinking back to their original appear¬¬-ances, you can see a constant evolution in their flexibility and sophistication. What Metro does is replace all three with a single interface. There's still a desktop in Windows 8 (at least in the preview), but when you press the Start button, what pops up on the screen is the Metro checkerboard. I think I like Metro, though I'm still fumbling with navigation (primarily, how do I get out of something once I've opened it?). And because the checkerboard extends horizontally, I find that I want a mouse equivalent of the finger swipe on my smartphone that shifts my icons sideways. But other than that, I found myself adjusting relatively quickly -- and this is from a guy who never really took advantage of the Window 7 ability to pin applications together. You may have a Metro-style application in your future. If that's the case, the real issues are: How easy is it to add to the checkerboard, and what does the JavaScript option look like? The answer to the first question: ridiculously easy. After building and debugging your Metro-style application, just click the Deploy choice on the Build menu and a new square appears on your checkerboard. In the preview, it's all XAML-based development in four of the five official languages, so Silverlight and Windows Presentation Foundation (WPF) developers are already on the Metro road. The JavaScript OptionIn Visual Studio 11 Express, each of the language types contains templates for several kinds of Metro applications (Application, Grid Application and Split Applications), plus Class libraries and Unit Test libraries. C++ adds WinRT Component DLL and DirectX Application templates. JavaScript adds a template for a Navigation application. Most of these are Metro versions of applications with which you're probably familiar. A Grid Application, for instance, is the equivalent of a multiform application (though in Metro-speak, these are "multipage" applications); a Split Application is the Metro version of what most developers would call a Master-Detail form. The Visual Basic, C#, C++ and F# applications project templates for the Metro applications look very much alike: a XAML file and a code file. The JavaScript template for Metro-style applications, on the other hand, is something … different. The JavaScript project template merges several programming paradigms to which .NET developers have grown accustomed. Web developers, for instance, will see some familiar items: There's an HTML file and a .js file, a CSS folder and an images folder -- all components of a typical Web application. There's also a package.appxmanifest file that's part of any Metro application (it lists the private assemblies your application requires and the application's capabilities). However, there's also a References folder that only non-JavaScript programmers will recognize for adding references to Class libraries (you can create those libraries in the other official languages). While it's difficult to predict what changes are coming your way from what's missing, there's no design view for the HTML file (by default the page opens in Source view); but you can open it in Expression Blend, the XAML developer's friend. The template also includes a winjs folder with two subfolders (CSS and js) to hold Windows/Metro-specific resources. As the References and winjs folders indicate, these applications are tied to the Windows Metro/.NET environment and can't be repurposed for a Web audience. While it's obviously a good thing to provide developers with flexibility, the question is: Who will find this combination of programming paradigms an attractive way to create Windows 8 applications? Will the number of developers creating Metro-style applications exceed the number of developers using F#, for instance? It's an Asynchronous WorldMetro aside, there's lots of goodies in the preview that developers will find interesting, both in the framework and in Visual Studio. Starting with the .NET Framework 4, Microsoft has been consistently providing new support for parallel processing. Much of that's been driven by the desire to reduce the effort required to create applications that can exploit multi-processor computers (if you want to introduce a bug you'll never track down, write a multi-threaded application). But asynchronous processing is also another step in the process of disconnecting the user interface from the code that drives it: It reduces the entanglement between code and user actions. Effectively, the new world consists of processes running on different processors or in different environments that may need to interact -- sometimes across the network. In this world you can't execute two lines of code and assume that their results will come back in the order you issued them (and you may have to wait a very long time to get your result). The new .NET Framework 4.5 Async and Await keywords (in both C# and Visual Basic) make it much easier to implement asynchronous programming by taking care of many issues in running processes in parallel. But that doesn't mean the issues go away: Developers will still need to manage what happens when processes are launched but don't complete before the next line of code executes. To take advantage of asynchronous processing, you need to change your method by adding the Async keyword to the method declaration and returning a Task object of whatever data your method originally returned. This method returned a Customer object before being converted to an asynchronous method (I've also followed the naming convention of adding "Async" to the end of any asynchronous function): Public Async Function GetCustomerAsync(CustId As String) As Threading.Tasks.Task(Of Customer) Within the method, you need to use the Await keyword on any long-running methods you call. This example uses the Await keyword with the Factory StartNew method to create a new Customer object: cust = Await Threading.Tasks.Task(Of Customer). Factory.StartNew(Function() New Customer(CustId)) Now, when the Customer object is created with the Await keyword, .NET will give up the main thread while waiting for the call to complete and execute the New Customer on a new thread. When that thread completes, processing will continue to the following line. In ASP.NET and ASP.NET MVC, while you don't need the Await and Async keywords, you can now read and write your input stream asynchronously and give up your processing thread in between. This will be a boon for busy sites talking to clients over slow connections: You won't have to give up a thread to a request that's dribbling in (or out) slowly. It also means that if you have a long-running process on the server, you can send the client data as it's developed to keep the client from timing out -- again, without tying up a thread for the whole length of the process. If you're comfortable with adding your own HTTP modules to your site, you can create asynchronous HTTP modules as well. New for ASP.NETWhile there are no new controls in the Toolbox, there are some new features for ASP.NET developers, and two of them are security related. In ASP.NET, the default validation throws an exception if a user enters anything that looks like HTML markup into a page. There are cases, however, where you want to give users that ability, at least for selected fields in the form. ASP.NET 4.5 allows you to configure your site for lazy validation so that inputs aren't validated until you touch them in your server-side code (in earlier versions of ASP.NET, all data on the page was validated at once). You can use the Request object's Unvalidated property to access data with triggering validation for that data. More usefully for ASP.NET developers, you can set the ValidateRequest property on ASP.NET controls to Disabled to get the same result with selected controls on the page. As a bonus, the Microsoft AntiXSS module for preventing cross-site scripting is now automatically included with ASP.NET and can be added as an HttpModule in your web.config file. There are two other features that save you some typing in your .aspx file when referencing stylesheet or JavaScript files: a script or link element's src attribute now only needs to reference a folder on an IIS site to pick up all the .css and .js files in the folder. If you're willing to add four or five lines of code to your Global.asax file, you can bundle any arbitrary collection of .css or .js files into a file that will also be shrunk to its minimum size. Your link and script elements can then reference the bundle, again speeding up downloads. Data binding becomes even more flexible in ASP.NET 4.5. You can set the SelectMethod on a databound control to any method that returns a generic collection, eliminating the need for a DataSource. Setting the control's ModelType property to the datatype of the object in the collection allows the control to figure out at design time what properties the class has. The control can simply reference the properties on the class returned from the method. If the collection implements the IQueryable interface (if, for instance, it's the output of a LINQ query running against an Entity Framework database), the binding is two-way: paging, sorting and updates will be handled automatically. Adding an attribute to the method's parameters will cause ASP.NET to find and set the method's parameters for you. This example, for instance, creates a method that searches for a control on the page called RegText and uses it to set the method's Region parameter: Public Function GetCustomers( <System.Web.ModelBinding.Control("RegText")> Region As String) _ As IQueryable(Of Customer)() These changes effectively continue the process of separating the UI from the code that uses it. The Control attribute in the previous example associates more of the work with the class and less with the UI, making the class more reusable. The class with all of its data annotations can be moved from one application to another without having to add a lot of functionality to the UI. But the UI is still obligated to use the right names (such as GetCustomers and "RegText" in the previous examples) to take advantage of the class. If you write your own binding expressions in your .aspx file, you've done so without the benefit of IntelliSense, passing a string to the Bind (or Eval) methods, like this: <asp:TextBox The new ASP.NET 4.5 syntax moves the binding target out of the quotation marks and gives you IntelliSense support. This example binds the Text property to the CustomerId property on the cust object: <asp:TextBox ID="CustId" runat="server" Text=<%# cust.CustomerID %> /> Building ServicesWebSockets is a W3C standard (or becoming one) critical for service-oriented architecture (SOA). Right now, to create an interoperable service requires Web services. Because Web services uses a text-based XML message format and communicates using the world's slowest protocol (HTTP), communication is both slow and one-way. The WebSockets standard uses TCP, supports both text and binary transmissions, and can create bi-directional communication channels that allow the service to call the client when the service has data to share (the client still has to open the communication channel). ASP.NET 4.5 makes creating a WebSockets client or service a matter of writing three or four lines of code (and, of course, the methods you associated with sending and receiving requests run asynchronously). Coupled with general support for WebSockets in existing browsers, creating more powerful AJAX applications becomes much easier. The ability for the service to call the client when data's ready can make applications more responsive (by eliminating the need for the client to wait for a response) and more scalable (by reducing the need for the client to ping the service continuously to check for results). While the focus for WebSockets has been on AJAX, WebSockets works anywhere. Coupled with some technology for specifying message formats, WebSockets could become a key technology for creating interoperable services. This leads to Windows Communication Foundation (WCF) changes that, not surprisingly, support creating WebSockets-based services (also User Datagram Protocol for "fire and forget" services). Also added -- again, not surprisingly -- is support for asynchronous transmission when the client is slow to accept a response. The real gains for developers, however, are in the simplification of the WCF configuration file and much better validation of WCF configuration settings during builds. There's some hope that ordinary mortals will be able to debug WCF configurations. Visual Basic Gets ParityIn addition to the support for asynchronous programming, Visual Basic gets some features that C# developers have had for some time. The most important is the Yield keyword, which makes it extremely easy to create a method that iterates through a collection. The Yield keyword returns a value from within a loop (or any series of instructions), and remembers where it left off. The next time the method is called, the code resumes from where the last Yield keyword executed and remembers the state the code was left in, eliminating the need for static variables that keep track of that state. Under the category of "solving problems you didn't think anyone had," Visual Basic also gets the C# Global namespace. When Global appears in the full name of a class, it indicates the start of the .NET namespace hierarchy. You may need this if, in your class hierarchy, you define a namespace with the same name as one in the .NET hierarchy -- "System," for instance. That might give you difficulty in accessing the System.Int32 class, because .NET would look for the Int32 class in your System namespace, rather than the .NET namespace. Global.System.Int32 lets you specify that you want a class from the .NET namespace. Visual Studio EnhancementsVisual Studio also gets some cool new enhancements in version 11. Regardless of language or development environment, all developers get a new feature on the context menu for methods and properties: view Call Hierarchy (see Figure 2). Selecting this option opens a new window in Visual Studio that shows all the places where the method is called and all the calls that the method makes. You can drill down through both collections of methods to see where those methods are called from and what calls they make. Faced with figuring out some code you've never seen before, this single feature could save you enormous amounts of time. All of the previews (including the Express version) have two new menu choices: Run Code Analysis and Launch Performance Wizard (see Figure 3). These tools provide feedback on the quality of your code and what's going on under the hood. Like the accessibility analysis tool, these won't replace dedicated tools, but you might find insights into your code that you wouldn't get any other way. For ASP.NET developers, the big change is probably that the Visual Studio Development Server has been supplanted by IIS Express. IIS Express provides all the features of a Web Server (the Development Server only processed HTTP requests) and should be 100 percent compatible with the full version of IIS (the Development Server applied security slightly differently than IIS did). I'm a big fan of ASP.NET user controls; so, apparently, is the Visual Studio team. You can now arbitrarily select any set of HTML in a page, right-click and select Extract to User Control, as shown in Figure 4. A dialog box will pop up to let you assign a name and, a few seconds later, your project will have a new user control and your page will be using it. There's not much I can imagine being done with Intelli¬Sense, but Visual Studio 11 does show some major improvements in .aspx files. The Intelli¬Sense lists when adding elements are much shorter, because invalid elements are now automatically omitted. If you change an element type, its corresponding open or close tag is automatically rewritten. JavaScript development moves closer and closer to what server-side developers take for granted in compiled data-type languages. JavaScript brace matching is now included and IntelliSense in JavaScript seems marginally better (there are essential limitations to IntelliSense in a non-data-type language). The big improvement in JavaScript is a Go To Definition option when you click on a variable or function name that takes you to the item's declaration. A number of changes to Testing promise to make life easier and integrate better with developer's lives. Test Manager adds exploratory tests to both the Professional and Ultimate editions. With exploratory tests, you can start running your app and associate it with, for instance, a user story. Any actions you perform are recorded in a test script you can incorporate into your automated testing. As you find bugs, you can record them and they'll automatically be associated with the user story. The feature I like best? If you find something interesting during the test run and want to wander off on a tangent, you can stop recording to play with your application. If you're using test agents running on multiple computers to provide stress testing, Lab Manager will now reach over to those computers and install the agents for you. If this is a review (and it is), I should be able to sum up the product's value. Even if you don't intend to upgrade to the .NET Framework 4.5, the upgrade to Visual Studio 11 is well worth considering (something that I didn't feel about the previous version of Visual Studio). The upgrade to the .NET Framewrok 4.5 isn't as easy a choice. If you're building services, the enhancements to WCF are excellent, and it's worth moving to the new version as soon as it comes out. The other changes are certainly worthwhile and you won't be sorry if you upgrade, but I haven't seen anything compelling -- unless you want to create Windows 8 Metro-style applications. If that's your goal, then of course you'll want both Visual Studio 11 and the .NET Framework 4.5. As to creating applications in JavaScript... I'll let you make up your own mind. I've had enough abuse. These sound like great features but I can't help but wonder why these can't be packaged up and added to 2010? I know the general motivation for creating a new product (cash!) but I think you can get more devs/shops using the newer features if they didn't have to get a new VS version. I wonder which is more beneficial to MS: More cash or new features in widespread use. > More TechLibrary > More Webcasts
http://visualstudiomagazine.com/Articles/2011/12/01/Previewing-the-Future.aspx?Page=4
CC-MAIN-2013-20
refinedweb
3,790
53.41
CompTIA Training Classes in Potsdam, Germany Learn CompTIA in Potsdam, Germany and surrounding areas via our hands-on, expert led courses. All of our classes either are offered on an onsite, online or public instructor led basis. Here is a list of our current CompTIA related training offerings in Potsdam, - Introduction to Python 3.x 27 April, 2020 - 30 April, 2020 - CompTIA Security+ (Exam SY0-501) 20 April, 2020 - 24 April, 2020 - RED HAT SATELLITE V6 (FOREMAN/KATELLO) ADMINISTRATION 6 July, 2020 - 9 July, 2020 - ASP.NET Core MVC 27 July, 2020 - 28 July, 2020 - In Python, the following list is considered False: False, None, 0, 0.0, "",'',(),{},[]! Invoking an external command in Python is a two step process: from subprocess import call call(["ls","…
https://www.hartmannsoftware.com/Training/CompTIA/Potsdam-Germany
CC-MAIN-2020-16
refinedweb
125
59.94
Why the result is "125 256" ?Why the result is "125 256" ?Code: const int i=125; int *j = const_cast<int *> (&i); (*j) = 256; printf("%d %d\n", i, *j); Printable View Why the result is "125 256" ?Why the result is "125 256" ?Code: const int i=125; int *j = const_cast<int *> (&i); (*j) = 256; printf("%d %d\n", i, *j); Yes, I think the standard says that this is undefined. The standard is highly technical though. Google for "C++ standard draft" and see what it has to say about const_cast and cv_qualifiers. In any case, const_cast was probably not meant for you to modify variables that you have declared const (they may be put into some read-only memory, and therefore the second pointer, if you try to change the value through it, needs to point to some other write-able memory). A valid use is working around old 3rd-party libraries that haven't been written in a const-correct way. There may be situations where you need to cast away const-ness, but not to change the value, but to be able to use this old code. Here's a hypothetical example. Code: #include <iostream> /*suppose this is from a third-part library, that you have paid for and can't change. Note, that the first argument is non-const, although the function clearly won't change it. Function returns pointer to 0-terminator if character not found. */ char* Find(char* p, char ch) { for ( ; *p; p++) if (*p == ch) break; return p; } /*suppose we wnat to write a better find function, that can start search at a chosen position in the string. Now we don't want to rewrite everything, but use the existing library as much as possible. But we certainly want to write const-correct code. The function doesn't change the string, hence it is declared constant. */ char* better_find(const char* p, char ch, unsigned from) { /*Problem: can't call Find without const_cast, because p is constant, and can only be passed to functions that guarantee not to change it (const argument). It's OK though, because we know for sure, that Find will not try to modify it. */ return Find(const_cast<char*>(p) + from, ch); } int main() { char s[] = "Don't cast away constness unless you need to."; std::cout << s << '\n'; std::cout << Find(s, 'c') << '\n'; std::cout << better_find(s, 'c', 10); std::cin.get(); } Why not? const doesn't say you can't change something, it just says you promise not to. You broke your promise, so anything goes. ;)Why not? const doesn't say you can't change something, it just says you promise not to. You broke your promise, so anything goes. ;)Quote: Why the result is "125 256" ? > Why the result is "125 256" ? Because your code is broken. On seeing the code, the compiler generated printf("%d %d\n", 125, *j); on the basis that i is a const, and it knows full well what the value is, so it just substituted it inline without having to go looking in a storage location. If you want to smash the type system apart with ill advised casting, then that is what you get. that is quite interesting.... the last guy was right although without his answer I woulda shrugged my shoulders as well for a good 30 mins... pesky compilers optimizing all over the place. This will cause the compiler to behave the way you perhaps were expecting.This will cause the compiler to behave the way you perhaps were expecting.Code: volatile const int i=125; int *j = const_cast<int *> (&i); (*j) = 256; printf("%d %d\n", i, *j); ie: Code: output: 256 256 Why bother with using const? Because it helps protect you from making mistakes. Your compiler is really quick and really smart with C++ so you should take advantage of that and use it as a tool to make programming easier. :) I guess l2u meant, why bother with const, if you end up with volatile const. volatile const does have its uses. This isn't one of them.
http://cboard.cprogramming.com/cplusplus-programming/88318-interesting-question-about-const-printable-thread.html
CC-MAIN-2015-14
refinedweb
687
73.37
. enter esc on_change. a-z0-9A-Z. show_overlay show_quick_panel...). Since "escape" is able to close the panel you could've looked into the keybindings and see this: { "keys": "escape"], "command": "hide_panel", "args": {"cancel": true}, "context": { "key": "panel_visible", "operator": "equal", "operand": true } ] }, Here is my example code (again, using sublime_lib): class TestCommandCommand(sublime_lib.WindowAndTextCommand): def run(self, param=None): print(self._window_command) self.window.show_input_panel("Enter something", "test", self.on_done, self.on_change, None) def on_done(self, result): print("result", result) def on_change(self, text): print("current text", text) if text.endswith("x"): self.window.run_command("hide_panel", {"cancel": True}) Ah, should have thought of looking at the keybindings for escape, thanks for the pointer as well as the example code. I didn't fully understand the context stuff previously. Very helpful! I'd also be interested to know if there's a way to accept input without showing the input panel (any simple mode examples out there?). All of the ones that I know about end up covering up at least a little of the visible screen and might end up occluding a jump target. I was able to get this working with the suggestion from FichteFoll and close out the issue I was working on (). One caveat to this behavior is that calling "hide_panel" this way seems to always call the on_cancel method you've given to show_input_panel (even if you pass in "False" to the cancel argument). It also seems to run asynchronously in another thread so doing something in on_change, and then calling hide_panel will cause the on_cancel method to run afterwards. For me this was causing my changes to get overwritten because of what I was doing in the on_cancel, so I had to save off information in the on_change and then react to it if it's available in the on_cancel flow. It worked, but was a little less clean than I was hoping for in the code. The user experience is much better though, which is the most important thing . Thanks again for the help! I am wondering if there is a way to get more than just text.I want to implement a history stack on my little search bar using the up and down arrows. Is there a way in the onChange method to detect other input other than text changes? I get one change notification when the user presses the up arrow but I can't figure out how I could detect that the user pressed up or down?Any guidance would be helpful. Ian Maybe you want to take a look at sublime.show_quick_panel? The ST3 version also supports a selected_index parameter which you can set to be the last - or just start at the top. sublime.show_quick_panel selected_index Ian - did you get anything working for input panel history? I too want to implement this feature! I was hoping there was something built in like that used by Find (Cmd-f).
https://forum.sublimetext.com/t/close-show-input-panel-panel-in-on-change-callback/8488/2
CC-MAIN-2017-09
refinedweb
489
62.17
LoadLibrary failingHi, I've sorted this out. I was mixing 32 and 64 bit code. Thx FC. LoadLibrary failingHi, I have a simple program that is failing all the time. Here it is #include <windows.h> ... Client and nonclient areasHi, I'm using Visual Studio Community 2015. And I didn't know you could draw a toolbar control tha... Client and nonclient areasHi, I want to draw my own toolbar that contains a number of buttons. I have drawn a bitmap with... Mouse pos and bitmapsHi, How can I tell if the cursor is inside a screen bitmap ? Thx FC. This user does not accept Private Messages
http://www.cplusplus.com/user/Fractal_Cat/
CC-MAIN-2016-30
refinedweb
107
77.94
Visit the Ultimate Grid main page for an overview and configuration guide to the Ultimate Grid library. This is a brief introduction to getting started with the Ultimate Grid. You should find it pretty straightforward to get going with the grid in your MFC-based application after reading these short tutorials on using the grid in view and dialog based applications. You can also download the CHM documentation, which contains a Getting Started section as well as a more detailed tutorial on displaying your first grid in an MFC application. You can build the core Ultimate Grid source into a static library or DLL with the projects provided and link to it in that format, but we'll start here with examples of using the grid source code directly in your project. To begin, create a new SDI or MDI based MFC application project. Add the CPP files contained in the Ultimate Grid source directory, and the .h files contained in the include directory. You will probably need to set up an Additional Include path to the Ultimate Grid\Include directory in your projects preprocessor settings. As noted on the main page, the source code and sample projects zip files should integrate to form the following directory structure: That done, your project should build cleanly - compiling the basic grid source code. Next, we'll need to integrate a grid (CUGCtrl) derived class into the project. We've provided a MyCug 'skeleton' class in the Skel directory. This CUGCtrl derived class has all the grid override and setup function overrides you'll need to work with, allowing you to customize the behaviour of you grid with a minimum of additional coding. CUGCtrl MyCug Copy the MyCug.h and MyCug.cpp files to your project directory, and add the files to your project. Next, you'll need to include the MyCug.h file in the app and view classes - like so: // YourApp.cpp : Defines the class behaviours for the application. #include <span class="code-string">"stdafx.h"</span> #include <span class="code-string">"mycug.h" // add the header to the YourApp and YourView cpp files </span> #include <span class="code-string">"test.h"</span> #include <span class="code-string">"MainFrm.h"</span> Next, in the header for the view class, declare an instance of the MyCug grid class: class CMyView : public CView{ ... //******** Create a New Grid Object ******** MyCug m_grid; ... Next, in the OnCreate method of the view, we can create our grid: OnCreate int CMyView::OnCreate(LPCREATESTRUCT lpCreateStruct) { if (CView::OnCreate(lpCreateStruct) == -1) return -1; // TODO: Add your specialized creation code here m_grid.CreateGrid( WS_CHILD|WS_VISIBLE, CRect(0,0,0,0), this, 1234 ); // arbitrary ID of 1234 - season to taste return 0; } And the last bit will tie the grid to the client area of the view - add an OnSize handler, and the following code: OnSize void CMyView::OnSize(UINT nType, int cx, int cy){ CView::OnSize(nType, cx, cy); // TODO: Add your message handler code here m_grid.MoveWindow( 0, 0, cx, cy ); // size the grid control to // fit the view } At this point, should you compile and run the application, the view should contain something almost, but not quite entirely, unlike a grid. So, let's set up some columns and rows and populate cells with data. We'll place this code in the OnSetup notification of the derived grid class: OnSetup /*************************************************** OnSetup This function is called just after the grid window is created or attached to a dialog item. It can be used to initially setup the grid ****************************************************/ void MyCug::OnSetup(){ int rows = 20; int cols = 20; int i,j; CString temp; CUGCell cell; // setup rows and colums SetNumberRows(rows); SetNumberCols(cols); // fill cells with data GetCell(0,1,&cell); for (i = 0; i < cols; i++) { for (j = 0; j < rows; j++) { temp.Format("%d",(i+1)*(j+1)); cell.SetText(temp); SetCell(i,j,&cell); } } // add column headngs for (i = 0; i < cols; i++) { temp.Format("%d",(i+1)); cell.SetText(temp); SetCell(i,-1,&cell); } // add row headings for (j = 0; j < rows; j++) { temp.Format("%d",(j+1)); cell.SetText(temp); SetCell(-1,j,&cell); } } Now, on running the program, you should see the grid populated with 20 rows and columns of numeric data: This is a screenshot of the Ex2 project, found in the Examples\BasicMDI directory of the samples download. For a dialog based application, you'll want to attach the grid to a control placed on the dialog with the resource editor. Here we've just expanded the default 'TODO' static text item, and given it an ID of IDC_GRID1: Leaving out the steps shown above (including the source and MyCug files etc), the difference here is in grid creation. In the OnInitDialog method of our dialog, we'll call the Attach method of the grid instead of the Create method used in the view example: OnInitDialog BOOL CEx3Dlg::OnInitDialog() { CDialog::OnInitDialog(); ... // TODO: Add extra initialization here // attach the grid to the static control m_ctrl.AttachGrid(this, IDC_GRID1); return TRUE; // return TRUE unless you set the focus to a control } From here, initialization can be coded in the MyCug::OnSetup routine as in the above view example. You can refer to the Ex3 project in the Samples\BasicDlg directory of the samples download for details. MyCug::OnSetup There are times when you might want to use a member pointer in order to destroy and recreate the grid on a dialog. The problem here is that once attached to the static control window, that window is destroyed along with the grid on calling delete. So, the control needs to be recreated and assigned a known ID so that a new grid can be reattached. delete Here is one solution:); } } Two projects are included that will build the static and dynamic (DLL) libraries and place them in the Lib and Dlls folders of the Ultimate Grid directory. As installed, these projects will reference the core Source and Include files and enable you to link without compiling these in your project, but will not incorporate the add on cell types and data sources, which will still need to be added to your project as needed. (See the comments section of the cell types article for instructions on incorporating the advanced cell types in a library build). The static library project in the BuildLib directory has 16 configurations (batch mode build is recommended) that accommodate the various combinations of MFC linkage, Ole/non-Ole, Unicode/MBCS, and debug/release configurations. Each build configuration is set to create the *.lib file and place it in the Lib directory. The DLL project in the BuildDLL directory contains only four configurations, as it assumes all builds link dynamically to the MFC library and are Ole enabled. On creation, the *.lib, *.exp, and *.dll files will be placed in the DLLs directory. You can now remove the Ultimate Grid\Source *.cpp files from your project. You'll still need a path to the Ultimate Grid\Include directory, as the headers will still be referenced through the CUGCtrl derived class MyCug. We've provided a header file that should be added to your project that will select the correct module to link against based on your projects current build settings - uglibsel.h can be found in the Ultimate Grid\Lib directory. We recommend adding this file to your stdafx.h header. If you use this file directly from the Lib directory, you'll need an additional include path for this as well. uglibsel.h stdafx.h If you want to link to the DLL version of the grid, your preprocessor settings should contain the define _LINK_TO_UG_IN_EXTDLL. This will define the UG_CLASS_DECL declaration as AFX_CLASS_IMPORT for the Ultimate Grid classes. _LINK_TO_UG_IN_EXTDLL UG_CLASS_DECL AFX_CLASS_IMPORT If using uglibsel.h, you will also need to define a path to the DLLs directory as an additional library directory in the linker settings of your project. Alternatively, you can choose to use the following code in your stdafx.h file, replacing the relative paths with their correct equivalents for your project: #ifdef _DEBUG #ifdef _UNICODE #pragma message("Automatically linking with UGMFCDU.lib - " + "please make sure this file is built.") #pragma comment(lib, "..\\..\\DLLs\\UGMFCDU.lib") #else // not unicode #pragma message("Automatically linking with UGMFCD.lib - " + " please make sure this file is built.") #pragma comment(lib, "..\\..\\DLLs\\UGMFCD.lib") #endif // _UNICODE #else // RELEASE #ifdef _UNICODE #pragma message("Automatically linking with UGMFCU.lib - " + "please make sure this file is built.") #pragma comment(lib, "..\\..\\DLLs\\UGMFCU.lib") #else // not unicode #pragma message("Automatically linking with UGMFC.lib - " + "please make sure this file is built.") #pragma comment(lib, "..\\..\\DLLs\\UGMFC.lib") #endif // _UNICODE #endif // _DEBUG Of course, the actual DLL that will be used will need to be in the system path or available in the project directory at runtime. Again, you can now remove the Ultimate Grid\Source files from your project, keeping the additional include path to the Ultimate Grid\Include directory in your preprocessor settings. Add an additional library directory path to the linker settings (set this to the Ultimate Grid\Lib directory) and include the uglibsel.h header from that directory or copy the file to your project directory. Your project should now build and link with the correct static library based on your project settings. Initial CodeProject release August 2007. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) 1>MyCUG.obj : error LNK2001: unresolved external symbol "protected: virtual int __thiscall CUGCtrl::PreCreateWindow(struct tagCREATESTRUCTW &)" (?PreCreateWindow@CUGCtrl@@MAEHAAUtagCREATESTRUCTW@@@Z) void MyCug::OnDClicked(int col,long row,RECT *rect,POINT *point,BOOL processed) { //if (!updn){ SortBy(col, UG_SORT_DESCENDING); //SortBy(col, UG_SORT_ASCENDING); RedrawAll(); //} UNREFERENCED_PARAMETER(col); UNREFERENCED_PARAMETER(row); UNREFERENCED_PARAMETER(*rect); UNREFERENCED_PARAMETER(*point); UNREFERENCED_PARAMETER(processed); } General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/20187/The-Ultimate-Grid-Beginner-s-Guide
CC-MAIN-2019-13
refinedweb
1,653
53
When adding tag or updating description, lp_save() gives "HTTP Error 412: Precondition Failed" Bug Description Getting this error consistently when attempting to add a tag using the following code (which worked last time I ran it a few weeks ago). if not tag in bug.tags: # Workaround bug #254901 print " ---> Tagged ",tag I also tested with the commented line instead of the workaround code; same error. Traceback (most recent call last): File "./process- bug. File "/home/ self. File "build/ File "build/ File "build/ launchpadlib. Related branches - Graham Binns (community): Approve (code) on 2010-03-16 - Diff: 362 lines (+222/-29)3 files modifiedsrc/lazr/restful/_resource.py (+75/-28) src/lazr/restful/example/base/tests/entry.txt (+146/-0) src/lazr/restful/version.txt (+1/-1) - Gary Poster (community): Approve on 2010-03-17 - Diff: 246 lines (+140/-26)5 files modifiedlib/canonical/launchpad/pagetests/webservice/apidoc.txt (+1/-0) lib/canonical/launchpad/pagetests/webservice/conditional-write.txt (+103/-0) utilities/apidoc-index.pt (+27/-8) utilities/create-lp-wadl-and-apidoc.py (+8/-17) versions.cfg (+1/-1) Attached is a minimal script to reproduce the issue. It seems the issue is not making multiple changes, nor calling lp_save() multiple times. It is sufficient to simply reference a field once before and once after calling lp_save(). If I re-get the bug after the lp_save(), it seems to work (attached script is applying both tags properly) Here is a log from the working script in comment #5 Ran into the 412 error again with a script that doesn't set any tags, only updates description. Also gets the error after a call to lp_save(). This seems to have gone away now. I get this problem when trying to set a duplicate: >>> b.duplicate_of >>> b.duplicate_of = 341478 >>> b.lp_save() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/ URI( File "/usr/lib/ 'PATCH', extra_headers= File "/usr/lib/ raise HTTPError(response, content) launchpadlib. And unlike for comments, it does not actually set the duplicate, so the HTTPError cannot just be ignored. I cannot confirm this, trying to change the duplicate_of attribute of a bug works for me, I don't get a 'HTTP Error 412'. Martin, as a side note, giving the bug id in your last example won't work, it has to either >>> b.duplicate_of = launchpad. or >>> b.duplicate_of = "https:/ Markus I found a workaround for this now. Previously I was doing bug = self.launchpad. for a in bug.attachments: if a.title in (...): # MARKER1 if not bug.duplicate_of: This gave me the afforementioned 412 error. After a looong time of fiddling, I found out the following: * Deleting attachmends and then changing properties leads to this error in lp_save() * Calling lp_save() twice in a row leads to this error. So what I did to circumvent the crash: * at MARKER1: get a new bug object with "bug = self.launchpad. * at MARKER2: if bug._dirty_ I really shouldn't have to explicitly query a private attribute (_dirty_ I think https:/ Maybe some insider of launchpad's code can mark them as duplicates and rise priority of this bug, for me it looks like a design issue which should be fixed in an early stage of launchpadlib and/or the API Argh, this is a nuisance. It also happens in this code: bug = self.launchpad. However, the latter was already fixed in an earlier package version than the \ one in this report. This might be a regression or because the problem is \ in a dependent package.' % master, bug.tags = bug.tags + ['regression- print bug._dirty_ I just got yet another incarnation of that in the retracers. Good debugging hint from James: james_w| pitti: and you can "import httplib2; Did anything change in LP recently in that regard? The retracers crash all over the place since today, while they were grinding happily for over a week before. I added some debugging code to print the exception's message and content fields, but both are empty. This is from very simple code: def _mark_dup_ '''Mark crash id as checked for being a duplicate.''' bug = self.launchpad. if 'need-duplicate x = bug.tags[:] # LP#254901 workaround I. ee. I just create a bug object, change .tags, and save it again. No re-use, or other things. Do you need more information about this? Or is there a workaround for this? This is quite urgent, since crash reports bitrot veeery fast. I've finally been able to duplicate this. Here's a very edited HTTP dump that shows the problem === GET /beta/bugs/353805 HTTP/1.1 if-none-match: "7f7f5e3c50396d HTTP/1.1 200 Ok Etag: "3c1d2fa23419ec --- PATCH /beta/launchpad if-match: "34ed5fed814bac HTTP/1.1 412 Precondition Failed === The values for If-None-Match and If-Match are totally random. They don't show up anywhere in the cache or anywhere else in the HTTP dump. In fact, even the initial requests to /beta/ have a random If-None-Match. If I clear the cache, I don't send any If-None-Match headers, but eventually I do send a random If-Match and I get the same error. OK, I found at least one problem that causes 412 errors. When you PATCH a representation you get back 209 ("Content Returned") with a new representation. The http_etag in this representation is the random ETag that's sent on a subsequent PATCH and then you get 412. --- PATCH /+bug/353805: Sending If-Match with http_etag "84bb970299f2ae 209 Content Returned: New representation has http_etag "53a61af57b930f --- PATCH /+bug/353805: Sending If-Match with http_etag "53a61af57b930f 412 Precondition Failed --- The http_etag here is probably generated from some sort of transitional state that's neither the original state nor the final state. I'm looking into it now. This is a problem specific to Launchpad bugs. The problem does not exist in the lazr.restful example web service, and it doesn't exist for other Launchpad resources (such as people). Everyone who's commented on this thread has had a problem with bugs. The culprit is the date_last_updated field. Here are the only differences between the version of the bug received along with a 209, and the same bug retrieved an instant later with a fresh GET request. date_last_updated: 2006-07- http_etag: "490b68f89bc09b date_last_updated is updated after the ETag is calculated, possibly in response to the ObjectModifiedE The culprit is a combination of the unmarshalled_ I have a fix but since the lazr.restful example service doesn't use OMEs at all, a proper test will take a little while. Awesome! Looking forward to trying it out. On Thu, Apr 23, 2009 at 05:53:03PM -0000, Leonard Richardson wrote: > The culprit is a combination of the unmarshalled_ > ObjectModifiedE > lazr.restful didn't hear about it, and only removed the fields the > client modified from unmarshalled_ > date_last_updated stays in there. > > I have a fix but since the lazr.restful example service doesn't use OMEs > at all, a proper test will take a little while. > > -- > When adding tag or updating description, lp_save() gives "HTTP Error 412: Precondition Failed" > https:/ > You received this bug notification because you are a direct subscriber > of the bug. > > Status in Launchpad web services client library: Triaged > > Bug description: > Getting this error consistently when attempting to add a tag using the following code (which worked last time I ran it a few weeks ago). > > if not tag in bug.tags: > #bug.tags. > # Workaround bug #254901 > tag_list = bug.tags > tag_list. > bug.tags = tag_list > > print " ---> Tagged ",tag > bug.lp_save() > > I also tested with the commented line instead of the workaround code; same error. > > Traceback (most recent call last): > File "./process- > bug.append_ > File "/home/ > self.bug.lp_save() > File "build/ > File "build/ > File "build/ > launchpadlib. Adding a python-launchpadlib task in Ubuntu, as I will want to port your changes to the Jaunty package. I'll package the new python-launchpadlib for Karmic with lazr-restfulclient and any other new packages that will be needed, but doing that for Jaunty is not really possible. Thanks, James This is a server-side bug, so there's nothing to do for launchpadlib. I've gotten my branch reviewed and landed in the main lazr.restful branch. Because Launchpad doesn't use that branch yet I'm going to be backporting it. Ah, sorry, I got confused between lazr.restful and lazr.restfulclient. Thanks for fixing this, I'm sure that there will be a lot less hair torn out by some of my fellow developers now. Thanks, James I've backported the fix to the lazr.restful branch used by Launchpad. I still get daily crashes from the Apport retracer in this simple code: bug = self.launchpad. if self.arch_tag in bug.tags: x = bug.tags[:] # LP#254901 workaround File "/home/ URI( File "/home/ 'PATCH', extra_headers= File "/home/ raise HTTPError(response, content) launchpadlib. So it doesn't actually seem fixed yet? I've just hit Precondition Failed as well, just with branch. branch. though maybe it's a case of different bug, same manifestation. Any advice would be appreciated, as I know of no other way to achieve this. Thanks, James Hi, I may have spoken too soon. I can't reproduce the error in testing, so I've added some code to give more information if this happens again and will wait and see what happens. Thanks, James Reopening. When I remove the workarounds for this bug in Apport's launchpad module, I still get the same problem: File "apport/ bug.lp_save() File "/usr/lib/ URI( File "/usr/lib/ 'PATCH', extra_headers= File "/usr/lib/ raise HTTPError(response, content) HTTPError: HTTP Error 412: Precondition Failed Response headers: --- content-length: 0 content-type: text/plain date: Wed, 20 Jan 2010 11:48:55 GMT server: zope.server.http (HTTP) status: 412 vary: Cookie, via: 1.1 wildcard. x-content- x-powered-by: Zope (), Python () --- Response body: --- --- This is hitting me too in Hydrazine, and it's fairly easily reproducible there if you make changes quickly. It does seems timing-dependent which would be consistent with it relating to a last-modified field stored on the server: it seems like if you get two updates within a 1s window then it crashes. Apport already has workarounds for this, but apparently still not enough. This still keeps people from submitting debug data to Launchpad, see bug 528680. Can the priority of this please be bumped? On Tue, 02 Mar 2010 11:11:47 -0000, Martin Pitt <email address hidden> wrote: > Apport already has workarounds for this, but apparently still not > enough. This still keeps people from submitting debug data to Launchpad, > see bug 528680. > > Can the priority of this please be bumped? Martin, how sure are you that these are spurious? Is there a chance that they are actually legitimate errors due to something else modifying the bug between you fetching it and saving it? Thanks, James. On Tue, 02 Mar 2010 12:39:09 -0000, Martin Pitt <email address hidden> wrote: >. Ok, thanks. I don't do either of these things in scripts myself, so haven't seen it. I did however do things with branches over the API that was getting 412 a lot, but this came down to automated tasks that LP does in the background the modify the branch happening in about the same amount of time after pushing the branch that my script took to get to the API stage, leading to legitimate 412 errors that were easy to work around once I understood what was going on. Thanks, James Right, I can still reproduce by doing bug = lp.bugs[12345] bug.newMessag bug.tags = ["foo"] bug.lp_save() because date_last_updated and date_last_message change when we call the newMessage method, but we don't update the representation. The original fix for this bug was server side. Leonard, is there something we can change server side for this? On the client side, we could fix this problem by changing NamedOpearation which currently does if response.status == 201: return self._handle_ else: if http_method == 'post': # The method call probably modified this resource in # an unknown way. Refresh its representation. return self._handle_ where we could move the the if http_method block outside the other if block, so that we refresh when we create the new message. What's the best approach? Thanks, James If creating a new resource secretly modifies the parent resource, then the client-side fix is the correct thing to do. But I don't know if that would help the apport people. Can someone show me the apport code that has the problem, and the workarounds devised for it? https:/ I believe all the API using code for apport can be found in http:// It all looks pretty straightforward to me. You could grab lp:apport and try running the testsuite, and find and remove the workarounds if it doesn't show the failures. Thanks, James So, Martin helpfully labelled all the places where workarounds were added, we have them after a.removeFromBug() where a is an attachment bug.newMessage and changing bug.tags, which is one that I can't reproduce. Thanks, James I know of two new bug fields that change behind-the-scenes automatically, bug_heat and task_age. task_age is going away, so let's ignore that. I'd like to fix bug_heat and then see if there's a separate problem to do with named operations. Here's a paste of a message I posted to launchpad-dev about solving the bug_heat problem. I'm still working on this. 1. If a field is important enough to include in the representation, it's important enough to include in the ETag. As Robert points out, omitting a read-only field from the ETag will stop clients from knowing when the value changes. 2. I like Bjorn's idea of a two-part ETag, where both parts are checked for GET requests/caching, but only one part is checked for PATCH/PUT requests. But, I don't think this would comply with the HTTP standard. There are two types of ETags, "strong" and "weak". A strong ETag changes whenever "the entity (the entity-body or any entity-headers) changes in any way". A weak ETag changes "only on semantically significant changes, and not when insignificant aspects of the entity change". These quotes are from section 13.3.3. Consider a two-part ETag, where one part describes the state of the read-only fields and the other part describes the state of the read-write fields. Taken together, the whole thing is a strong ETag. Look at only the second part and you have a weak ETag. But (13.3.3 again) "The only function that the HTTP/1.1 protocol defines on [ETags] is comparison." There's no looking at the second half of an ETag. The whole thing has to match. OK, so let's define another function on ETags, another "weak comparison function" which only looks at the second half of the ETag. That goes beyond HTTP/1.1 but it doesn't contradict the standard. Now: We would like to validate GET requests using the strong comparison function (looking at the strong ETag), so that even minor changes will invalidate the cache and the client will always have the most up-to-date information. We would like to validate PATCH requests by using the weak comparison function, so that irrelevant changes to the resource don't affect the success of the PATCH. But, the HTTP standard says we have to do the opposite. Section 14.26 says "The weak comparison function can only be used with GET or HEAD requests." The IETF draft that defines PATCH says, "Clients wishing to apply a patch document to a known entity can first acquire the strong ETag of the resource to be modified, and use that Etag in the If-Match header on the PATCH request to verify that the resource is still unchanged." As I read the standards, if bug_heat has changed and I try to PATCH an unrelated field, the correct behavior is that my PATCH should fail. I don't think this makes sense for us given the way we implement PATCH, but it does make sense in general. I don't think anything bad will happen if we implement the two-part ETag. The whole ETag string is in fact a strong ETag, so anything that treats it as a strong ETag will still work. We'll just have some server-side magic that secretly implements a weak comparison function to make PATCH requests work. And if anyone actually tries to PATCH bug_heat they'll still get an error... On Tue, 09 Mar 2010 18:55:48 -0000, Leonard Richardson <email address hidden> wrote: > I know of two new bug fields that change behind-the-scenes > automatically, bug_heat and task_age. task_age is going away, so let's > ignore that. I'd like to fix bug_heat and then see if there's a separate > problem to do with named operations. There is a separate problem related to named operations, on bugs at least, due to the dates of operations that are exported. They are however read-only and so your proposal would allow them to function assuming that those dates were not in the strong etag. The lack of refresh means that the local representation will be out of date after some named operations though. Thanks, James( File "/usr/lib/ 'PATCH', extra_headers= File "/usr/lib/ raise HTTPError(response, content) lazr.restfulcli Response headers: --- content-length: 0 content-type: text/plain date: Fri, 12 Mar 2010 11:33:07 GMT server: zope.server.http (HTTP) status: 412 vary: Cookie, via: 1.1 wildcard. x-content- x-powered-by: Zope (), Python () --- Response body: --- --- On Fri, 12 Mar 2010 11:34:23 -0000, Martin Pitt <email address hidden> wrote: >(self. > File "/usr/lib/ > 'PATCH', extra_headers= > File "/usr/lib/ > raise HTTPError(response, content) > lazr.restfulcli Yep, that's the NamedOperation case again. Thanks, James. On Tue, 16 Mar 2010 12:31:48 -0000, Leonard Richardson <email address hidden> wrote: >. IBug.setPrivate and IBug.markAsDupl https:/ does that indicate they should be killed too? (Plus, I notice that isUserAffected is exported as a POST which indicates a bug somewhere). Thanks, James Yes, I think setPrivate can be made the mutator for 'private' and markAsDuplicate the mutator for 'duplicate_ This appears to work ok. Thanks, James Closing the lazr.restfulclient task as this was a server-side fix. Thanks, James This is still happening with current Launchpad and precise's launchpadlib. It happens when I try to modify bug.title. @pitti please file a new bug for that Seems I cannot reproduce it right now with setting .title, so perhaps that got fixed in latest 1.9.12 launchpadlib. Thanks! Here are the last two requests: send: 'PATCH /beta/bugs/292317 3cfeb7e05896b38 208e23ddc75 if-match: "72a9365dbdc47f send: '{"tags": ["needs-xorglog"]}' reply: 'HTTP/1.1 209 Content Returned\r\n' send: 'PATCH /beta/bugs/292317 a42d5d0e5bc4aaf 14be1e73123" lspci-vvnn" ]}' if-match: "7f60495c065527 send: '{"tags": ["needs-xorglog", "needs- reply: 'HTTP/1.1 412 Precondition Failed\r\n' Are you calling lp_save() twice, or just once? It would be helpful if you made a GET request after every call to lp_save(), to see what the server thinks the ETag is at each stage. This might be a problem with the way launchpadlib handles the 209 response code.
https://bugs.launchpad.net/lazr.restful/+bug/336866
CC-MAIN-2017-13
refinedweb
3,241
73.98
The utility class Rethrow<T extends Exception> has a single method wrap(String msg, Throwable t) which does the following: * Checks if t is a RuntimeException or Error, if so immediately rethrows the exception. * Checks if t is already an instance of T, if so returns t. * Returns a new instance of the target exception class. To use, declare a static final instance of Rethrow on your own exception class: public class MyException extends RuntimeException { public static final Rethrow<MyException> rethrow = new Rethrow<MyException>(); public MyException(String msg, Throwable t) { super(msg,t); } } Then later: try { ... do something that might throw exceptions ... } catch (Exception e) { throw MyException.rethrow.wrap(e); } Full version of Rethrow included below: import java.lang.reflect.Constructor; import java.util.logging.Level; import java.util.logging.Logger; /** * Utility class to wrap a caught exception in a (subclass of) RuntimeException. */ public class Rethrow<T extends Exception> { private static final Logger _logger = Logger.getLogger("rethrow"); public static final Rethrow<RuntimeException> runtime = new Rethrow<RuntimeException>(RuntimeException.class); private Class<T> _class; private Constructor<T> _constructor; public Rethrow(Class<T> c) { _class = c; try { _constructor = c.getConstructor(new Class[] { String.class, Throwable.class }); } catch (Exception e) { _logger.log(Level.WARNING, "could not get exception wrapper class constructor", e); } } @SuppressWarnings("unchecked") public T wrap(String msg, Throwable e) { if (e instanceof RuntimeException) { throw (RuntimeException) e; } else if (e instanceof Error) { throw (Error) e; } else if (_class.isAssignableFrom(e.getClass())) { return (T) e; } else { if (_constructor != null) { try { return _constructor.newInstance(new Object[] { msg, e }); } catch (Exception e1) { _logger.log(Level.WARNING, "could not create instance of " + _class + ", using RuntimeException instead", e1); } } throw new RuntimeException(msg, e); } } } Easy and correct exception chaining (4 messages) Threaded Messages (4) - Is it really that simple? by James Watson on March 23 2006 09:07 EST - Is it really that simple? by Sam Shen on March 24 2006 15:38 EST - Is it really that simple? by James Watson on March 27 2006 10:23 EST - Is it really that simple? by John Gagon on May 11 2006 05:52 EDT Is it really that simple?[ Go to top ] Can't you just put this in the Exception constructor? - Posted by: James Watson - Posted on: March 23 2006 09:07 EST - in response to Sam Shen if (e instanceof RuntimeException) { throw (RuntimeException) e; } else if (e instanceof Error) { throw (Error) e; } Is it really that simple?[ Go to top ] While you could put the checks in a constructor that you own, I think throwing an exception in a constructor (almost as a side effect) is bad style. The model I have in my head is that "new SomeException()" creates the exception. It shouldn't throw an exception, unless preceded with "throw". - Posted by: Sam Shen - Posted on: March 24 2006 15:38 EST - in response to James Watson Also, what you suggest won't work for exceptions that you don't control. Is it really that simple?[ Go to top ] Saying something is bad style is pretty meaningless without a pragmatic reason. - Posted by: James Watson - Posted on: March 27 2006 10:23 EST - in response to Sam Shen However, I see now that I misunderstood what your code was doing, I thought you were defining an Exception class. It would be nice to have some code formatting here, TSS. Is it really that simple?[ Go to top ] I believe I saw this same exact thing on the Bile Blog. (not that it ever gets read). However, I've seen some very cool uses of Java 5 with Exceptions. One of which is undoing the String message anti-pattern and using an enum instead. That allows the catcher of the exception to go through a list of "likely reasons". In addiction, objects that are found to be part of the problem should often be used instead of String found in the constructor of Exception. I'd prevent it's use. - Posted by: John Gagon - Posted on: May 11 2006 05:52 EDT - in response to James Watson
http://www.theserverside.com/discussions/thread.tss?thread_id=39537
CC-MAIN-2016-26
refinedweb
671
57.16
You must have heard about desktop notifier, today we are going to build one and see how it works. We are going to do it using python and we will do it on ubuntu for now. So, lets start to build desktop notifier using python. Lets build desktop notifier using python For building this we will be using notify2 and using requests module, we will get latest news from hackers news and show it in notification. Lets start with getting news from hacker news. How I created a 100 twitter bots and made them interact using python, aws and selenium.. For getting new from hacker news we are going to use their apis. You can see the apis here. We make an api call to get all the latest story ids then make another get call to’+str(a[0])+’.json?print=pretty to get its details. Below code will do this import requests import json a = requests.get('') a = json.loads(a.content) b = requests.get(''+str(a[0])+'.json?print=pretty') b = json.loads(b.content) ICON = "Path of icon image" newsitem = [{"title":b['url'],"description":b['title']}] In newsitem you will get the title and description of the news. Now we will make a notifier and pass this data to that. Look at the below code. It gets the news and shows notification on the screen. import time import notify2 import requests import json a = requests.get('') a = json.loads(a.content) b = requests.get(''+str(a[0])+'.json?print=pretty') b = json.loads(b.content) ICON = "Path of icon image" newsitem = [{"title":b['url'],"description":b['title']}] # initialize the d-bus connection notify2.init("Hacker News Notifier") # create Notification object n = notify2.Notification(None, icon = ICON_PATH) # set urgency level n.set_urgency(notify2.URGENCY_NORMAL) # set timeout for a notification n.set_timeout(10000) n.update(newsitem['title'], newsitem['description']) n.show() time.sleep(5) Python: Suggested books for Algorithms in Python Now you can see the notification like below. You can also modify it for different content and actions on notifications. What you need to do is just explore. Building desktop notifier is pretty easy task and may be we can take on some bigger task for the next time. If you have any idea that we can build together please do not hesitate to drop in the comments. I will get back to you so that we can make things up and running as early as possible. You can read more articles here. Also please ignore the bad coding that I used here its just for giving hint how you can do it. So i didn’t bother about creating classes or stuff cause I don’t need it more than once. 🙂 Thanks for reading if you like the stuff, Please share, subscribe and spread the words. We really need these type of inspirations. Level order traversal of a binary tree in python.
https://www.learnsteps.com/build-desktop-notifier-using-python/?shared=email&msg=fail
CC-MAIN-2022-27
refinedweb
486
68.97
Putting together a 3D scene in the browser with Three.js is like playing with Legos. We put together some boxes, add lights, define a camera, and Three.js renders the 3D image. In this tutorial, we're going to put together a minimalistic car from boxes and learn how to map texture onto it. First, we'll set things up – we'll define the lights, the camera, and the renderer. Then we'll learn how to define geometries and materials to create 3D objects. And finally we are going to code textures with JavaScript and HTML Canvas. How to Setup the Three.js Project Three.js is an external library, so first we need to add it to our project. I used NPM to install it to my project then imported it at the beginning of the JavaScript file. import * as THREE from "three"; const scene = new THREE.Scene(); . . . First, we need to define the scene. The scene is a container that contains all the 3D objects we want to display along with the lights. We are about to add a car to this scene, but first let's set up the lights, the camera, and the renderer. How to Set Up the Lights We'll add two lights to the scene: an ambient light and a directional light. We define both by setting a color and an intensity. The color is defined as a hex value. In this case we set it to white. The intensity is a number between 0 and 1, and as both of them shine simultaneously we want these values somewhere around 0.5. . . . const ambientLight = new THREE.AmbientLight(0xffffff, 0.6); scene.add(ambientLight); const directionalLight = new THREE.DirectionalLight(0xffffff, 0.8); directionalLight.position.set(200, 500, 300); scene.add(directionalLight); . . . The ambient light is shining from every direction, giving a base color for our geometry while the directional light simulates the sun. The directional light shines from very far away with parallel light rays. We set a position for this light that defines the direction of these light rays. This position can be a bit confusing so let me explain. Out of all the parallel rays we define one in particular. This specific light ray will shine from the position we define (200,500,300) to the 0,0,0 coordinate. The rest will be in parallel to it. As the light rays are in parallel, and they shine from very far away, the exact coordinates don't matter here – rather, their proportions do. The three position-parameters are the X, Y, and Z coordinates. By default, the Y-axis points upwards, and as it has the highest value (500), that means the top of our car receives the most light. So it will be the brightest. The other two values define by how much the light is bent along the X and Z axis, that is how much light the front and the side of the car will receive. How to Set Up the Camera Next, let's set up the camera that defines how we look at this scene. There are two options here – perspective cameras and orthographic cameras. Video games mostly use perspective cameras, but we are going to use an orthographic one to have a more minimal, geometric look. In my previous article, we discussed the differences between the two cameras in more detail. Therefore in this one, we'll only discuss how to set up an orthographic camera. For the camera, we need to define a view frustum. This is the region in the 3D space that is going to be projected to the screen. In the case of an orthographic camera, this is a box. The camera projects the 3D objects inside this box toward one of its sides. Because each projection line is in parallel, orthographic cameras don't distort geometries. . . . // Setting up camera const aspectRatio = window.innerWidth / window.innerHeight; const cameraWidth = 150; const cameraHeight = cameraWidth / aspectRatio; const camera = new THREE.OrthographicCamera( cameraWidth / -2, // left cameraWidth / 2, // right cameraHeight / 2, // top cameraHeight / -2, // bottom 0, // near plane 1000 // far plane ); camera.position.set(200, 200, 200); camera.lookAt(0, 10, 0); . . . To set up an orthographic camera, we have to define how far each side of the frustum is from the viewpoint. We define that the left side is 75 units away to the left, the right plane is 75 units away to the right, and so on. Here these units don't represent screen pixels. The size of the rendered image will be defined at the renderer. Here these values have an arbitrary unit that we use in the 3D space. Later on, when defining 3D objects in the 3D space, we are going to use the same units to set their size and position. Once we define a camera we also need to position it and turn it in a direction. We are moving the camera by 200 units in each dimension, then we set it to look back towards the 0,10,0 coordinate. This is almost at the origin. We look towards a point slightly above the ground, where our car's center will be. How to Set Up the Renderer The last piece we need to set up is a renderer that renders the scene according to our camera into our browser. We define a WebGLRenderer like this: . . . // Set up renderer const renderer = new THREE.WebGLRenderer({ antialias: true }); renderer.setSize(window.innerWidth, window.innerHeight); renderer.render(scene, camera); document.body.appendChild(renderer.domElement); Here we also set up the size of the canvas. This is the only place where we set the size in pixels since we're setting how it should appear in the browser. If we want to fill the whole browser window, we pass on the window's size. And finally, the last line adds this rendered image to our HTML document. It creates an HTML Canvas element to display the rendered image and adds it to the DOM. How to Build the Car in Three.js Now let's see how can we can compose a car. First, we will create a car without texture. It is going to be a minimalistic design – we'll just put together four boxes. How to Add a Box First, we create a pair of wheels. We will define a gray box that represents both a left and a right wheel. As we never see the car from below, we won't notice that instead of having a separate left and right wheel we only have one big box. We are going to need a pair of wheels both in the front and at the back of the car, so we can create a reusable function. . . . function createWheels() { const geometry = new THREE.BoxBufferGeometry(12, 12, 33); const material = new THREE.MeshLambertMaterial({ color: 0x333333 }); const wheel = new THREE.Mesh(geometry, material); return wheel; } . . . We define the wheel as a mesh. The mesh is a combination of a geometry and a material and it will represent our 3D object. The geometry defines the shape of the object. In this case, we create a box by settings its dimensions along the X, Y, and Z-axis to be 12, 12, and 33 units. Then we pass on a material that will define the appearance of our mesh. There are different material options. The main difference between them is how they react to light. In this tutorial, we'll use MeshLambertMaterial. The MeshLambertMaterial calculates the color for each vertex. In the case of drawing a box, that's basically each side. We can see how that works, as each side of the box has a different shade. We defined a directional light to shine primarily from above, so the top of the box is the brightest. Some other materials calculate the color, not only for each side but for each pixel within the side. They result in more realistic images for more complex shapes. But for boxes illuminated with directional light, they don't make much of a difference. How to Build the Rest of the Car Then in a similar way let's let's create the rest of the car. We define the createCar function that returns a Group. This group is another container like the scene. It can hold Three.js objects. It is convenient because if we want to move around the car, we can simply move around the Group. . . .x78b14b }) ); main.position.y = 12; car.add(main); const cabin = new THREE.Mesh( new THREE.BoxBufferGeometry(33, 12, 24), new THREE.MeshLambertMaterial({ color: 0xffffff }) ); cabin.position.x = -6; cabin.position.y = 25.5; car.add(cabin); return car; } const car = createCar(); scene.add(car); renderer.render(scene, camera); . . . We generate two pairs of wheels with our function, then define the main part of the car. Then we'll add the top of the cabin as the forth mesh. These are all just boxes with different dimensions and different colors. By default every geometry will be in the middle, and their centers will be at the 0,0,0 coordinate. First, we raise them by adjusting their position along the Y-axis. We raise the wheels by half of their height – so instead of sinking in halfway to the ground, they lay on the ground. Then we also adjust the pieces along the X-axis to reach their final position. We add these pieces to the car group, then add the whole group to the scene. It's important that we add the car to the scene before rendering the image, or we'll need to call rendering again once we've modified the scene. How to Add Texture to the Car Now that we have our very basic car model, let's add some textures to the cabin. We are going to paint the windows. We'll define a texture for the sides and one for the front and the back of the cabin. When we set up the appearance of a mesh with a material, setting a color is not the only option. We can also map a texture. We can provide the same texture for every side or we can provide a material for each side in an array. As a texture, we could use an image. But instead of that, we are going to create textures with JavaScript. We are going to code images with HTML Canvas and JavaScript. Before we continue, we need to make some distinctions between Three.js and HTML Canvas. Three.js is a JavaScript library. It uses WebGL under the hood to render 3D objects into an image, and it displays the final result in a canvas element. HTML Canvas, on the other hand, is an HTML element, just like the div element or the paragraph tag. What makes it special, though, is that we can draw shapes on this element with JavaScript. This is how Three.js renders the scene in the browser, and this is how we are going to create textures. Let's see how they work. How to Draw on an HTML Canvas To draw on a canvas, first we need to create a canvas element. While we create an HTML element, this element will never be part of our HTML structure. On its own, it won't be displayed on the page. Instead, we will turn it into a Three.js texture. Let's see how can we draw on this canvas. First, we define the width and height of the canvas. The size here doesn't define how big the canvas will appear, it's more like the resolution of the canvas. The texture will be stretched to the side of the box, regardless of its size. function getCarFrontTexture() { const canvas = document.createElement("canvas"); canvas.width = 64; canvas.height = 32; const context = canvas.getContext("2d"); context.fillStyle = "#ffffff"; context.fillRect(0, 0, 64, 32); context.fillStyle = "#666666"; context.fillRect(8, 8, 48, 24); return new THREE.CanvasTexture(canvas); } Then we get the 2D drawing context. We can use this context to execute drawing commands. First, we are going to fill the whole canvas with a white rectangle. To do so, first we set the fill style to be while. Then fill a rectangle by setting its top-left position and its size. When drawing on a canvas, by default the 0,0 coordinate will be at the top-left corner. Then we fill another rectangle with a gray color. This one starts at the 8,8 coordinate and it doesn't fill the canvas, it only paints the windows. And that's it – the last line turns the canvas element into a texture and returns it, so we can use it for our car. function getCarSideTexture() { const canvas = document.createElement("canvas"); canvas.width = 128; canvas.height = 32; const context = canvas.getContext("2d"); context.fillStyle = "#ffffff"; context.fillRect(0, 0, 128, 32); context.fillStyle = "#666666"; context.fillRect(10, 8, 38, 24); context.fillRect(58, 8, 60, 24); return new THREE.CanvasTexture(canvas); } In a similar way, we can define the side texture. We create a canvas element again, we get its context, then first fill the whole canvas to have a base color, and then draw the windows as rectangles. How to Map Textures to a Box Now let's see how can we use these textures for our car. When we define the mesh for the top of the cabin, instead of setting only one material, we set one for each side. We define an array of six materials. We map textures to the sides of the cabin, while the top and bottom will still have a plain color. . . .xa52523 }) ); main.position.y = 12; car.add(main); const carFrontTexture = getCarFrontTexture(); const carBackTexture = getCarFrontTexture(); const carRightSideTexture = getCarSideTexture(); const carLeftSideTexture = getCarSideTexture(); carLeftSideTexture.center = new THREE.Vector2(0.5, 0.5); carLeftSideTexture.rotation = Math.PI; carLeftSideTexture.flipY = false; const cabin = new THREE.Mesh(new THREE.BoxBufferGeometry(33, 12, 24), [ new THREE.MeshLambertMaterial({ map: carFrontTexture }), new THREE.MeshLambertMaterial({ map: carBackTexture }), new THREE.MeshLambertMaterial({ color: 0xffffff }), // top new THREE.MeshLambertMaterial({ color: 0xffffff }), // bottom new THREE.MeshLambertMaterial({ map: carRightSideTexture }), new THREE.MeshLambertMaterial({ map: carLeftSideTexture }), ]); cabin.position.x = -6; cabin.position.y = 25.5; car.add(cabin); return car; } . . . Most of these textures will be mapped correctly without any adjustments. But if we turn the car around then we can see the windows appear in the wrong order on the left side. This is expected as we use the texture for the right side here as well. We can define a separate texture for the left side or we can mirror the right side. Unfortunately, we can't flip a texture horizontally. We can only flip a texture vertically. We can fix this in 3 steps. First, we turn the texture around by 180 degrees, which equals PI in radians. Before turning it, though, we have to make sure that the texture is rotated around its center. This is not the default – we have to set that the center of rotation is halfway. We set 0.5 on both axes which basically means 50%. Then finally we flip the texture upside down to have it in the correct position. Wrap-up So what did we do here? We created a scene that contains our car and the lights. We built the car from simple boxes. You might think this is too basic, but if you think about it many mobile games with stylish looks are actually created using boxes. Or just think about Minecraft to see how far you can get by putting together boxes. Then we created textures with HTML canvas. HTML canvas is capable of much more than what we used here. We can draw different shapes with curves and arcs, but then again sometimes a minimal design is all that we need. And finally, we defined a camera to establish how we look at this scene, as well as a renderer that renders the final image into the browser. Next Steps If you want to play around with the code, you can find the source code on CodePen. And if you want to move forward with this project, then check out my YouTube video on how to turn this into a game. In this tutorial, we create a traffic run game. After defining the car we draw the race track, we add game logic, event handlers, and animation.
https://www.freecodecamp.org/news/three-js-tutorial/
CC-MAIN-2021-31
refinedweb
2,738
67.86
JLN 1109 imported kraft paper logo printed fiber reinforced kraft paper tape for carton sealing US $0.24-0.98 / Roll 2000 Rolls (Min. Order) Cost-Effective Top Quality Kraft Liner Paper Importers US $450-1200 / Metric Ton 5 Metric Tons (Min. Order) 022A import lightyellow kraft paper for food packing US $0.1-0.5 / Kilogram 1 Kilogram (Min. Order) imported kraft paper 1 Ton (Min. Order) Wholesale china import customized retail kraft paper bag,PE valve bag US $0.05-0.08 / Piece 10000 Pieces (Min. Order) Import Brown Kraft Liner Paper Korea Prices US $430-475 / Ton 16 Tons (Min. Order) manufacture import yellow kraft paper & white kraft paper US $0.0016-0.065 / Piece 5000 Pieces (Min. Order) Virgin Imported Kraft Paper and Board US $0.3-0.9 / Sheet 3000 Sheets (Min. Order) quality twisted paper Rope made of imported kraft paper US $0.001-0.3 / Pair 1000 Pairs (Min. Order) wholesale China import brown kraft paper box with corrugated paper US $0.3-0.6 / Sheet 3000 Square Meters (Min. Order) Mg White sandwich paper for Mideast US $9-11 / Carton 8 Cartons (Min. Order) Top selling products 2016 150*150mm 80gsm origami construction paper US $0.16-0.29 / Bag 5000 Bags (Min. Order) Wholesale China Import Kraft Paper Fiberglass Adhesive Tape US $0.1-2 / Square Meter 5000 Square Meters (Min. Order) Kraft Paper Wrapped With Ivory Paper Board 300gsm FBB White Cardboard 300gsm Folding Box Board US $600-800 / Ton 18 Tons (Min. Order) Logo Custom Tissue Paper, Shoe Wrapping paper, Printed Shoe Tissue Paper US $0.1-20 / Roll 1000 Rolls (Min. Order) Chinese imports wholesale food safe kraft paper snack packs US $0.01-0.03 / Bag 100000 Bags (Min. Order) Kraft paper US $300-500 / Ton 1 Ton (Min. Order) Flute Medium Paper US $360-430 / Metric Ton 18 Metric Tons (Min. Order) Wholesale kraft brown grease proof paper for hot dog US $0.01-0.15 / Piece 50000 Pieces (Min. Order) import silicon coated paper for self-adhesive paper US $0.1-0.2 / Ton 1 Ton (Min. Order) kitchen supply oil proof oil absorbing paper for food usage US $1500-3000 / Ton 5 Tons (Min. Order) degradable stone paper, rich mineral paper, inject printing paper US $1500-3000 / Ton 1 Ton (Min. Order) EU approved hamburger paper;hamburger patty papers US $8-9 / Bag 500 Bags (Min. Order) White Import A4 Copy Paper US $0.2-0.5 / Pieces 150 Pieces (Min. Order) Adhesive foil stationery masking tape with washi paper US $0.1-0.5 / Roll 1000 Rolls (Min. Order) Cardboard in sheet chipboard paper 2.0mm US $415.0-415.0 / Tons 6 Tons (Min. Order) 15x15cm 20 sheets Gummed Paper US $0.3-0.6 / Set 10000 Sets (Min. Order) DIY decoration adhesive paper for sticker arts and crafts US $0.1-0.5 / Roll 720 Rolls (Min. Order) Fulton top reputation customers accepted natural color kraft and greaseproof paper US $0.1-3 / Roll 15000 Rolls (Min. Order) Wholesale china import paper carton coated duplex board paper for printing for wine box US $440-500 / Ton 20 Tons (Min. Order) for hot drink printed kraft paper hot sell high quality and lower price printed logo on hot cup sleeve US $0.01-0.5 / Piece 5000 Pieces (Min. Order) EVA Color craft paper A4( 5bags 60piece ) US $8.5-10 / Bag 200 Bags (Min. Order) Hot selling low price china importer transparent thin paper made wholesale in tissue paper US $0.01-0.28 / Piece 1000 Pieces (Min. Order) DIHUI high quality 200 gram single pe coated paper made in China US $1000-1200 / Metric Ton 5 Metric Tons (Min. Order) Import china tissue paper products wholesale gift wrap paper US $0.0014-0.02 / Piece 2000 Pieces (Min. Order) wood grain designed furniture contact decorative paper US $2100.0-2200.0 / Ton 1 Ton (Min. Order) offset paper importers in karachi/bond paper importers US $600-780 / Ton 16 Tons (Min. Order) Scrapbook paper metal die cut art supplies paper crafting products US $0.2-3 / Piece 3000 Pieces (Min. Order) pe laminated kraft paper for paper file book binding US $480-680 / Ton 5 Tons (Min. Order) Buying Request Hub Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE Do you want to show imported kraft paper or other products of your own company? Display your Products FREE now! Related Category Product Features Supplier Features Supplier Types Recommendation for you related suppliers
http://www.alibaba.com/showroom/imported-kraft-paper.html
CC-MAIN-2017-04
refinedweb
760
70.19
A class for drawing scales. More... #include <qwt_scale_draw.h> A class for drawing scales. QwtScaleDraw can be used to draw linear or logarithmic scales. A scale has a position, an alignment and a length, which can be specified . The labels can be rotated and aligned to the ticks using setLabelRotation() and setLabelAlignment(). After a scale division has been specified as a QwtScaleDiv object using QwtAbstractScaleDraw::setScaleDiv(const QwtScaleDiv &s), the scale can be drawn with the QwtAbstractScaleDraw::draw() member. Alignment of the scale draw Constructor. The range of the scale is initialized to [0, 100], The position is at (0, 0) with a length of 100. The orientation is QwtAbstractScaleDraw::Bottom. Return alignment of the scale Find the bounding rectangle for the label. The coordinates of the rectangle are absolute ( calculated from pos() ). in direction of the tick. Draws the baseline of the scale Implements QwtAbstractScaleDraw. Draws the label for a major scale tick Implements QwtAbstractScaleDraw. Draw a tick Implements QwtAbstractScaleDraw. Calculate the width/height that is needed for a vertical/horizontal scale. The extent is calculated from the pen width of the backbone, the major tick length, the spacing and the maximum width/height of the labels. Implements QwtAbstractScaleDraw. Determine the minimum border distance. This member function returns the minimum space needed to draw the mark labels at the scale's endpoints. Find the position, where to paint a label The position has a distance that depends on the length of the ticks in direction of the alignment(). Find the bounding rectangle for the label. The coordinates of the rectangle are relative to spacing + tick length from the backbone in direction of the tick. Calculate the size that is needed to draw a label Calculate the transformation that is needed to paint a label depending on its alignment and rotation. Determine the minimum distance between two labels, that is necessary that the texts don't overlap. Move the position of the scale Move the position of the scale. The meaning of the parameter pos depends on the alignment: Return the orientation TopScale, BottomScale are horizontal (Qt::Horizontal) scales, LeftScale, RightScale are vertical (Qt::Vertical) scales. Set the alignment of the scale The default alignment is QwtScaleDraw::BottomScale Change the label flags. Labels are aligned to the point tick length + spacing away from the backbone. The alignment is relative to the orientation of the label text. In case of an flags of 0 the label will be aligned depending on the orientation of the scale: QwtScaleDraw::TopScale: Qt::AlignHCenter | Qt::AlignTop\n QwtScaleDraw::BottomScale: Qt::AlignHCenter | Qt::AlignBottom\n QwtScaleDraw::LeftScale: Qt::AlignLeft | Qt::AlignVCenter\n QwtScaleDraw::RightScale: Qt::AlignRight | Qt::AlignVCenter\n Changing the alignment is often necessary for rotated labels. Rotate all labels. When changing the rotation, it might be necessary to adjust the label flags too. Finding a useful combination is often the result of try and error. Set the length of the backbone. The length doesn't include the space needed for overlapping labels.
http://qwt.sourceforge.net/class_qwt_scale_draw.html
CC-MAIN-2014-52
refinedweb
498
57.87
form_hook(3X) form_hook(3X) form_hook - set hooks for automatic invocation by applications #include <form.h> int set_field_init(FORM *form, Form_Hook func); Form_Hook field_init(const FORM *form); int set_field_term(FORM *form, Form_Hook func); Form_Hook field_term(const FORM *form); int set_form_init(FORM *form, Form_Hook func); Form_Hook form_init(const FORM *form); int set_form_term(FORM *form, Form_Hook func); Form_Hook form_term(const FORM *form); These functions make it possible to set hook functions to be called at various points in the automatic processing of input event codes by form_driver. The function set_field_init sets a hook to be called at form-post time and each time the selected field changes (after the change). field_init returns the current field init hook, if any (NULL if there is no such hook). The function set_field_term sets a hook to be called at form- unpost time and each time the selected field changes (before the change). field_term returns the current field term hook, if any (NULL if there is no such hook). The function set_form_init sets a hook to be called at form-post time and just after a page change once it is posted. form_init returns the current form init hook, if any (NULL if there is no such hook). The function set_form_term sets a hook to be called at form- unpost time and just before a page change once it is posted. form_init returns the current form term hook, if any (NULL if there is no such hook). Routines that return pointers return NULL on error. Other routines return_hook(3X)
https://www.man7.org/linux/man-pages/man3/form_hook.3x.html
CC-MAIN-2022-27
refinedweb
252
68.3
Back to Article Microsoft's own Project 7 has been pushing to get languages running within .NET's Common Language Runtime, and heads the front for Pascal, Scheme, Oberon, Mercury, and SML. ActiveState has jumped on board with a recent addition to their development kit: PerlNET. The PerlNET project is ActiveState's project for integrating Perl into .NET, and into .NET's Common Type System and Common Language Runtime. The Common Type System is the system that every language porting into .NET needs to conform to. The specification allows for a handful of types: integers (int, int32, and int64), floating numbers, references, two types of pointers (managed, which are trusted; and unmanaged, which are not trusted), structures, and enumerations. A subset of the Common Type System is the Common Language Specification, which is the common set of language features that enable an application to communicate with all other CLR languages. Along with the Common Language Runtime, the CLS specifies which operations are allowed, such as exception handling, branch operations, and the loading and storing of constants, variables, arguments, and arrays. ActiveState started work on integrating Perl into .NET with Perl for .NET research. Perl for .NET research was a research project creating a native .NET code compiler for Perl. This project supported a subset of Perl within the .NET framework, and was compatible with the Visual Studio .NET Beta and Perl 5.6.0. Perl for .NET research led to the PerlNET project, and the addition of .NET integration into ActiveState's Perl Development kit. PerlNET allows .NET Framework code to access Perl code running outside of the .NET framework using the Perl interpreter. The kit supports all of Perl's extension modules and ActiveState claims that the Perl run speeds are the same inside and outside of .NET. The code itself is compatible with standard Perl. You can create new .NET components and extend existing .NET components, all written within Perl. .NET can also call into Perl, and Perl can make calls into the .NET framework library. You can also wrap Perl modules into .NET, so if you absolutely needed to have the latest math modules from CPAN, they can be wrapped up as .NET components. All this is made possible through the work of DLLs, which is what PerlNET uses to communicate with .NET. As an example, let's say you wanted to access the function ReadLine within Console, which is within the System namespace. In .NET, the code to access ReadLine in would look like: using System; Console.ReadLine(); With PerlNET, the same .NET function is available using the familiar Perl commands for accessing methods: use namespace "System"; Console->ReadLine(); Perl's syntax for instantiating objects and calling their methods can be translated into .NET calls by utilizing a PerlNET pragma: # Use pragma to ensure .NET calls use PerlNET qw (AUTOCALL); # Call new constructor my $object = Namespace::Object->new(); # Call method 1 of object $object->method1(); PerlNET is part of ActiveState's Perl Development Kit, and is likely the first of many ports into the .NET Common Language. For the Windows user, the dev kit also comes packaged with tools allows that enable Perl developers to create standalone ActiveX components, windows services, and windows applications—all within the familiar Perl code base. Microsoft's Project 7: The .NET SDK: The Common Type System Overview Common Language Specification Perl for .NET research PerlNET ActiveState's Perl Development Kit
http://www.developer.com/lang/perl/print.php
crawl-002
refinedweb
570
58.48
2.0 Namespaces Discussion in 'ASP .Net' started by Thom Little,: - 655 - valentin tihomirov - May 24, 2004 Best Practices - solution - namespaces - classesCraig Deelsnyder, Aug 3, 2003, in forum: ASP .Net - Replies: - 1 - Views: - 485 - Vincent V - Aug 4, 2003 Conflicting namespaces??Will, Aug 13, 2003, in forum: ASP .Net - Replies: - 2 - Views: - 1,798 - Chris R. Timmons - Aug 13, 2003 newbie - help - where do u store custom classes when importing namespaces in ASPravi sankar, Aug 25, 2003, in forum: ASP .Net - Replies: - 2 - Views: - 444 - abdul bari - Aug 27, 2003 @Import Syntax and Importing Namespaces in global.asax fileD. Shane Fowlkes, Jan 13, 2004, in forum: ASP .Net - Replies: - 1 - Views: - 1,024 - Tu-Thach - Jan 13, 2004
http://www.thecodingforums.com/threads/2-0-namespaces.301981/
CC-MAIN-2015-32
refinedweb
118
84.27
This module is a dictionary of Blender Python types, for type checking. Example: import Blender from Blender import Types, Object, NMesh, Camera, Lamp # objs = Object.Get() # a list of all objects in the current scene for o in objs: print print o, type(o) data = o.getData() print type(data) if type(data) == Types.NMeshType: if len(data.verts): print "its vertices are obviously of type:", type(data.verts[0]) print "and its faces:", Types.NMFaceType elif type(data) == Types.CameraType: print "It's a Camera." elif type(data) == Types.LampType: print "Let there be light!" Since Blender 2.48a you can get the size of the underlying DNA structs for a collection of Blender Python types. Example: # loop over Types dictionary and print the struct sizes # -1 where the type is not supported byt the CSizeof function import Blender.Types as bt x = dir(bt) for t in x: s = 'bt.CSizeof(bt.' + t + ')' print t,"=", eval(s)
http://www.blender.org/api/249PythonDoc/Types-module.html
CC-MAIN-2015-48
refinedweb
159
71.31
Hi, in Word 2007 one can choose between several table styles to determine the formatting of a table (e.g. the “Light Shading” table style). Is this also possible with the Apose Word API to access and set these styles? If I try to get the style by using doc.getStyles().get(“Light Shading”); null is returned. Thanks a lot for your help!!! Regards, Sebastian Hi, Hi <?xml:namespace prefix = o Thanks for your request. I cannot reproduce the problem on my side using the latest version of Aspose.Words for Java (4.0.2). I use the following code for testing: // Open the document. Document doc = new Document("C:\\Temp\\in.docx"); Style style = doc.getStyles().get("Light Shading"); System.out.print(style.getName()); Best regards, Hi<?xml:namespace prefix = o Thanks for your request. Unfortunately, there is no way to set Table Styles using Aspose.Words. Your request has been linked to the appropriate issue. You will be notified as soon as this feature is available. Best regards, The issues you have found earlier (filed as WORDSNET-581) have been fixed in this .NET update and in this Java update. This message was posted using Notification2Forum from Downloads module by aspose.notifier. (34)
https://forum.aspose.com/t/retrieving-table-style-with-word-api/72493
CC-MAIN-2021-21
refinedweb
205
63.15
S. Here’s to many more! Ashesh Badani, senior vice president, Cloud Platforms, Red Hat We made a huge bet on Kubernetes and its ecosystem well before it was ready; perhaps even before we were ready. While we’ve had OpenShift in the market since 2012, we knew that we were lacking the "flywheel." When we had the chance to rearchitect for a standardized container runtime, format and orchestration, we went all in. Even as we were making OpenShift 3 generally available, it was incredible to partner with Amadeus who launched their own service platform on OpenShift even as we released to market! Over the last five years, its unbelievable to see the kinds of organizations that have come on the journey with us and the larger community. The world’s largest banks, airlines, hotels, logistics companies and even governments have completely embraced the path and entrusted mission critical applications to the platform. Now, we are seeing analytics and AI/ML use cases proliferate. We couldn’t have imagined all of this five years ago. But my favorite memory was going back to our earliest customers, thosethose, who adopted OpenShift pre-Kubernetes, and explaining what we were doing with container orchestration. What we didn’t expect but were overjoyed to see was acceptance by almost every one of them in regards to this future direction as well as a commitment to migrate to a Kubernetes-based architecture. The rest is history: 2000 customers and counting with deployments in every cloud environment. Clayton Coleman, architect, Containerized Application Infrastructure for OpenShift and Kubernetes, Red Hat David Eads, senior principal software engineer, Red Hat When I think about the parts of Kubernetes that I'm most proud of developing, it's the open ended pieces that allow other developers to create things I haven’t thought of before. Things like CRDs, RBAC, API aggregation and admission webhooks. These took a lot of investment and a significant amount of coordination across the community to produce. While they seem obvious now, at the time it was a "build it and they will come" plan and they definitely have. Building on top of what these primitives provide, we’ve seen entire technology stacks develop. Things like operators, self-hosted deployments, certificate management and new storage extension mechanisms. Looking at recent enhancement proposals, you can see the new activity around multi-cluster management and network extensions. I’m looking forward to seeing how the community expands the extensions we have and the new features they will provide and enable. Derek Carr, distinguished engineer, Red Hat When I reflect back on the history of Kubernetes, I think about how lucky we were as a community to have a strong set of technologists with a diverse set of technical backgrounds empowered to work together and solve problems from fresh perspectives. Kubernetes was my first engagement with open source, and I remember reading pull requests early in the project with the same excitement as watching a new episode of my favorite television show. As the core project surpassed 90k pull requests, I am no longer able to keep up with every change, but I am extremely proud of the work we have done to build the community. Often I think back to the earliest days of the project and remember how its success was anything but guaranteed. As an engineering community, we were exploring the distributed system problem space, but as a project, our success was really tied to a set of shared values that each engineer lived out. One of the earliest contributions I made to the project was the introduction of the Namespace resource. When I looked back at the associated pull request validation code, it looks like it may have been the 4th API resource in the project, and the first API added through the open source community. It required the original set of maintainers to trust me, and that trust was earned through mentorship for which I am forever grateful. Implementing namespace support required building out concepts like admission control, storage apis, and client library patterns. These building blocks have evolved into custom resource definitions, webhooks, and client-go with the help of a much larger community of engineers empowered by generations of leaders in the project. This has enabled a broad ecosystem to build upon Kubernetes to solve distributed system problems for a broader set of users than we ever imagined in the core project. When I reflect on this early experience around the introduction of Namespace support, it highlights the Kubernetes project values in action - distribution is better than centralization, community over product or company, automation over process, inclusive is better than exclusive, and evolution is better than stagnation. As long as Kuberentes lives these values, I know we will celebrate many more birthdays in the future. Joe Fernandes, vice president and general manager, Core Cloud Platforms, Red Hat I’ve written a lot about Kubernetes over the past 6 years. From why Red Hat chose Kubernetes, to how Red Hat was building on Kubernetes, to eventually launching OpenShift 3 at Red Hat Summit 2015, which we had completely rebuilt from the ground up as a Kubernetes native container platform. We also had to unexpectedly launch OpenShift 3 on Kubernetes .9 when the 1.0 release slipped beyond our launch date. But my favorite memories of Kubernetes came when presenting it to customers, both leading up to and since that launch. Red Hat has a highly technical and informed customer base, who don’t just want to know what a product does in terms of features and benefits, but all the low level details of how it works. I’ve always appreciated that, but it does keep our Product Managers and Sales teams on our toes. We prepared presentations that described the key capabilities of Kubernetes from pods, to services, replication controllers, health checks, deployments, scheduling, ingress and more. Kubernetes can be difficult to understand at first, even for the most technical users. But eventually there is that moment when it starts to click and you realize the power of these basic primitives and the automation they can bring to your application deployments. I was lucky to witness that moment many times, over numerous customer conversations over the past 6 years. Then the continued good fortune to see many of those customers become OpenShift customers and realize those benefits for some of their most challenging, mission critical applications. These were my favorite moments and still what motivates our entire product team today. Maciej Szulik, software engineer, Red Hat Kubernetes was my first big introduction to open source contributions. I've had a few patches here and there previously, but all of them were minor fixes. I still remember the moment when I was talking with my friends and telling them that I’m doing the exact same thing as I did in one of my previous projects, but this time it is open and widely available. And most importantly I can gather feedback from many, many more developers and users than I ever did before. The contributions I’m talking about are Cron Jobs, which then was called Scheduled Jobs and the initial version of auditing that nobody probably remembers now with the fancy advanced audit capabilities we have. The project allowed me to grow both professionally and personally. I learned a ton about distributed systems, programming in Go and beyond. I’ve made many friends around the globe which I’ve had the privilege to see almost every KubeCon for the past several years. I’m really grateful for the opportunity I had to be part of this amazing journey and I can’t wait to see where the next years will lead us! Paul Morie, senior principal software engineer, Red Hat I have a lot of very fond memories of adding some of "core" API resources and features like Secrets, Configmaps and Downward API that still give me a little buzz of nostalgia every time I see people using them. As a developer I also have my favorite changes or refactors and moments associated with those that I treasure. One that comes to mind was seeing the "keep the space shuttle flying" comment I wrote get a lot of social media traction (years after it was written). It was written as part of an overhaul of the persistent volume system, basically as a note to future developers to be careful about attempting simplification of logic (since doing so had confused us in the community during the overhaul). A couple years later someone came across my note and found it amusing enough to share and there were some fun discussions in social media about it. Someone also made a very cute picture of a space shuttle that puts a smile on my face when I see it. Another very satisfying thing for me personally is a refactor to the kubelet that took several releases to get completely done but seems to have been durable in time. I think I was chasing a bug with PodIP in the Downward API when I realized that I had taken an extremely convoluted path through the five thousand plus lines of code of the kubelet.go file and became possessed by a desire to bring some order to the chaos there. Gradually I was able to refactor this enormous file into smaller, intentionally ordered files that made things (I hope) easier to understand and maintain. No crazy hacking or anything, just moving code between files, but it sticks out in my memory. Beyond the things that we did in the Kubernetes community that were great, I would also like to call attention to things that Kubernetes didn't do that I feel are a part of its success. To me, the fact that Kubernetes does not mandate an official build engine for container images, configuration language, middleware, etc., is a huge win and part of why it has been adopted so broadly. These outcomes were in no way a given and we had fantastic community leadership that made smart (sometimes tough) choices about managing the project scope. Instead of trying to solve every problem under the sun (or least those popular at any given time), the Kubernetes community made smart investments to make open-ended extensions of Kubernetes possible and, slightly later, easier. In 2016, for example, we had very limited choices when developing APIs for the service-catalog SIG outside the kubernetes/kubernetes project and essentially had to write our own API server. Now, I can write a custom resource definition (CRD) and in a few minutes have a functional API with a much much lower level of effort. That's pretty incredible! The investments the Kubernetes community made in these extension mechanisms is a key part of the success of the project. These have enabled not only integrations with a number of different stacks to allow a large adoption footprint, but have also facilitated the creation of entirely new ecosystems. The Knative project and the Operator ecosystems, for example, would simply not exist in the same form and with the same possible user base to address without the well-developed extension mechanisms we have today. Rob Szumski, product manager, OpenShift, Red Hat A few moments stick out to me in the Kubernetes journey for the major shifts that they allowed the project to take. First was the huge scalability improvements that were driven by the collaboration between the etcd and Kubernetes community around etcd 3’s switch to gRPC for communication with the API server. This change drove down scheduling time dramatically on 1000 node clusters and expanded scale testing to succeed against 5000 node clusters where it previously failed. Great results for development that largely took place outside the Kubernetes code base. The next large shift in Kubernetes capability was the idea that Kubernetes could be "self-hosted." Self-hosted means running the Kubernetes control plane and assorted components with Kubernetes itself. This unlocked the ability to have the platform manage itself, which was pioneered in CoreOS Tectonic and was brought to OpenShift. Ease of management is key as we see Kubernetes deployed far and wide across the cloud. Keeping hundreds or thousands of clusters in an organization up to date can only be done through automation within the platform itself.The last major event was the introduction of the Operators concept in conjunction with CRD extension mechanism. This unlocked a huge period of workload growth for distributed systems and complex stateful applications on Kubernetes. This extension of your cluster with the experience of a cloud service running anywhere Kubernetes can run is essential to a hybrid cloud.
https://www.redhat.com/es/blog/heres-six-years-kubernetes
CC-MAIN-2022-33
refinedweb
2,114
56.49
. Second, the PEP defines the one official way of defining namespace packages, rather than the multitude of ad-hoc ways currently in use. With the pre-PEP 382 way, it was easy to get the details subtly wrong, and unless all subpackages cooperated correctly, the packages would be broken. Now, all you do is put a * start. Eric Smith (who is the 382 BDFOP, or benevolent dictator for one pep), Jason Coombs, and I met once before to sprint on PEP 382, and we came away with more questions than answers. Eric, Jason, and I live near each other so it's really great to meet up with people for some face-to-face hacking. This time, we made a wider announcement, on social media and. So, what did we accomplish? Both a lot, and a little. Despite working from about 4pm until closing, we didn't commit much more than a few bug fixes (e.g. an uninitialized variable that was crashing the tests on Fedora), a build fix for Windows, and a few other minor things. However, we did come away with a much better understanding of the existing code, and a plan of action to continue the work online. All the gory details are in the wiki page that I created. One very important thing we did was to review the existing test suite for coverage of the PEP specifications. We identified a number of holes in the existing test suite, and we'll work on adding tests for these. We also recognized that importlib (the pure-Python re-implementation of the import machinery) wasn't covered at all in the existing PEP 382 tests, so Michael worked on that. Not surprisingly, once that was enabled, the tests failed, since importlib has not yet been modified to support PEP 382. by removing all the bits that could be re-implemented as PEP 302 loaders, specifically the import-from-filesystem stuff. The other neat thing is that the loaders could probably be implemented in pure-Python without much of a performance hit, since we surmise that the stat calls dominate. If that's true, then we'd be able to refactor importlib to share a lot of code with the built-in C import machinery. This could have the potential to greatly simplify import.c so that it contains just the PEP 302 machinery, with some bootstrapping code. It may even be possible to move most of the PEP 382 implementation into the loaders. At the sprint we did a quick experiment with zipping up the standard library and it looked promising, so Eric's going to take a crack at this. This is briefly what we accomplished at the sprint. I hope we'll continue the enthusiasm online, and if you want to join us, please do subscribe to the import-sig! Wednesday, June 22, 2011 PEP 382 sprint summary Posted by Barry Warsaw at 1:12 PM Thanks for the report, awesome work guys.)
http://www.wefearchange.org/2011/06/pep-382-sprint-summary.html
CC-MAIN-2014-10
refinedweb
498
69.82
This. There is no Tl;dr as this is a rather long text, which is going to be super dry and boring, mainly for my own future reference only. Revisiting the old notes is always fun, especially surprising myself how much I learned back then. However, understanding is a thing, when dealing with real life data, knowing what to do is another. There was a lot of self-doubt while working on the mentioned projects (was not surrounded by statisticians), but the end result does look like it should work. I approached the two projects differently, the first was slightly more involved, as I re-implemented a lot of the formula from my notes myself. As the data is not as varied, and outputting a graph is not really needed, so I didn’t do it in Jupyter Notebook. The second project (the social audit project, which was supported by Sinarproject and ISIF.asia) was meant to be a public project, and I was thinking it would be nice to implement it in Jupyter Notebook so the team can play with the data in the future. However, this comes with its own share of problems, which I would detail later. For all the following code sample, it is assumed that the data was already preprocessed to fix misspellings and other problems for consistency. Implementing ANOVA in Python The project was done with private commercial data, so I would omit the data itself in this section, focusing only on the algorithm. Given a list of factors, I was asked to see how they affect sales. I first did a correlation test, then was requested to do another to study how pairs fo factors affect sales. At this time I was slowly getting more used to the database structure, so I decided to structure this into a problem to be answered by Analysis of Variance (ANOVA) for the two-factor factorial design. The problem statement was given a pair of factors, and the count for the combination of factors for the past 3 months aggregated monthly, then find if there is an interaction, i.e. together they affect sales. So we start by tabulating the observation table, where each cell of the table lists the observations (in this case the number of monthly sales for the past 3 months), as well as the total and mean (as shown in a random table below) factor a\factor b | 1 | 2 | 3 | total | mean --------------------+------------+--------------+-----------+---------+----------- 0 | (4, 3, 3) | (0, 0, 0) | (1, 0, 1) | 12 | 1.33333 | => 3.3333 | => 0.0000 | => 0.6667 | | 1 | (3, 5, 14) | (0, 0, 0) | (0, 0, 0) | 22 | 2.44444 | => 7.3333 | => 0.0000 | => 0.0000 | | 2 | (3, 3, 4) | (3, 0, 1) | (0, 0, 0) | 14 | 1.55556 | => 3.3333 | => 1.3333 | => 0.0000 | | 3 | (10, 7, 5) | (63, 69, 48) | (0, 1, 1) | 204 | 22.6667 | => 7.3333 | => 60.0000 | => 0.6667 | | 4 | (0, 0, 0) | (3, 1, 3) | (0, 1, 1) | 9 | 1 | => 0.0000 | => 2.3333 | => 0.6667 | | total | 64 | 191 | 6 | 261 | mean | 4.2667 | 12.7333 | 0.4000 | | 5.8 In order to populate the ANOVA table from the table above, there are a list of things to calculate. I suppose modern statistical analysis package should have them, but I was not given much time to pick up so I wrote my own code to do the calculation. The first to calculate is the Sum of Square of factor A and B, which is just a column/row-wise squared sum of differences between the overall mean and column/row average (showing just row-wise calculation below) We also have squared sum of interaction (i.e. each cell), defined as below (squared sum of average of each cell minus row average minus column average plus overall mean) Also the error sum of square Lastly we have total sum of square, which is the sum of all of the above, or we can validate by calculating the below (very useful to check if we are implementing it correctly) Then we have the mean square error for each of the above (except total) Lastly we calculate the test statistic F0 for factor A, B and interaction AB, by dividing the respective mean squared value with the mean squared error ( MSError) Next we find the critical value for each of them through the use of f.ppf from scipy.stats import f def critical_find(alpha, dof): return { "row": f.ppf(q=1 - alpha, dfn=dof["row"], dfd=dof["error"]), "column": f.ppf(q=1 - alpha, dfn=dof["column"], dfd=dof["error"]), "interaction": f.ppf(q=1 - alpha, dfn=dof["interaction"], dfd=dof["error"]), } Where the degree of freedom (dof) for A, B and interaction AB are defined as Putting together in a table, they become Source of Variation | Sum of Squares | Degree of Freedom | Mean Square | F0 | F | conclusion -----------------------+------------------+---------------------+---------------+---------+---------+------------------------- factor a | 3210.76 | 4 | 802.689 | 73.8671 | 2.68963 | significant effect factor b | 1193.73 | 2 | 596.867 | 73.8671 | 3.31583 | significant effect interaction | 5296.71 | 8 | 662.089 | 60.9284 | 2.26616 | significant interaction Error | 326 | 30 | 10.8667 | | | Total | 10027.2 | 44 | | | | For factor A and B, we test to see if the effect is contributing to the sales by comparing the F0 with critical region, where the test hypothesis being H0: has no effect to sales H1: has effect to sales So for the case above, both factor A and B‘s F0 is greater than the critical value (the example uses alpha of 5%), so H0 is rejected and both factor significantly affect sales individually. On the other hand, we also check if the interaction is significant. The hypothesis for this test is defined below H0: no interaction between two factors H1: there exists interaction between two factors In this randomly generated example, as the F0 is also greater than the critical value, hence we reject the hypothesis. Therefore, if we want to further study if sales is affected, these two factors needs to be considered together. Performing range test analysis Range test analysis is then performed when interaction is significant. By fixing one of the factor, we are interested in finding out for each value of another factor, which contribute maximum or minimum sales (also finding which ones are having the same mean) My old notes suggests the use of Duncan’s multiple range test, however in my previous implementation I used turkeyhsd provided by the statsmodel package. Considering I don’t know much about the test, I am skipping it here, however the interpretation of end result should be similar to Duncan’s multiple range test. Performing Statistical Analysis with Panda DataFrame and SciPy As I get more comfortable doing statistics with real world, I skipped implementing the formulas myself, finding it would be more efficient that way. The questionnaire itself is quite lengthy, and I was still inexperienced, so I only picked 12 variables to see if they are dependent on a set of social factors (age, gender etc.). The social audit was to find out the reach of Internet access and technology to a community, which is a rather interesting project especially during this pandemic. Besides helping to setting up machines for their community library, I was also tasked to contribute to the final report with some statistical analysis to the survey data. Test for independence After picking a set of social factors X, and a set of variables Y, we proceed to form pairs from these two groups to test if they are associated. Chi-square test was picked for this purpose. Then for those factors in X where we could somehow rank the responses (usually quantitative things like age, and certain categorical data that are in a sequence like colour order of rainbow, education level etc.), a test for correlation is done as well as the test for correlation significance. First we would need to build a contingency table, where each row represents classes of X, and each column represent classes of Y. Each cell in the table represents the count of observation given the respective class of X and Y. There are two types of X we find throughout the survey, i.e. either the values can be grouped into bins (continuous/discrete numbers e.g. age, or shopping budget) or the values are discrete (e.g. favourite colours). On the other hand, based on the data we collected, we have three types of Y, i.e. the values that can be grouped into bins, distinct values, as well as questions where respondent can choose multiple options (require slightly different handling in code). Therefore, in the end there are 6 different ways to create a contingency table. Which I would slowly go through some of them (the code can be found in the repository) I am aware of the fact that the method I used might not be the most efficient, but my primary concern was to get it done correctly first. The library likely provides more efficient ways to achieve my goals. Also, the produced code really needs refactoring, I know that. For instance, if we want to perform independence study on the education level of respondents and if they spend enough time on the Internet. These are both variables with distinct values (in a pandas.Series), hence we just put them side-by-side and start counting occurrences/observations: import pandas as pd from IPython.display import HTML, display def distinct_vs_distinct(a, b, a_ranked): _df = pd.merge( a, b, left_index=True, right_index=True, ) data = [] for a_value in a_ranked: row = [] for b_value in b.unique(): _dfavalue = _df[_df[a.name] == a_value] row.append(_dfavalue[_dfavalue[b.name] == b_value].shape[0]) data.append(row) result = pd.DataFrame( data, index=a_ranked, columns=pd.Series(b.unique(), name=b.name), ) display(HTML(result.to_html())) result.plot(kind="line") return result_filter_zeros(result) def result_filter_zeros(result): return result.loc[:, (result != 0).any(axis=0)][(result.T != 0).any()] # First rank the education level ranked_edulevel = pd.Series( [ "Sekolah rendah", "Sekolah menengah", "Institusi pengajian tinggi", ], name=fields.FIELD_EDUCATION_LEVEL, ) # Calling the function edulevel_vs_sufficient_usage = relation.distinct_vs_distinct( normalized_edulevel, normalized_sufficient_usage, ranked_edulevel ) It is always nice to have some sort of graphical reference, considering the code was running in a Jupyter Notebook so it is just too convenient to include the plot within the code. Considering X needs to be always ranked, hence the third parameter. For variables where we group the responses into bins (e.g. household income, how much to pay monthly for internet access), the code varies slightly. def interval_vs_interval(a, b, a_interval_list, b_interval_list): _df = pd.merge( a, b, left_index=True, right_index=True, ) data = [] for a_interval in a_interval_list: row = [] for b_interval in b_interval_list: _dfamax = _df[_df[a.name] <= a_interval.right] _dfamin = _dfamax[a_interval.left < _dfamax[a.name]] _dfbmax = _dfamin[_dfamin[b.name] <= b_interval.right] row.append(_dfbmax[b_interval.left < _dfbmax[b.name]].shape[0]) data.append(row) result = pd.DataFrame(data, index=a_interval_list, columns=b_interval_list) display(HTML(result.to_html())) result.plot(kind="line") return result_filter_zeros(result) # Example calling age_vs_budget_internet = relation.interval_vs_interval( normalized_age, normalized_budget_internet, summary_age.index, summary_budget_internet.index, ) Overall, still not too difficult, as we are still only dealing with 2 series, it is still just counting the combination of pairs of values, which is somewhat similar to the previous case. It is just that now that both were intervals it takes more steps. Next we have an example where we handle multiple choice questions (e.g. favourite websites). This is an example where the X values are binned, and Y is a multiple choice questions. def interval_vs_mcq(a, b, a_interval_list): _df = pd.merge( a, b, left_index=True, right_index=True, ) data = [] for interval in a_interval_list: row = [] for column in b.columns: _dfmax = _df[_df[a.name] <= interval.right] _dfmin = _dfmax[interval.left < _dfmax[a.name]] row.append(_dfmin[_dfmin[column] == True].shape[0]) data.append(row) result = pd.DataFrame(data, index=a_interval_list, columns=b.columns) display(HTML(result.to_html())) result.plot(kind="line") return result_filter_zeros(result) Though this time Y is a dataframe, but we are only finding out occurrences where a given pair of cases is true, so the code is more or less the same. The end product is still a contingency table that is ready to be processed later. Then we are going to perform a test to find if Y is dependent on X, or they are both independent. So we first specify the null and alternative hypothesis. H0: The two variables are independent (no relation) H1: The two variables are not independent (associated) The alpha (error rate) is defaulted 0.05, which means should we repeat the survey, 95% of the time we would draw the same conclusion. from scipy.stats import chi2, chi2_contingency def independence_check(data, alpha=0.05): test_stats, _, dof, _ = chi2_contingency(data) critical = chi2.ppf(1 - alpha, dof) independence = not independence_reject_hypothesis(test_stats, critical) if independence: print( f"Failed to reject H_0 at alpha={alpha} since test statistic chi2={abs(test_stats)} < {critical}" ) else: print( f"H_0 is rejected at alpha={alpha} since test statistic chi2={abs(test_stats)} >= {critical}" ) return independence def independence_reject_hypothesis(test_stats, critical): return abs(test_stats) >= critical As seen in the function, we just needed to pass in the contingency table we obtained previously into the chi2_contingency function then we quickly get test-statistic and the degree of freedom (we are ignoring the p-value here as we are going to calculate the critical region next). The test statistic can be calculated by hand with where Oij= number of observations for the given classes in both Xand Y Eij= Expected value for each combination when Xand Yare independent, which is just Instead of flipping the distributions tables for the critical value, we can use chi2.ppf by passing in the confidence level (1 - alpha) and degree of freedom into it to find the critical value. If the test statistic is greater than the critical value, i.e. falls outside the critical region, we reject the null hypothesis and conclude that both X and Y are associated. Otherwise, both variables are independent (a change in X does not affect Y), and we are not going to proceed any further. Test for correlation Now that we know both variables are associated, we shall perform a test for correlation next. Due to the data that we collected, they are either binned (e.g. age) or distinct ranked non-numeric data, so we are using Spearman’s Rank Correlation Coefficient, where rs= rank coefficient of correlation d= difference between two related ranks rX= rank of X rY= rank of Y n= number of pairs of observation This time, as we know overall the changes in X affects Y, now we are interested in seeing how changes in X affects each value of Y. Besides finding out how they correlate, we are also interested in finding if the correlation is significant. If the correlation is significant, then the observation in our sample (the survey respondents) is also expected to be true in the population (all the residents in the community). For this we conduct a two-tail test with the following hypothesis H0: There is no significant correlation between Xand Y H1: There is a significant correlation between Xand Y We are performing a t-test here as none of our data has more than 30 different pairs of observations of X and Y. Therefore the test statistic is Again, welcome to the 21st century where nobody is flipping the distribution table for answer, t.ppf is used to calculate the critical value here. from scipy.stats import t def correlation_check(data, alpha=0.05, method="pearson"): _corr = ( data.corrwith( pd.Series( range(len(data.index)) if method == "spearman" else data.index, index=data.index, ), method=method, ) .rename("Correlation") .dropna() ) display(HTML(_corr.to_frame().to_html())) critical = t.ppf(1 - alpha / 2, (len(_corr) - 2)) for idx, rs in _corr.items(): test_stats = rs * np.sqrt((len(_corr) - 2) / ((rs + 1.0) * (1.0 - rs))) print( f"The {(rs < 0) and 'negative ' or ''}correlation is {correlation_get_name(rs)} at rs={rs}." ) if not correlation_reject_hypothesis(test_stats, critical): print( f"Failed to reject H_0 at alpha={alpha} since test statistic T={test_stats} and critical region=±{critical}. " ) print( f"Hence, for {data.columns.name} at {idx}, the correlation IS NOT significant." ) else: print( f"H_0 is rejected at alpha={alpha} since test statistic T={test_stats}, and critical region=±{critical}. " ) print( f"Hence, for {data.columns.name} at {idx}, the correlation IS significant." ) print() def correlation_get_name(rs): result = None if abs(rs) == 1: result = "perfect" elif 0.8 <= abs(rs) < 1: result = "very high" elif 0.6 <= abs(rs) < 0.8: result = "high" elif 0.4 <= abs(rs) < 0.6: result = "some" elif 0.2 <= abs(rs) < 0.4: result = "low" elif 0.0 < abs(rs) < 0.2: result = "very low" elif abs(rs) == 0: result = "absent" else: raise Exception(f"Invalid rank at {rs}") return result def correlation_reject_hypothesis(test_stats, critical): return abs(test_stats) > critical Notice the value of data variable is just the same contingency table we generated in the previous step. This time we are interested in finding out the correlation of each class of Y compared to X, hence the loop in the code. So by calling this function, we first get if there is a correlation between the class of Y, and then we further check to see if the correlation is significant. Why is my current use of Jupyter notebook is not favourable The whole report, if printed in A4 paper, is roughly 93 pages long. It is fine to have a glance of responses in the beginning during the cleaning/normalization part, however, towards the end where analysis starts, the nightmare begins. Scrolling through a long notebook is not really fun. I am using the Visual Studio Code insider edition for the new Jupyter Notebook features, which makes it slightly easier to navigate because I could fold the notebook, which makes it slightly easier to find things. Should the questionnaire was longer, or if I were to do more analysis, referencing earlier parts of the notebook would be a nightmare. Also, there are simply too many repeated code within the notebook. A lot of cells were there just to display the same contingency table and graph. It would be nice if I could reduce the repetition somehow. Not sure if this can be properly fixed with Jupyter notebook, but I really wish I can break the notebook into multiple parts, and to be cross-referenced. Also, some sort of “metaprogramming” would be nice, so I stop having to repeat similar code in multiple cells. Ending notes I’m not too sure what comes in year 2021, but throughout 2020, being given an opportunity to work with a real Data scientist, and working on statistics analysis with real world data were great experiences. While I am not even close to be a real Data Scientist or even a Data Engineer, I am more interested in explore in these areas than ever.
https://cslai.coolsilon.com/2020/12/14/performing-statistical-analysis-with-python/?color_scheme=default
CC-MAIN-2021-43
refinedweb
3,162
53.21
It seems like yesterday that Microsoft was announcing the early builds of .NET Core 1.0. Amazingly, we’re already at version 3. In this new version of ASP.NET Core, Microsoft releases some very forward-looking technologies to approach Web development in very different ways. There are now more ways than ever to create great looking Web properties in this newest version of ASP.NET. If you’ve been paying attention to ASP.NET Core lately, you know that Microsoft has been expanding the ways to build your websites with .NET Core. This version is no different. In order to give us these choices, this version makes some fundamental changes to the underlying plumbing. In this article, I’ll talk about the new features in ASP.NET Core 3.0 as well as the changes to existing features. Let’s start with the new features. If you’ve been using ASP.NET Core for a while, the biggest change is that .NET Core 3.0 doesn’t run on top of the .NET Framework (e.g., the desktop framework). This is a big change in philosophy that’s been some time coming. Microsoft has announced that .NET Core in general is where the innovation will be. There‘s a plan to converge .NET Core and .NET Framework in .NET 5.0 down the line. But for ASP.NET Core, the time is now. This has been made possible by the work that the ASP.NET Core team has made in making sure that most of what developers really need is in .NET Core. If you last looked at .NET Core in the 1.x timeframe, look again. The amount of the complete framework that’s now available in .NET Core is really remarkable. Of course, this isn’t a universal change. Some things aren’t coming. If you’re invested in ASP.NET WebPages or Windows Communication Foundation (WCF), you’re going to have to stay with .NET Framework because they’re not making the move. The biggest change for previous users of ASP.NET Core is that ASP.NET Core won’t run on top of the .NET Framework. It’s only .NET Core from here on in. What’s New Before I delve into changes to .NET, let’s talk about what is new to the platform. Much of this has been available in previews and early bits for a long time and it’s now officially part of ASP.NET Core. With full support by Microsoft, you can now use much of this in your production applications. Let’s take them one-by-one. Packaging of ASP.NET Core In previous versions of ASP.NET Core, the separate pieces of ASP.NET Core were available as separate NuGet packages. .NET Core 3.0 changes that. ASP.NET Core is now an SDK, not a set of discernable libraries. Because of this, your csproj file no longer needs to target the metapackage, but does need to be using the Web SDK: <Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>netcoreapp3.0</TargetFramework> </PropertyGroup> <ItemGroup> <!--<PackageReference Include="Microsoft.AspNetCore.App" Version="2.2.6" />--> </ItemGroup> </Project> Note that the SDK specifies the Web SDK, the framework is .NET Core 3.0, and the meta package (e.g., Microsoft.AspNetCore.App) is no longer required. Endpoints as the New Default In ASP.NET Core 2.2, Microsoft opted you into a new routing system called Endpoint Routing. Unless you created a new project in 2.2, you probably didn’t even notice that you were using the old routing system. The key to this new plumbing, which is now the default in 3.0, is that different routes could be aware of each other and not need to always be terminal middleware. For example, if you wanted to use SignalR and MVC, you’d have to be careful about the order of the middleware as nothing after MVC will get called. Instead, Endpoint Routing takes the form of a shared space for all things that need routing. This means MVC, Razor Pages, Blazor, APIs, SignalR, and gRPC (as well as any third parties that need to use their own routing) can share the routing pipeline. It’s harder to have middleware bump into each other. If you create a new project in ASP.NET Core 3.0, you’ll see the endpoint system in use in the new templates (in Startup.cs): app.UseEndpoints(endpoints => { endpoints.MapControllerRoute( name: "default", pattern: "{controller=Home}/{action=Index}/{id?}"); endpoints.MapRazorPages(); }); Instead of specifying MVC or Razor Pages as an extension of the IApplicationBuilder, you’ll wire this middleware in UseEndpoints callback instead. I’d suggest moving to this model in your applications that you upgrade. An implication of this change is that the LinkGenerator class (and by implication the IUrlHelper interface) both use the entirety of the Endpoints to generate URLs. This means that you can generate URLs across the different sub-systems depending on the route values (e.g., you don’t have to differentiate MVC routes from Razor Pages routes any longer). gRPC Relatively new to most people is a new messaging system called gRPC (Google remote procedure call). This is a way to write services that are smaller and faster than the REST-based services that many of us have been using for some time. gRPC is based on work at Google and is an open source universally available remote procedure call system. It uses a language called Protocol Buffers to define the interface for a service (and the shapes of the messages): // WeatherService.proto syntax = "proto3"; option csharp_namespace = "WeatherMonitorService"; package WeatherService; import "Protos/weatherMonitor.proto"; service MonitorService { rpc GetWeather (WeatherStatusRequest) returns (WeatherStatusResult); } message WeatherStatusRequest { bool allSites = 1; repeated int32 siteId = 2; } message WeatherStatusResult { repeated Monitoring.WeatherData results = 1; } Protocol Buffers (or ProtoBuf as it’s usually called) provides a contract-based way to create services somewhat similar to WCF. I think it’s probably where WCF-based projects that are moving to .NET Core will go. Over the wire, gRPC uses ProtoBuf as a roadmap to a binary serialization. This means that serialization is faster and smaller. And it’s great for scenarios where JSON or XML serialization puts too much stress on a low resource system (e.g., IoT). In addition, gRPC supports unidirectional and bi-directional streaming to open up more explicit support for scenarios where REST just isn’t a good fit. gRPC is designed for remote procedure calls but not remote objects or CRUD messaging. There are limitations. gRPC relies on HTTP/2 to work. This means that it’s secure by default and uses the HTTP/2 streaming facilities to make that work. You’ll need HTTP/2 on both sides of the pipe to be actionable, but most Web servers (and .NET Core in general) support this. These limitations mean that it’s not a good fit for Web development. Calling gRPC from browsers isn’t a good fit right now. There are workarounds but I’d stick with REST for browser-server communication. I see several scenarios that I think are important: - Service-to-Service: In a microservice architecture, communication between services is a great fit for gRPC. The support for streaming and synchronous communication fits well in conjunction with asynchronous communication (e.g., message bus/queues). - IoT-to-Service: When you’re dealing with small, resource-limited IoT; gRPC is a great fit for communication, especially when you don’t want to broadcast the existence of the service. - TCP/WCF Migration: Services using a WCF with TCP bindings will find the contract-based services in gRPC a natural fit for bringing these services to .NET Core. When you develop clients for gRPC services, you can use "Add Service Reference…" in Visual Studio to generate the client for gRPC. This makes calling gRPC services that much easier. Local Tools In ASP.NET Core 2.2, the team enabled global tools in .NET Core. The obvious fit for this is the Entity Framework Core tools but lots of organizations have benefited from the installation of global tools to allow for the dotnet {toolname}-style usage. Sometimes you need to create a tool that’s only used locally within your own project. This is where local tool command comes in. You can now install local tools using the dotnet tool command: > dotnet tool install toolname Tool installation is local by default (and adds the tool to the package.json file). You’ll need to add the -g flag to get the tool to install globally. If you’ve worked with NPM tooling, it’s a similar experience. Other Improvements There are some less than obvious changes that should be highlighted too: - Tool for generating REST Clients (similar to the way that gRPC client generation works) - New Worker Service (for creating Services that are cross-platform using the background worker). Changes in ASP.NET Core Although ASP.NET Core is pretty stable, there are some changes to how you wire up the plumbing for ASP.NET Core. ASP.NET Core 3.0 keeps most of the basic details the same, but in some cases, Microsoft moved your cheese. In some cases, Microsoft moved your cheese. MVC/Routing As explained above, the new Endpoint routing means that you’ll probably want to move to the new routing system. They’ve also changed who you register for the services for APIs and for MVC with Views to be a little clearer: public void ConfigureServices( IServiceCollection services) { // ... // Add MVC with Views services.AddControllersWithViews(); // Add Controllers without Views // (usually for APIs) services.AddControllers(); } Note that instead of using AddMvc or AddMvcCore, you’ll use AddControllers to support MVC/API controllers and AddControllersWithViews when you want to use MVC and Razor to build your site. When you need to add the middleware for this, you’ll need to add routing and controllers separately. For example: public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { // ... app.UseRouting(); app.UseEndpoints(endpoints => { endpoints.MapControllerRoute( name: "default", pattern: "{controller=Home}/{action=Index}/{id?}"); }); } Much like how you’d specify routes in UseMvc, you’ll now see it in MapControllerRoute instead. Notice that the nomenclature is changing a bit to controllers instead of MVC. If you’re using attributed routing, you’ll need to add support for them as well: app.UseEndpoints(endpoints => { endpoints.MapControllers(); } Razor Pages Razor Pages haven’t changed a lot but their registration is a bit different. In the past, using AddMvc in the ConfigureServices added Controllers and Razor Pages, and now they’ve been separated: public void ConfigureServices( IServiceCollection services) { services.AddControllersWithViews(); services.AddRazorPages(); } It’s pretty typical that you’d need to add Controllers and Razor Pages separately. For the Configure method in Startup.cs, the separation exists there too. You’ll want to add it to your UseEndpoints: app.UseEndpoints(endpoints => { endpoints.MapRazorPages(); }); Other than this plumbing, the changes in Razor Pages are beneath the surface and you can continue to write Razor Pages without any changes. Changes to JSON Support One goal of ASP.NET Core was to remove requirements for third-party libraries. The new approach is to allow third parties to easily integrate (as seen with announcements by Identity Server). One place that this is really obvious is that Microsoft has replaced JSON.NET with System.Text.Json. This new JSON support is built with better memory handling and C# 8 types. You’ll get this support by default, but if you’re using code that requires JSON.NET to exist, you’ll need to reference the Microsoft.AspNetCore.Mvc.NewtonsoftJson package from NuGet. This allows you to opt into JSON.NET support like so: services.AddControllersWithViews() .AddNewtonsoftJson(); services.AddRazorPages() .AddNewtonsoftJson(); It works on Controllers and Razor Pages. ASP.NET Core SignalR In ASP.NET Core 3.0, make sure you’re using the Core version of SignalR but this hasn’t changed much from the 2.2 version. It’s a prevailing theme but the new Endpoint Routing changes how you wire up SignalR as well. So, you’ll still AddSignalR to the DI layer, but you’ll wire up your hubs via the Endpoint Routing: app.UseEndpoints(endpoints => { endpoints.MapControllers(); endpoints.MapHub<MyHub>(); } If you’re using Blazor with SignalR, make sure you map the Blazor Hub separately: app.UseEndpoints(endpoints => { endpoints.MapControllers(); endpoints.MapHub<MyHub>() endpoints.MapBlazorHub<MyBlazorHub>(); } Entity Framework In ASP.NET Core, there are two options: Entity Framework Core 3.0 and Entity Framework 6. This might be a bit confusing but when porting applications already using Entity Framework 6, you don’t have to rewrite all code. Entity Framework Core is certainly the future and has a lot of benefits over EF6, but for compatibility with older code, EF6 is a great option (you can even use both in the same project if needed). Entity Framework Core is improving on the prior version by adding several important features/improvements: - Reworked LINQ implementation is more robust and map better to SQL Queries - Introduced Cosmos DB support - You can opt into dependent types with the new [Owned] attribute. - Supports C# 8 types (including nullables) - Allows reverse engineering of database views as query types - Property bag entities are being introduced as a stepping-stone to many-to-many without join tables. For most apps, you won’t notice a big difference, but being aware of the new features will help you take advantage of them. When you install the ASP.NET Core SDK, Entity Framework Core isn’t part of that. You’ll need to make reference to the EFCore packages as well as registering the EFCore tooling. I usually install it globally on a new computer by simply adding this at the command-line: > dotnet tool install dotnet-ef -g IWebHostingEnvironment The old IHostingEnvironment that most people used to access whichever environment they were using has been depreciated. Your ASP.NET Core 2.2 code might look like this: public void Configure(IApplicationBuilder app, IHostingEnvironment env) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); app.UseDatabaseErrorPage(); } } Instead you should use IWebHostingEnvironment (instead of IHostingEnvironment) as a complete replacement. The reason this changed was that there are different kinds of hosts in ASP.NET Core (desktop apps for example). This was meant to be clearer. You can just replace it like so: public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); app.UseDatabaseErrorPage(); } } The IHostEnvironment is the base interface for IWebHostingEnvironment and contains everything *but* the WebRoot information. Sometimes it’s better to ask for exactly what you need instead of the entire IWebHostingEnvironment, especially when building shared libraries. Security in ASP.NET Core Security, like many things in ASP.NET, Core offers another plumbing change, but this one is really minor. You have to opt into Authentication and Authorization separately: public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { app.UseRouting(); app.UseAuthentication(); app.UseAuthorization(); app.UseEndpoints(endpoints => { endpoints.MapControllers(); endpoints.MapRazorPages(); }); } Just add UseAuthentication and UseAuthorization as separate middleware for each need. If you’re using JSON Web Tokens, you’ll need to include a separate NuGet package for that: Microsoft.AspNetCore.Authentication.JwtBearer. Wiring up the services is the same except for what I mentioned earlier. ASP.NET Core 3.0 also has some smaller features including: - Newly added AES-GCM and AES-CCM cyphers - HTTP/2 support - TLS 1.3 & OpenSSL 1.1.1 on Linux - Import/Export asymmetrical keys without need for an X.509 certificate The support for CORS has been simplified as well. If you need to create a CORS policy, you can do this during adding CORS to your DI layer: services.AddCors(opt => { opt.AddDefaultPolicy(bldr => { bldr.WithOrigins("") .AllowAnyMethod() .AllowAnyHeader(); }); }); You can have named policies as well, but to enable them, you still need to add support for CORS as middleware: public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { app.UseStaticFiles(); app.UseRouting(); app.UseCors(); app.UseAuthentication(); app.UseAuthorization(); } Building/Publishing Changes The last piece of changes that I think are important to highlight is how changes to building and publishing will affect you. Changes include: - Executable builds are now the default (they used to be only on platform independent builds). - Publish can build single-file executables (where they link all the framework and third-party into a single file to be published). - You can opt into trimmed IL so that it removes all libraries that aren’t necessarily in your publishing. - You can also opt into Ready2Run(R2R) images, which means they’re ahead-of-time (AOT) compiled. - The memory footprint of apps has been reduced by half. Where Are We? As with any major release, changes can be uncomfortable. But I’m happy to say, the cheese that’s been moved is pretty minor. I found that working with the new versions, the benefits seem to all be under the covers. That means that you’ll be able to better support your customers and developers without a lot of trouble. If you’ve been waiting for some of these new features, they’re good to go. By looking at gRPC, Blazor, and other features…you’ll be setting yourself up to be future-proof and to build better solutions for your clients.
https://www.codemag.com/article/1911072
CC-MAIN-2020-24
refinedweb
2,867
59.09
The previous part of this tutorial described how we can create database queries with named queries. This tutorial has already taught us how we can create static database queries with Spring Data JPA. However, when we are writing real-life applications, we have to be able to create dynamic database queries as well. This blog post describes how we can create dynamic database queries by using the JPA Criteria API. We will also implement a search function that has two requirements: - It must return todo entries whose title or description contains the given search term. - The search must be case-insensitive. Let’s start by ensuring that Maven creates the JPA static metamodel classes when we compile our project.. Creating the JPA Static Metamodel Classes A static metamodel consists of classes that describe the entity and embeddable classes found from our domain model. These metamodel classes provide static access to the metadata that describes the attributes of our domain model classes. We want to use these classes because they give us the possibility to create type-safe criteria queries, but we don’t want to create them manually. Luckily, we can create these classes automatically by using the Maven Processor Plugin and the JPA Static Metamodel Generator. We can configure these tools by following these steps: - Add the Maven Processor Plugin (version 2.2.4) declaration to the plugins section of the pom.xml file. - Configure the dependencies of this plugin and add the JPA static metamodel generator dependency (version 4.3.8) to the plugin's dependencies section. - Create an execution that invokes the plugin’s process goal in the generate-sources phase of the Maven default lifecycle. - Ensure that the plugin runs only the org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor. This annotation processor scans our entities and embeddable classes, and creates the static metamodel classes. The configuration of the Maven Processor Plugin looks as follows: <plugin> <groupId>org.bsc.maven</groupId> <artifactId>maven-processor-plugin</artifactId> <version>2.2>4.3.8.Final</version> </dependency> </dependencies> </plugin> When we compile our project, the invoked annotation processor creates the JPA static metamodel classes to the target/generated-sources/apt directory. Because our domain model has only one entity, the annotation processor creates only one class called Todo_. The source code of the Todo_ class looks as follows: package net.petrikainulainen.springdata.jpa.todo; import java.time.ZonedDateTime; import javax.annotation.Generated;(Todo.class) public abstract class Todo_ { public static volatile SingularAttribute<Todo, ZonedDateTime> creationTime; public static volatile SingularAttribute<Todo, String> createdByUser; public static volatile SingularAttribute<Todo, ZonedDateTime> modificationTime; public static volatile SingularAttribute<Todo, String> modifiedByUser; public static volatile SingularAttribute<Todo, String> description; public static volatile SingularAttribute<Todo, Long> id; public static volatile SingularAttribute<Todo, String> title; public static volatile SingularAttribute<Todo, Long> version; } Let’s move and find out how we can create database queries with the JPA criteria API.. Modifying the Repository Interface The JpaSpecificationExecutor<T> interface declares the methods that can be used to invoke database queries that use the JPA Criteria API. This interface has one type parameter T that describes the type of the queried entity. In other words, if we need to modify our repository interface to support database queries that use the JPA Criteria API, we have to follow these steps: - Extend the JpaSpecificationExecutor<T> interface. - Set the type of the managed entity. Example: The only Spring Data JPA repository of our example application (TodoRepository) manages Todo objects. After we have modified this repository to support criteria queries, its source code looks as follows: import org.springframework.data.jpa.repository.JpaSpecificationExecutor; import org.springframework.data.repository.Repository; interface TodoRepository extends Repository<Todo, Long>, JpaSpecificationExecutor<Todo> { } After we have extended the JpaSpeciticationExecutor interface, the classes that use our repository interface get access to the following methods: - The long count(Specification<T> spec) method returns the number of objects that fulfil the conditions specified by the Specification<T> object given as a method parameter. - The List<T> findAll(Specification<T> spec) method returns objects that fulfil the conditions specified by the Specification<T> object given as a method parameter. - The T findOne(Specification<T> spec) method returns an object that fulfils the conditions specified by the Specification<T> object given as a method parameter. Additional Reading: Let’s find out how we can specify the conditions of the invoked database query. Specifying the Conditions of the Invoked Database Query We can specify the conditions of the invoked database query by following these steps: - Create a new Specification<T> object. - Set the type of the queried entity as the value of the type parameter (T). - Specify the conditions by implementing the toPredicate() method of the Specification<T> interface. Example 1: If we have to create a criteria query that returns Todo objects, we have to create the following specification: new Specification<Todo>() { @Override public Predicate toPredicate(Root<Todo> root, CriteriaQuery<?> query, CriteriaBuilder cb) { //Create the query by using the JPA Criteria API } } - Dynamic, typesafe queries in JPA 2.0 - JPA Criteria API by samples - Part I - JPA Criteria API by samples - Part II - JPA 2 Criteria API Tutorial - The Javadoc of the CriteriaBuilder interface - The Javadoc of the CriteriaQuery interface - The Javadoc of the Predicate interface - The Javadoc of the Root<X> interface - The Javadoc of the Specification<T> interface The obvious next question is: Where should we create these Specification<T> objects? I argue that we should create our Specification<T> objects by using specification builder classes because: - We can put our query generation logic into one place. In other words, we don’t litter the source code of our service classes (or other components) with the query generation logic. - We can create reusable specifications and combine them in the classes that invoke our database queries. Example 2: If we need to create a specification builder class that constructs Specification<Todo> objects, we have to follow these steps: - Create a final TodoSpecifications class. The name of this class isn’t important, but I like to use the naming convention: [The name of the queried entity class]Specifications. - Add a private constructor the created class. This ensures that no one can instantiate our specification builder class. - Add static specification builder methods to this class. In our case, we will add only one specification builder method (hasTitle(String title)) to this class and implement it by returning a new Specification<Todo> object. The source code of the TodoSpecifications class looks as follows: import org.springframework.data.jpa.domain.Specification; import javax.persistence.criteria.CriteriaBuilder; import javax.persistence.criteria.CriteriaQuery; import javax.persistence.criteria.Predicate; import javax.persistence.criteria.Root; final class TodoSpecifications { private TodoSpecifications() {} static Specification<Todo> hasTitle(String title) { return new Specification<Todo>() { @Override public Predicate toPredicate(Root<Todo> root, CriteriaQuery<?> query, CriteriaBuilder cb) { //Create the query here. } } } } If we use Java 8, we can clean up the implementation of the hasTitle(String title) method by using lambda expressions. The source code of our new specification builder class looks as follows: import org.springframework.data.jpa.domain.Specification; import javax.persistence.criteria.CriteriaBuilder; import javax.persistence.criteria.CriteriaQuery; import javax.persistence.criteria.Predicate; import javax.persistence.criteria.Root; import org.springframework.data.jpa.domain.Specification; final class TodoSpecifications { private TodoSpecifications() {} static Specification<Todo> hasTitle(String title) { return (root, query, cb) -> { //Create query here }; } } Let's find out how we can invoke the created database query. Invoking the Created Database Query After we have specified the conditions of the invoked database query by creating a new Specification<T> object, we can invoke the database query by using the methods that are provided by the JpaSpecificationExecutor<T> interface. The following examples demonstrates how we can invoke different database queries: Example 1: If we want to get the number of Todo objects that have the title 'foo', we have to create and invoke our database query by using this code: Specification<Todo> spec = TodoSpecifications.hasTitle("foo"); long count = repository.count(spec); Example 2: If we want to the get a list of Todo objects that have the title 'foo', we have to create and invoke our database query by using this code: Specification<Todo> spec = TodoSpecifications.hasTitle("foo"); List<Todo> todoEntries = repository.findAll(spec); Example 3: If we want to get the Todo object whose title is 'foo', we have to create and invoke our database query by using this code: Specification<Todo> spec = TodoSpecifications.hasTitle("foo"); List<Todo> todoEntries = repository.findOne(spec); If we need to create a new specification that combines our existing specifications, we don’t have to add a new method to our specification builder class. We can simply combine our existing specifications by using the Specifications<T> class. The following examples demonstrates how we can use that class: Example 4: If we have specifications A and B, and we want to create a database query that returns Todo objects which fulfil the specification A and the specification B, we can combine these specifications by using the following code: Specification<Todo> specA = ... Specification<Todo> specB = ... List<Todo> todoEntries = repository.findAll( Specifications.where(specA).and(specB) ); Example 5: If we have specifications A and B, and we want to create a database query that returns Todo objects which fulfil the specification A or the specification B, we can combine these specifications by using the following code: Specification<Todo> specA = ... Specification<Todo> specB = ... Lis<Todo> todoEntries = repository.findAll( Specifications.where(specA).or(specB) ); Example 6: If we have specifications A and B, and we want to create a database query that returns Todo objects which fulfil the specification A but not the specification B, we can combine these specifications by using the following code: Specification<Todo> specA = ... Specification<Todo> specB = ... List<Todo> searchResults = repository.findAll( Specifications.where(specA).and( Specifications.not(specB) ) ); Let’s move on and find out how we can implement the search function. Implementing the Search Function We can implement our search function by following these steps: - Modify our repository interface to support criteria queries. - Create the specification builder class that creates Specification<Todo> objects. - Implement the service method that uses our specification builder class and invokes the created database queries by using our repository interface. Let’s start by modifying our repository interface. Modifying Our Repository Interface We can make the necessary modifications to our repository interface by following these steps: - Extend the JpaSpecificationExecutor<T> interface. - The type of the queried entity to Todo. The source code of our repository interface looks as follows: import org.springframework.data.jpa.repository.JpaSpecificationExecutor; import org.springframework.data.repository.Repository; import java.util.List; import java.util.Optional; interface TodoRepository extends Repository<Todo, Long>, JpaSpecificationExecutor<Todo> { void delete(Todo deleted); List<Todo> findAll(); Optional<Todo> findOne(Long id); void flush(); Todo save(Todo persisted); } Let’s move on and create the specification builder class. Creating the Specification Builder Class We can create a specification builder class that fulfils the requirements of our search function by following these steps: - Create the specification builder class and ensure that it cannot be instantiated. - Create a private static getContainsLikePattern(String searchTerm) method and implement it by following these rules: - If the searchTerm is null or empty, return the String "%". This ensures that if the search term is not given, our specification builder class will create a specification that returns all todo entries. - If the search isn’t null or empty, transform the search term into lowercase and return the like pattern that fulfils the requirements of our search function. - Add a static titleOrDescriptionContainsIgnoreCase(String searchTerm) method to the specification builder class and set its return type to Specification<Todo>. - Implement this method by following these steps: - Create a Specification<Todo> object that selects todo entries whose title or description contains the given search term. - Return the created Specification<Todo> object. The source code or our specification builder class looks as follows: import org.springframework.data.jpa.domain.Specification; final class TodoSpecifications { private TodoSpecifications() {} static Specification<Todo> titleOrDescriptionContainsIgnoreCase(String searchTerm) { return (root, query, cb) -> { String containsLikePattern = getContainsLikePattern(searchTerm); return cb.or( cb.like(cb.lower(root.<String>get(Todo_.title)), containsLikePattern), cb.like(cb.lower(root.<String>get(Todo_.description)), containsLikePattern) ); }; } private static String getContainsLikePattern(String searchTerm) { if (searchTerm == null || searchTerm.isEmpty()) { return "%"; } else { return "%" + searchTerm.toLowerCase() + "%"; } } } Specification<Todo> object by invoking the static titleOrDescriptionContainsIgnoreCase() method of the TodoSpecifications class. - Get the todo entries whose title or description contains the given search term by invoking the findAll() method of the JpaSpecificationExecutor interface. Pass the created Specification<Todo> object as a method parameter. - Transform the list of Todo objects into a list of TodoDTO objects and return the created list. The source of our service class looks as follows: import org.springframework.beans.factory.annotation.Autowired; import org.springframework.data.jpa.domain.Specification; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; import java.util.List; import static net.petrikainulainen.springdata.jpa.todo.TodoSpecifications) { Specification<Todo> searchSpec = titleOrDescriptionContainsIgnoreCase(searchTerm); List<Todo> searchResults = repository.findAll(searchSpec); return TodoMapper.mapEntitiesIntoDTOs(searchResults); } } Let’s move on and find out when we should create our database queries by using the JPA Criteria API. Why Should We Use the JPA Criteria API? This tutorial has already taught us how we can create database queries by using the method names of our query methods, the @Query annotation, and named queries. The problem of these query generation methods is that we cannot use them if we have to create dynamic queries (i.e queries that don’t have a constant number of conditions). If we need to create dynamic queries, we have to create these queries programmatically, and using the JPA Criteria API is one way to do it. The pros of using the JPA Criteria API are: - It supports dynamic queries. - If we have an existing application that uses the JPA Criteria API, it is easy to refactor it to use Spring Data JPA (if we want to). - It is the standard way to create dynamic queries with the Java Persistence API (this doesn’t necessarily matter, but sometimes it does matter). That sounds impressive. Unfortunately, the JPA Criteria API has one big problem: It is very hard to implement complex queries and even harder to read them. That is why I think that we should use criteria queries only when it is absolutely necessary (and we cannot use Querydsl). Let's move on and summarize what we have learned from this blog post. Summary This blog post has taught us six things: - We can create the JPA static metamodel classes by using the Maven Processor Plugin. - If we want to invoke queries that use the JPA Criteria API, our repository interface must extend the JpaSpecificationExecutor<T> interface. - We can specify the conditions of our database queries by creating new Specification<T> objects. - We should create our Specification<T> objects by using specification builder classes. - We can combine Specification<T> objects by using the methods provided by the Speficications<T> class. - We should use criteria queries only when we don’t have a choice. The next part of this tutorial describes how we can create database queries with Querydsl. P.S. You can get the example application of this blog post from Github. Your project structure is amazing. I was blown away by how well you laid it out. I am still trying to soak it all in and I'm going to keep poring over it to really understand it all. I expect that it will serve as a foundation for future Spring 3.1 JPA based projects. I can't thank you enough for putting together such a wonderful tutorial where you took the time to set up an elegant foundation. Stone, thanks for your comment. It is always nice to hear that I could actually help someone to learn something new. Also, it would be nice to hear which things are hard to understand so that I could try to provide a bit better explanation. It took some time for me to grasp how the Specification stuff worked, I still don't think I'm very clear on that. It also took a little bit of time to understand how the environment was getting initialized in the ApplicationContext (I'm still a bit of a novice when it comes to Spring configurations, and from what I've gathered, it seems that Spring parsed the data from the @ImportResource and @PropertySource specifications to initialize the environment). One other issue that I had was figuring out how to access all of the pages and when it was deployed locally (I had to prefix all of the form:action and href values to include the project name prefix). Lastly, the verify statements in the test cases were also new to me, so I learned about Mockito from this project as well. I'd like to give back to you -- I found a few issues in the code that you may want to include. I kept getting a NPE in AbstractPathImpl.get() method. To get around it, I had to move the Person_ class into the same package as Person (~.model). I also changed the return statement on the PersonRepositoryService.update() method to "return personRepository.save(person); " instead of "return person;" -- the value was never getting updated in the database. This necessitated changing the PersonRepositoryServiceTest.update() method to: PersonDTO updated = PersonTestUtil.createDTO(PERSON_ID, FIRST_NAME_UPDATED, LAST_NAME_UPDATED); Person person = PersonTestUtil.createModelObject(PERSON_ID, FIRST_NAME, LAST_NAME); when(personRepositoryMock.findOne(updated.getId())).thenReturn(person); when(personRepositoryMock.save(person)).thenReturn(person); Person returned = personService.update(updated); verify(personRepositoryMock, times(1)).findOne(updated.getId()); verify(personRepositoryMock, times(1)).save(person); verifyNoMoreInteractions(personRepositoryMock); assertPerson(updated, returned); Finally, there was a simple type in the PersonRepositoryServiceTest.assertPerson() -- the last assert statement should read "assertEquals(expected.getLastName(), actual.getLastName());". Again, thank you so much for such a thoughtful and well designed tutorial -- I learned a lot. Stone, thanks your comment. I am planning to add more links to resources which contains tutorials and other material about the used libraries and frameworks. I will also check out the issues you mentioned later today. By the way, did you use the H2 in memory database when you noticed these problems? In any case, thanks for your contribution. :) I didn't use H2, I used MySQL. Stone, I tried to reproduce the problem you were having with the update() method of Person RepositoryPersonService class by using MySQL 5.5.19. Unfortunately I was not able to reproduce it. In my environment the updates made to the Person instance were updated to the database. The thing is that you should not have to call the save() method of PersonRepository when you are updating the information of a person. The reason for this is that Hibernate will automatically detect the changes made to persistent objects during the transaction and synchronize the changes with the database after the transaction is committed. Check the Working with Objects Section of the Hibernate reference manual for more details: Common causes for the problem you were having are: You do not have got transaction at all (the @Transactional annotation is not used either at method or class level) The transaction is read only (The readOnly property of the @Transactional annotation is set to true) The state of the updated entity is not persistent (Check the link to the reference manual for more details). I am wondering if this advice helped you? (I am bit of a perfectionist so it would be a personal victory for me to help you to remove that unnecessary call to the save() method of PersonRepository). Hi Petri, Thank you for the very nice explanation. For Spring Data JPA + criteria queries, is this the only signature available ? List repository.findAll(Specification s); If I know that my query will return only a single result, can I use something like T repository.find(Specification s); I tried find(), but I exceptions, e.g. "No property find found for type class domain.Customer". So, is findAll() the only available query method with the Specification parameter? Thanks, David Hi David, You can use the findOne(Specification<T> spec) method of the JpaSpecificationExecutor interface to get a single entity which matches with the given specification. See the API for more details: I hope that this was helpful. Hi Petri, Nice tutorial with good and clear examples which gives good insight on Spring Data JPA. Thanks for that. I tried to implement jpa criteria and got NPE on org.hibernate.ejb.criteria.path.AbstractPathImpl.get(AbstractPathImpl.java:141) Apparently, I got exact same exception, when i tried to run your project - tutorial 4. Then I moved my staticMetamodel to the package where my entity is and this exception went away. But the simple criteria is also not returning anything. I did check the table and can retrive data before I apply criteria to filter. So I am stumped. any clues ? Hi Amol, Thanks for your comment. I finally ended up moving the static meta model class to the same package where the Person entity is located. Hopefully this will finally fix the issue with the NPE you (and Stone) mentioned. Thanks for the bug report. I should have done this ages ago but somehow I managed to forget this issue. In my experience, if a query is not returning the correct results, the problem is almost always in the created criteria. It would be helpful if you could give a bit more detailed description about your problem. The answers to following questions would help me to get a better idea of the situation: Well, I managed to fix that. It was with the created criteria as you rightly said. Thanks again. Hi Amol, great to hear that you managed to solve your problem. Hei Petri of all the tutorials about jpa I found yours has been the most helpful! But I have still a doubt, we will see if you can find a solution: If I want to create an specification of one object that it is a parameter in another object how can I do it? for example: imagine that your object person has another attribute that is adress, and Adress has as attributes street and number, how can I create an specification that obtain all the people that live in one street? Thanks in advance!!! Hi Albert, Thanks for your comment. It was nice to see that that you enjoyed my tutorials. The answer to your question is: it can be done. I am currently at work but I will describe the solution after I get back to home. Hi Petri, I am also trying to implement similar criteria. Hoping to see some input from you. Many Thanks, Hi Albert, Lets assume that you have got a Person object which has a reference to a Address object. So, the source code of the Person class could be something like this: @Entity @Table("persons") public class Person { private Address address; public Person() { } public Address getAddress() { return address; } } Now, the source code of the Address class is following: @Embeddable public class Address { private String street; private int number; public Address() { } public String getStreet() { return street; } public int getNumber() { return number; } } As you said, you want search all persons which are living in the same street. This criteria is built like this (Note that I am not using the static meta model in this example): public class PersonSpecifications { public static Specification livesInGivenStreet(final String street) { return new Specificationget("address"). () { personRoot, CriteriaQuery> query, CriteriaBuilder cb) { @Override public Predicate toPredicate(Root return cb.equal(root. } }; } } In this solution I have assumed the the database column containing the street in which the person lives is found from persons database table. Is the case for you or are you using a separate entity object in your domain model instead of component? this is exactly what I was looking for. I was having problems in this line: "root.get(“address”).get(“street”), street);" I didn't know how to reach the street from address, I thought I had to make an "innerjoin" but I have seen that if I execute your code the innerjoin is created alone when the query is created. Thanks a lot for your help!!!! I'll try now to make it a little more complicate using the metamodel and using classes than extend from other classes, we will see if it works fine...thanks again. Albert, Good to see that I could help you out. If you need more information about the the JPA Criteria API, you should check out this article: Hi Petri, I have 3 tables as Check, User and UserDetail Check - main search table has userid and other fields User table has userid and other fields UserDetail table has userid the domain model is Check class has User User class has userDetail I am trying to build predicate to perform search on firstname and that is giving me trouble. my predicate is as below predicate = cb.equal(root.get("user").get("userid").get("userDetail").get("firstname"), searchName) this throws exception as Illegal attempt to dereference path source [null,user] Any clues on how to build the search with these 3 tables ? Do i have to use some Join while building predicate ? If I create a link between Check and UserDetail table by adding userdetail in Check then following works fine predicate = cb.equal(root.get("userDetail").get("firstname"), searchName) Thanks in advance Hi Amol, If I understood your domain model correctly, you can obtain the correct predicate with this code: cb.equal(root.<User>get(“user”).<UserDetail>get(“userDetail”).<String>get(“firstname”), searchName); Thanks for the reply Petri but that throws exception as "Unable to resolve attribute [userDetail] against path". ? Hi Amol, it seems that I would have to see the source code in order to help you. It seems that the attribute called userDetail cannot be resolved. This means that the property called userDetail is not found. This seems a bit puzzling because I assumed that the Check class contains a property called user, the User class contains a property called userDetail and the UserDetail class contains property firstName. Are you trying to navigate from Check to UserDetail when building the Predicate? Hi Petri, Here is the code snippet. I have removed unwanted comments, fields and getter/setter methods. You are right CheckRecord has User has userDetail has firstName. @PersistenceUnit(name = "core-dal") public class CheckRecord { private Long id; private String status; private Date expiry; private User user; @ManyToOne(optional = true, fetch = FetchType.LAZY, targetEntity = User.class) @JoinColumn(name = "userId") public User getUser() { return user; } } @Entity @Table(name = "UserTable") @PersistenceUnit(name = "core-dal") public class User { private Long id; private String username; private Account account; private UserDetail userDetail; @OneToOne(mappedBy = "user", cascade = CascadeType.ALL) public UserDetail getUserDetail() { return userDetail; } } @Entity @Table(name = "UserDetail") @PersistenceUnit(name = "core-dal") public class UserDetail { private Long id; private String firstName; private String lastName; private User user; public String getFirstName() { return firstName; } } Note: Added code tags and removed some unnecessary setters - Petri Hi Amol, I noticed that the getUser() method of UserDetail class is missing. Does it look like this: @OneToOne @JoinColumn("userId") public User getUser() { return user; } how to write jUnit test cases for toPredicated method? can you please explain. Hi, I wouldn't write unit tests for the predicate builder methods because these tests are very hard to read and write. Also, these tests don't test if the created query returns the correct results. Instead, I would write integration tests for my Spring Data JPA repository. I admit that these tests are slower than unit tests, but in this case this is acceptable because integration tests help me to ensure that my query returns the correct results. Hi Petri, yes it is like that. I removed that and others so my post is not too big. In this case the following specification builder should work: public class CheckRecordSpecifications { public static Specification<CheckRecord> firstNameIs(final String searchTerm) { return new Specification<CheckRecord>() { @Override public Predicate toPredicate(Root<CheckRecord> root, CriteriaQuery<?> query, CriteriaBuilder cb) { return cb.equal(root.<User>get(“user”).<UserDetail>get(“userDetail”).<String>get(“firstName”), searchTerm); } }; } } spot on.. that did work.. I think last night eclipse was culprit as it was not picking up the latest class file. Many Thanks for your help. Amol, Great! Hi Petri! I have been working in this issue last week, but when I thought it was working well suddenly this problem has appeared: "org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.InvalidDataAccessApiUsageException: Illegal attempt to dereference path source [null];" What I´m doing is this: Path path = root.get(CustomsOrder_.messageType); predicates.add(cb.equal(path.get(MessageType_.group), messageGroup)); My CustomsOrder has a MessageType, this type belongs to a group and I would like to find a CustomsOrder by the group. Do u have an idea what can happen here? Tahnks in advance Hi Albert, the exception you mentioned is thrown when the path used to get the compared property is not correct. You should definitely try to create the predicate without using the static meta model. Does the following work or is the same exception thrown? predicates.add(cb.equal(root.get("messageType").get("group"), messageGroup)); Also, are you saying that CustomsOrder class has a property called messageType, and MessageType has a property called group? Hi Petri! I found the problem, after some hours checking the solution I have discovered that MessageType is an enumerator that is grouped by another enumerator that it is MessageGroup, as I didn't do this code I assumed both were regular classes. So when I was getting the MessageType I could not reach the MessageGroup.My finall solution is to obtain from MessageType all the messages that belong to a group and search by list of messages instead of group. If you think that another solution more elegant exists please make me know it. Code I have used: List list =getMessagesTypeByGroup(group); //obtain msg by group selected predicates.add(root.get(CustomsOrder_.messageType).in(list)); Thanks for your replying. Hi Albert, Good to see that you were able to solve your problem. Hi Petri, Thank you very much for your detailed article. I am trying to implement the below scenario in my project, but i dont see any distinct key word in spring JPA. I mean in below way. E.g- select c from customer customer table-- has firstname, lastname as columns. Now I need to pull data as below. select distinct c.lastname from customer c ; Is there anyway we can achieve it ? I mean using NamedQuery or Specifications. Thank you in advance. Hi Raghu, You have two options for implementing this: JPQL Example: lastNameIsLike(final String searchTerm) { SELECT DISTINCT p FROM Person p WHERE... Criteria API with Specification Builder: public class PersonSpecifications { public static Specification return new Specification () { personRoot, CriteriaQuery> query, CriteriaBuilder cb) { @Override public Predicate toPredicate(Root query.distinct(true); //Build Predicate } }; } } In your case, I would add the following method to the CustomerRepository interface (or whatever your repository interface is): @Query("SELECT DISTINCT c.lastName FROM Customer c") public List<String> findLastNames(); Hi Petri, The bunch of spring jpa tutorial is really worthful, i have tried @Query and Specification and QueryDSL also to implement my need, how ever not 100% success in it. I need to fetch first 200 rows from EMP table where emp_loc is distinct Table Structure : EMP ID - PK EMP NAME EMP LOCA public Predicate toPredicate(Root EmplRoot, CriteriaQuery query, CriteriaBuilder cb) { query.distinct(true); //What should be placed here as Predicate need to be returned //cb.??? } }; Thanks in advance Hi Kam, It was nice to hear that my blog posts have been useful to you. If you want to use criteria queries, the easiest way to achieve your goal is to use the pagination support of Spring Data JPA. Just set the page size to 200 and get the first page. Hi Petri, Really loved your way of explanation and thanks for sharing, I read the entire comments and was really useful. I have a below scenerio, how can we achieve conditions on one-to-many relation ship, below is predicate method for me. public static Specification hasRole(final String roleName) { return new Specification() { public Predicate toPredicate(Root root, CriteriaQuery query, CriteriaBuilder builder) { if (null != roleName) { Expression<List> roleExpression = root.get(User_.roles); // TODO: Need to do how can I join one-to-many relationship ? } return builder.conjunction(); } }; } Hi Dhana, thanks for your comment. It is nice to hear that you found this tutorial useful. If you are trying to fetch all users who have a certain role, you can do this by using the isMember() method of the CriteriaBuilder class: Predicate hasRole = builder.isMember(roleName, roleExpression); Also, check out Collections in JPQL and Criteria Queries. Hi Petri, thanks, I have issue here, roleNameis a String, but roleExpressionis Expression<List<Role>>. The generic bounded is expecting to pass Roleobject instead of roleName. Didn't find solution, here is the my code. public static Specification<User> hasRole(final String roleName) { return new Specification<User>() { public Predicate toPredicate(Root<User> root, CriteriaQuery> query, CriteriaBuilder builder) { if (null != roleName) { Expression<List<Role>> roleExpression = root .get(User_.roles); return builder.isMember(roleName, roleExpression); } return builder.conjunction(); } }; } List users = userRepository.findAll(where(isActiveUser()).and(hasAddress(address)).and(hasRole(roleName))); Hi Dhana, Is the Rolean enum? If it is, you have to change this line: return builder.isMember(roleName, roleExpression); To: return builder.isMember(Role.valueOf(roleName), roleExpression); Let me know if this works. Also, if some exception is thrown, it would be useful to know what it is. No Petri, Role is an entity which is mapped as one to many for the User. Hi Dhana, My bad. I totally missed the one to many relationship. You can use join for this purpose. Try the following code: //Roles is a list? ListJoin<User, Role> roleJoin = root.joinList(User_.roles); //Role name matches with the role name given as a parameter return builder.equal(roleJoin.<String>get(Role_.name), roleName); Thanks, it works now, appreciate your help. You are welcome. I am happy to hear that you solved your problem. How to join two Entities with Specification API? Check out a blog post titled: JPA Criteria API by samples – Part-II. It explains how you can create a join query, a fetch join query, and sub select join query. You can use the techniques described in that blog post when you implement the toPredicate()method of the Specificationinterface. I noticed than you cannot have the following 2 extensions simultaniously : (1) a custom extension of the JpaRepository (to introduce a new generic method for all repo) and (2) implements JpaSpecificationExecutor. If you try it you get exception when Spring builds your repo: Error creating bean with name 'pilotRepository': FactoryBean threw exception on object creation; nested exception ... Caused by: org.springframework.data.mapping.PropertyReferenceException: No property delete found for type pilot.business.model.Pilot In the preceeding comment David may have encountered the same problem ... Hi, I stumbled in the same type of problem. Do you have a solution for the problem in the meantime? TIA Follow these steps: JpaRepositoryand JpaSpecificationExecutorinterfaces in it. SimpleJpaRepositoryclass. Let me know if this solved your problem. Thanks a lot for the tutorial. I'm just getting started with Spring Data JPA and I was having a hard time getting around how to extend it to do more complicated queries. This was a huge help. Alex, It is great to hear that you liked this tutorial. Also, you might be interested to read my blog entry about Spring Data JPA and Querydsl. To be honest, the JPA Criteria API can be a bit hard to use if you have to create a really complicated query with it. Querydsl offers an solution to this problem. Hi Petri, Awesome post! Most appreciated. What is i want to match an id (for example) against a list of ids (List)? i can't seem to use isMember cause the list is no good. Hi Lev, It is nice to hear that you like this blog entry. About your problem, you can implement the in condition by using the in()method of the Expressioninterface. Here is a little example about this: works perfectly! thanks a lot You are welcome! I believe that "Before going in to the details, I will introduce the source code of my static *metal* model class" is really supposed to be "Before going in to the details, I will introduce the source code of my static *meta* model class", no? You are right. Fixed it. Thanks for pointing this out. Hi Petri, I have two tables, one is a Users table and the other one is a Settings table. Per every user i have a settings row (they user the same id as PK). The user has a created_date field and in the settings he has a number_of_days field which specifies the total days he had for his trial period (may vary from user to user). i wanna find all the users that passed their trial period but i need to get the created_date java Date object and number_of_days as an Integer object and do something like createdDate.addDays(numDays) and compare that with now's date. How can i get a Date object from root.get("createdDate")? and how can i reference another table from one specification and get number_of_days as an int from withing the same specification? Thanks a lot, Lev I have not done any calculations with dates by using the JPA criteria API. Actually, it seems that JPA 2.0 does not support arithmetic operations for date operands. I suggest that you change your domain model by following these steps: Dateproperty called lastDayOfTrialPeriodto the Settingsentity. The value of this property can be calculated when a new user is added to the system. Also, you can use the numberOfDaysproperty to calculate the value of this property to existing users. numberOfDaysproperty from the Settingsentity. This way you could get the information you need by using a very simple query. If you want to use the JPA Criteria API, you can use the following specification builder class (I assume that the Personentity has a property called settingsand that the relationship between the Personand Settingsentities is one-to-one): Did this solve your problem? Haha, this is actually what we ended up doing. Yeah this did to do the trick! thanks :) You are welcome! Hi Petri, thanx for your valuable article... I'm currently working on a DAO.findAll with "specific branch fetching" (on an entity tree, with more than 3 level of depth, any relation declared as lazy, and you only want to fetch some custom one), have you ever worked on this topic (not covered by your article... and not really mentionned in Spring Data... except if we include Root.JoinType.LEFT logic in Specification... which can be discussed... :), if yes any recommandation ? the only valuable post i've found is this one: which works perfectly for a findOne, but doesn't work for a findAll (duplicated entities fetched... :( ++ i42 Unfortunately I have not worked on this kind of structure before. Also, it is hard to give "good" answers without seeing the domain model and understanding what kind of query you want to create. I assume that you want to fetch a list of entities and use join to "overwrite" the lazy fetching. Am I correct? You added a link to a blog entry and told that it works perfectly when you fetch only one entity. I assume that when you try to get a list of entities, the list contains duplicate entities. Is this correct? If my assumption is correct, you might want to call the distinct(boolean distinct)method of the CriteriaQueryclass and set the value of the distinctmethod parameter to true. P.S. This is just a shot in the dark. Hi Petri, (and thx for your quick reply...) "you want to fetch a list of entities and use join to “overwrite” the lazy fetching. Am I correct?" yes "when you try to get a list of entities, the list contains duplicate entities. Is this correct?" yes "call the distinct(boolean distinct) method of the CriteriaQuery" already tried, this method is rejected by the DB telling us in return: -> com.sybase.jdbc3.jdbc.SybSQLException: TEXT, IMAGE, UNITEXT and off-row Java datatypes may not be selected as DISTINCT. your both previous assumptions were right Petri... to simplify, the model would be similar to something like: Class O[List<A> lA, List lZ] A[B b, C c] Z[X x, Y y], everything (one/many) constraintly declared as lazy. We want the DAO.findAllBranch method to be able to fetch branch with calls like findAllBranch(O.class, "lA.b") or findAllBranch(O.class, "lZ.y"), fetching then all O objets + only associated list + sub property. currently using join operator produces duplicates O instances for each A or Z instances... :( P.S: "A shot in the dark"... you mean (yeah, me to I love techno musik) :D No further comment Petri ? I have to confess that I forgot this comment. I am sorry about that. The information you gave changes things a bit because I thought that you were trying to get a list of objects from the "middle" of the entity tree. It seems that you want to select all O objects, a list of A objects and all properties of A (or the same stuff for Z). Am I correct? What is the search criteria which is used to select the O objects or do you simply want to get all O objects found from the database? Unfortunately I cannot give you an answer straight away. However, I could try to replicate this scenario and see if I can figure out an answer. In order to do this, I need to see the real domain model because otherwise I might create a solution which works for me but is totally useless to you (especially since it seems that you storing binary data to the database). Is this possible? Hey, no worries mate... to re-sum up the scene: - it's a findAll, - for a massive model (not postable, hundreds of entities... you can switch it to any one you want indeed, not really mattering...), - ALL relations declared as Lazy, - we have 100% configurable & generic Criteria building with Root.JoinType.LEFT (let's say, to simplify the case, no other use of Criteria than entities' joining) -> issue: method is returning duplicates entries with JPA - setDistinct(true) is refused by JDBC driver->DB command and (new info...), if I use 100% Hibernate processing, (through - (Session) getEntityManager().getDelegate(); - and then org.hibernate.Criteria.setResultTransformer( Criteria.DISTINCT_ROOT_ENTITY);) the findAll behaves correctly, no duplicates entries are return, result is fully consistent ! my temporary empirik conclusion are: JPA facade for building findAll is functionnaly under-efficient/functionnal than its delegated/Wrapped full Hibernate processing... so Petri, do you like challenges ? :) i42 It seems that I have to try to solve this puzzle. :) However, I am not sure if I can use any time to this before next weekend. I will keep you posted in any case. I have started working on this. I managed to reproduce this problem with H2 database as well. However, when I used the distinct(true)method, the criteria query returned correct results. This issue is discussed with more details in this Stack Overflow question. It seems that you should either continue using your current approach or use sub queries as suggested in the answer of the linked question. Thx a lot Petri for investing my problem... unfortunately the "distinct" call is refused by Sybase 15 (not allowed on Text/CLOB value), so I guess I'll keep my generic Hibernate delegated session implementation (using setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY)) which is 100% efficiently working... BTW, the link you've mentionned is the one which decided me previously to pass through JPA and use 100% Hibernate querying... Once again, thanx Petri for your concern and your help, all the best... Dimitri42 You are welcome! I agree that you should keep your current implementation because it is working for you. There is no point in trying to find another solution which might not exist. Hi Petri, this is an amazing help from your part, I've been following your blog and I've read most of your notes and well, definitely there is always something else to learn, and this is my case. This is similar to Raghu's case [July 31, 2012 at 4:47 am EDIT] about to distinct, except I had not chance to use it in my repository interface because precisely I had to work with Specification idea (filtering conditions in a dynamic query using OR operator -- but was resolved with your help) and I see that distinct is for all the table's fields and I just need to distinct some of them because business logic. I've read a lot notes from Sn Google without luck, so, that is the challenge, to use DISTINCT for some fields through Specification class. Is there any solution? And also is there a way to get the query built from Specification just to confirm what Specification object will send to the Repository for its execution. Any help I'd really appreciate. The best. /JRomero. I have to confess that I am not sure if this can be done with the JPA Criteria API. Do you need to get the full information of the entities or do you want to get only the information stored to the distinct fields? Hi Petri, is there a simple way of finding greatest size of one-to-many relationship using max function. for example, need to find one customer who has more orders done so far. i.e, customer.orders is one-to-many relationship. Hi Dhana, The first solution which came to my mind had these steps: Then I noticed the greatest()method of the CriteriaBuilderclass. Then I came up with a solution which uses the JPA Criteria API and Spring Data JPA (I have not tried it out so I have no idea if it works). The source code of my specification builder class looks as follows: Let me know if this did the trick. Liked your response Petri, thanks a lot making your blog being read very interesting. :) It didn't work somehow, I have got the following error and sql generated is SQL Error [42574]: expression not in aggregate or GROUP BY columns: CUSTOMER0_.ID expression not in aggregate or GROUP BY columns: CUSTOMER0_.ID It seems that you already solved the first part of the problem (no group by clause). I will move my other comments to your latest comment. I found the right sql query and need to design the speicification. Ref from: Like you already figured out, this problem can be solved by using the GROUP BY clause (check this StackOverflow question for more details about this). The CriteriaQuery class has a groupBy() method which can be used for this purpose. The problem is that I am not sure how this can be used in the original specification builder method. One option would be to to add this line to that method: query.groupBy(root.get("id")); The problem is that the "main" query does not use aggregate functions. I have to confess that I have no idea how this could work. I also tried to find some examples about the correct usage of the greatest()method but I had no luck. Do you have any other ideas? Resolved with a different solution, pls sugest if you have better way, wrote a specification on Order And my repository supports to execute the above specification which returns the following. Later do a find by id of this customer Id. sql generated is like this, Thank you for posting your solution. Unfortunately I have not found a solution for this yet. It is extremely hard to find examples about the correct usage of the greatest()method which is kind of weird because this is not an uncommon requirement. Maybe I should write a blog post about this. What do you think? How do i get to know when I fail to enter a duplicate database entry? I'm using Spring Data JPA with Hibernate. I have a class with a composite key mapped to a database table. When I perform a save operation using the JPARepository extended interface object, I see the following log in the console:=? Hibernate: insert into RoleFunctionality_Mapping (functionalityId, roleId) values (?, ?) This is what I see when i repeat the operation with the same data:=? It appears Spring Data first checks whether the Key exists in the database, and then proceeds to perform insertion. There should be a way to catch the information which hibernate has found (that the database entry/key exists in the database)? How can we check that? There should be some kind of information which would be possible to get from Spring that it would return/give in any other way to the application- if it is not going to go ahead with insertion in the event of a duplicate entry. (Spring makes a decision (based on some information) not to insert after finding an existing primary key.) The SimpleJpaRepositoryclass provides an implementation for the save()method. The source code of this method looks as follows: As we can see, the SimpleJpaRepositoryclass calls the persist()method of the EntityManagerclass if the id of the entity is null. If the id is not null, the merge()method of the EntityManagerclass is called. In other words, if the id of the saved entity is not null, the entity is not persisted. It is merged into the current persistence context. Here is a nice blog post which describes the difference. In your case, this is what happens: save()method for the first time, Hibernate checks if an entity exists. Because it does not exists, it is inserted to the database. save()method for the second time, Spring Data JPA notices that the id is not null. Thus, it tries to merge the entity into the current persistence context. Nothing "happens" because the information of the detached entity is equal than the information of the persisted entity. Of course, you can always handle this in the service layer. This is the approach which I use when I want to verify that unique constraints aren't broken (I don't use this for primary keys though. I am happy with the way Spring Data JPA takes care of this). NPE while accessing the getter method for a field annotated with @ManyToMany annotation. Pls. find the outline of the sample code below. Repository & Service layer were the default implementation provided by Spring using Spring Roo commands.. I am a newbie and could you pls. help me what I am doing wrong ? class A { ... @ManyToMany( CASCADE.ALL, FetchType.LAZY) List<B> b; } class B { } interface ARepository { List<A> findAllAs(); } class AService { List<A> findAllAs() { return aRepository.findAllAs()} } Application code: List<A> aList = aService.findAllAs(); for (A a : aList) { for (B b : a.getB()) { <---- Results in NPE { ... Take a look at this blog post. It describes how you can create a many-to-many relationship with JPA. Hi Petri, After looking at your blog and reading the documentation, I think it might be a good idea to show how to make joins and concat specifications. Here is my approach: Specification filterSpec = null; // query by lastname in the post. I parse filters from jqGrid, out of scope. Specification joinSpec = new Specification() { @Override public Predicate toPredicate(Root root, CriteriaQuery query, CriteriaBuilder cb) { Join join = root.join(Transaction_.parentCard, JoinType.INNER); return cb.equal(join.get(Card_.id), idFromRequest); } }; myRepo.findAll(Specifications.where(joinSpec).and(filterSpec), pageable); Hope you find it useful. Regards. Hi Pedro, thanks for sharing this. I think that it will be useful to my readers. I am actually a bit surprised that there are so few good tutorials about the JPA Criteria API. hello thanks, thats a lot of good information. I am using the hibernateJpaProvider, I am got the basic app working, now I am trying to read the SEQUENCE, in a query, how to do it @Query("select party_id_seq.nextval from dual") double findNextSeq4PartyId(); but I am getting nested exception is java.lang.IllegalArgumentException: org.hibernate.hql.internal.ast.QuerySyntaxException: dual is not mapped If you want to create a native query by using a query method annotated with the @Queryannotation, you have to set the value of its nativeQueryattribute to true. In other words, you have to add the following method to your repository interface: If you are trying the get next available id for the private key of an entity, you should use the @GeneratedValueannotation because it makes your code a lot cleaner. Is it possible to use @GeneratedValue in case of composite (@Embeddable) ids ? What is the best practice in this case ? Check out this StackOverflow answer. It describes how you can use the @GeneratedValueannotation inside embedded composite ids. However, there is another option as well. You can use the @IdClassannotation. The difference between the @EmbeddedIdand @IdClassannotations is explained in these StackOverflow questions: @IdClassor @EmbeddedId @IdClassor @EmbeddedIdimplementations and why? I hope that this answered to your question. Great set of articles! I've been living in the world of NamedQueries in my Entities, and more complex Native queries defined as Autowired Strings (becoming difficult to manage). This tutorial is just what I need to move beyond queries and dive into the Criteria API. My question might be a bit naive, but what I don't understand or see is how your service implementation resolves your custom Specification methods ( lastNameIsLike() ). I put together a rather simple scenario in my project and I am unable to resolve these methods. Example Code: This service method does not resolved my Spec method isInJurisdiction: Hi Dex, I assume that you get a compilation error because the static isInJurisdiction()method of the AgencySpecificationsclass is not found? I noticed that I had forgot to add one crucial static import to the source code of the RepositoryPersonServiceclass which is found from this page (the example is changed and doesn't have this class anymore). I added that import to the source code found from this page as well. Let's assume that the AgencySpecificationsclass is found from package foo. If this is the case, you have to add the following static import to your service class: This should solve your problem. Again, I am sorry for the trouble I caused to you. Yes, I completely missed that. Again great job, and keep up the great work. Thanks! If you have any other questions in the future, I am happy to help you out. Hi petri , you are doing a great job , the post is very useful thank you . I 'm using spring Data and i have a problem with duplicate entries : this is a code sample : the join is returning duplicate result is there a way to apply DISTINCT to this Criteria. thank you You can use the distinct(boolean distinct)method of the CriteriaQueryclass for this purpose. Hi petri, your tutorial help me a lot, thanks. Actually I have a question, how to select some columns, using specification? It is like DTO in your tutorial part 3. Because I don't want to select all columns from table. Thanks in advance. I am not sure if it possible to select only a few columns by using Spring Data JPA specifications. Also, I searched the answer from Google but I didn't find any solutions to this problem. However, it is possible to select specific columns by using Hibernate criteria queries. If you decide to follow this approach, you need to add a custom method to your repository and implement the query in that method. Hello petri, first thanks for your blog, im new in jpa data but have successfully followed the step of the part four, it works when but i'm struggling to add two specifcations like example: statusIsLike, skillLike projectRepostory.findAll(where(statusIsLike("done")).and(skillLike("manager"); Thanks in advance, Hi Yvau, If you take a look at the Javadocs of the Specification<T>and JpaSpecificationExecutor<T>interfaces, you can see that you cannot combine Specification<T>objects in that manner. I would probably add a new method to the specification builder class and return a Specification<T>object that contains all the required conditions. Hello petri, To solve it, I was passing the data as object from my form like final SearchDTO searchTerminstead of final String searchTermand customizing everything with if else statement,in one specification. But now i got it, i'll try your approach. Thank you for your response ! You are welcome! Actually, I would also wrap the search conditions into a DTO and then pass this object to my specification builder method. This makes sense if you have to support more than a few search conditions (especially if these conditions have the same type). Let me know if you have any other questions. Hello Petri, Nice article.I have used above information to use specification in my project. Currently facing one issue with it. Is it possible to tranform Specification of type T to specification of Type T1. Here T and T1 having same type of attributes and I am using it for having seperate DTO for different layer (Business layer / Persistence layer). So at business layer I am getting specification. Now to tranfer it to persistence layer I need to have Specification. So is it possible to transform it from one type to another type (having same attribute applied in specification) ? Thanks in advance I think that it is not possible to transform a Specification<A>object into a Specification<B>object (or at least I don't know how to do this). Maybe I don't understand your use case, but you shouldn't have to do this. Remember that the type parameter, which is given when you create a new Specification<T>object, describes the type of the entity. Thus, you should create only one specification that specifies the invoked query by using the JPA Criteria API. Hi Petri, Question about the StaticMetaModel, in this case you're just using the lastName attribute of the Person class. But say you wanted to build a query based on all attributes of the Person class (say age, firstname, lastname, location etc.) Would using this StaticMetaModel combined with a JpaSpecifcationExecutor/JpaRepository be a good way to acheive this level of filtering? Thanks, Paul Hi Paul, It depends. Although you can use property names as well, the problem is that you notice your mistakes (typos, missing properties, and so on) at runtime. If you use static model classes, you will notice these mistakes at compile time. This is of course a lot faster than running your application just to realize that it doesn't work. I think that if you are going to write only a few criteria queries AND you want to make your build as fast as possible, you could consider using property names. If you need to write many criteria queries, you should generate static meta model classes when you project is compiled and use these classes when you create your queries. By the way, have you considered using Querydsl? I am not a big fan of the JPA Criteria API because complex queries tend to be extremely hard to write and read. Querydsl provides a bit better user experience. First, many thanks for this highly informative series! I have a question regarding filtering/searching. If one needs to obtain a list of an entity type filtered by multiple (optional) elements (say in the case of a Person, lastNameStartsWith, yearOfBirth, gender, etc...), what's the best approach? I shall also need to perform sorting and pagination on this list... Many Thanks! Hi Anthony, thank you for your kind words! I really appreciate them. Does optional mean that the condition might or might be present? For example, do you need to find persons whose: If so, I recommend that you use either JPA Criteria API or Querydsl. I would use Querydsl just because I don't a big fan of the JPA Criteria API, but you can definitely use it if you don't want to add another dependency to your project. Hi Petri, I want to say that is amazing tutorial but i have some problems with jpa criteria api and I want your help. I have installed eclipse mars and also mysql. I made the connection between them and generate entities from the tables that I had created in mysql. I also installed JPA but the problems stay when I start to create criteria api. I have put some criteria api code and it doesn't execute due to the libraries or something else.But when I put the right libraries and the code don't have any problems it is unable to execute and a Failure trace message. Could you help me what to do to resolve the problem and execute criteria api. Hi Ani, Unfortunately I don't know what is wrong without seeing the stack trace and the code that throws an exception. If you can add this information here, I can try to figure out what is wrong. Stack Trace give me a failure trace java.lang.NullPointerException at ani.CriteriaApi.test(CriteriaApi.java:25) I will appreciate if you give me a solution for this problem as soon as possible. A NullPointerExceptionis thrown because the value of the emis null. You need to create an EntityManagerobject and inject it to the emfield before you can write tests that use it. If you are using Spring Data JPA, check out my blog post titled: Spring Data JPA Tutorial: Integration Testing. It explains how you can write integration tests for your Spring powered repositories. If you are using Java EE, you should take a look at Arquillian. I have never used it myself, but you can write tests for your persistence layer by using Arquillian. Hell Petri, I have another issue if you could help me. I don't know how to add data with criteria api. Hi Ani, You cannot add or update data with criteria API. You need to use either Spring Data repositories or the methods provided by the entity manager (if you use Java EE). How? For example I need to add a person's data in a table like first name, last name. If you are using Spring Data JPA, read this blog post. If you are using Java EE, read this blog post. Hi Petri I want to fetch some records from the table. But am getting the record only if all the selected column is not null otherwise, that row is not returning in the final result. @Query(Select le.id,le.name,le.address.no from Ent le) If i execute the above query, it will return the result only if all the three columns are not null. If le.address is null in table , it is getting skip and not fetching while am executing. Can you please tell me, how can i fetch selected column from table, even if it is null Thanks Hi Clement, does your query method return an entity object or an object array? Hi Petri Returning the result with entity object only. But am not selecting the whole entity object, trying to fetch selected columns only. If the column has null value, that record is not getting fetch Please provide any solution if you have any I tried to reproduce your problem, but I ran into a different problem. My query method looks as follows: When I invoke it, the ConversionFailedExceptionis thrown (even if the titleand descriptionfields are not null): org.springframework.core.convert.ConversionFailedException: Failed to convert from type java.lang.Object[] to type net.petrikainulainen.springdata.jpa.todo.Todo for value '{title, description}'; nested exception is org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type java.lang.String to type net.petrikainulainen.springdata.jpa.todo.Todo There two solutions to this problem: I hope that this answered to your question. Hi petri Thanks for your reply. I do have an option by bindind the result in generic object. But that way i should not follow. i have to bind the result to corresponding pojo But a DTO is a POJO, right? If you want to return an entity, you can add a similar constructor to your entity class as well, but I think that it is a bad idea. If remember correctly, the returned object is not registered to the entity manager => you cannot save or update it, lazy loading doesn't work, and so on. If I would be you, I would return a DTO. Is there some reason why you cannot do it? Hi Petri I can return a DTO . Am not facing any problem in getting result. The problem is, am not getting the exact result set count. Because JPA skips the record when i am trying to select columns from the entity For Eg : Select le.id,le.name,le.address from personal le. In the above query, le.id and le.name is a string in that personal entity But le.address is in manyToOne relation in personal entity. So , if am trying to execute the query, JPA returns the records only if le.address is not null, If it is null, JPA skips the record in the final result set. Finally, I want all the records including null and not null of le.address Thanks Petri Ah, that explains it. I couldn't figure out what is wrong because I thought that you are trying to select the fields of the Personalentity. However, if the addressfield contains another entity, you should use LEFT JOIN. These links describe how you can use left joins in your JPQL queries: I hope that this helps you to solve your problem. Hi Petri Thanks for your immediate answer. I am using left join only . Please find the exact query that am using below. @Query("SELECT NEW com.test.las.domain.reports.testPojo(le.id,le.a, le.k, le.b, " + "le.c.id, le.d.id) from pojo1 le LEFT JOIN le.pojo2 ge order by le.id") In the above query , if le.c is null, am not getting that corresponding row.. but am getting all other rows as well Thanks It's a bit hard to provide an exact answer because I don't know which of the selected values are fields of the original entity and which are other entities. However, there are two rules that you should follow: Example: I need to select the following values: titlefield of the Todoclass. namefield of the Personclass. The person who created the todo entry is stored to the creatorfield of the Todoclass. If want to create a query method that returns the title of the todo entry even if the creator is null, I have to use the following query method: Hi Petri SELECT le.a,le.b from table le Please note that, query will return all the result even if le.a or le.b has null values because both are defined as string in their own entity. But if Select le.a,le.b,le.address.name from table le In this scenario, le.address is an object and if it is null, It will skip the records that having address as null. Only will get the address as not null. I want all the result to be fetched Thanks Hi Clement, If the Addressobject is an entity, you can solve your problem by following the instructions given in this comment. Is the Addressan entity or an @Embeddablevalue object? Hi Petri Thanks for the reply.. Address is an entity only in my scenario. And in the example that you mentioned, if p.name is an object (ManytoOne) in ToDo, Row wil be skipped during execution of query. Actually LEFT JOINincludes todo entries that don't have a creator: Hi Petri I have tried in all the way but still i couldn't get all the rows as expected :-( I modified your original query to use LEFT JOIN. The modified query looks as follows: This should return the wanted result (and not ignore rows that don't have address information). Are you by any chance sorting the query results of your query? Also, which JPA provider are you using? Hi Petri Thanks for the information. I tried exactly like this but still am not getting the exact result. And am using order by le.id The reason why I asked about sorting is that if you would have sorted your query results by using a property of an entity that can be null, you would have faced this problem. However, since you don't do it, I have to admit that I don't know what is wrong. :( Hi petri Thanks a lot for your help . I will also try . If i found anything i will let you know. Thanks!!!! You are welcome. I hope that you find a solution to your problem! Hi Petri I found the solution for my scenario. Previously i was using like select: In the above query , if address is null , i couldnt get the complete result. So i changed the query like select: In POJO: So, now its fetching all the results as expected. Thanks for your try petri for me!! :-) Update: I modified the package name because it identified the end customer. - Petri Great work! Also, thank you for posting the solution on my blog. I think that I will write some blog posts that talk about these "special" scenarios and describe how you can solve them. Hi Petri Can you please tell me .. Is there any other way to handle this scenario in better way I am trying to execute findAll() and am getting the result as List<B> totalList; i want to set it to List<C> without iterating in for loop because i want to give parameter of List<C> to .save method Hi Petri, Thanks for an awesome article. Successfully followed it to introduce Specifications for my Project. I am having an issue though for fetching data from Multiple table. Suppose I have 3 tables.. Request_Details, Customer_Org_Details, Address_Details Request_Details is primary table and it stores primary keys of Customer_Org_Details and Address_Details table as foreign key in two different columns to maintain relations. I have Delivery Address city, Customer Org name and request ID to fetch data. So I need to get data from three tables. I am unable to make this join using Specification. Could you pls help. Thanks, Mayank. Could you add your entity classes into a new comment? The reason why ask this is that it's impossible to answer to your question without seeing those classes. Also, I only need to see what kind of relationship the RequestDetailsentity has with the CustomerOrgDetailsand AddressDetailsentities. Hi Petri, Thanks for quick response.. Please find below drive url for seeing images of table structure: As of now I am fetching value from one table and filtering the records at client side in backend for filter values from other table (Like filter on the basis of city name for requests). Code for Specification: Requirement is to have specifications for joins as well for these three related tables. Insights appreciated. Regards, Mayank Porwal. You can combine multiple specifications by using the Specifications<T>class. If you have two specifications ( specAand specB), and you want to get entities that fulfil both of them, you can use the following code: I hope that this answered to your question. Also, if you have any other questions, do not hesitate to ask them! Hi Petri, I am already ady using "and" "or" for combining specs, but on one table columns only.. Is it allowed for specs from multiple tables as well.? I do this when suppose I need filter on request desired date and sender.. But if I hv delivery city which is another table how to get requests for that.. Yes (and no). You can combine multiple Specification<A>objects, but you cannot combine Specification<A>and Specification<B>objects. I will demonstrate this by using an example. Let's assume that RequestDetailsand its has a one-to-one relationship with the DeliveryAddressentity. senderReferenceis 'XXX' and cityis 'Atlanta'. The specification builder class that builds these individual specifications looks as follows: (I replaced the static meta model with strings because it makes this example a bit easier to read). You can now combine these specifications by using the following code: In other words, if you can navigate from the Root<RequestDetails>object to the preferred entity, you can create a Specification<RequestDetails>object and combine it with other Specification<RequestDetails>objects. Again, if you have some questions, feel free to ask them. Hi, I have an issue in fetching the associated records using Spring Data JPA Suppose my repository is as below Entity as below I will get the list of Person and and how will i get the addresses ? Question is on how to fetch the associated table details Do you mean that "extra" SQL queries are invoked when you try to access the address of a person? If so, you might want to use a fetch join ( LEFT JOIN FETCH). However, before you do that, you should read this blog post: Fetch Joins and whether we should use them. Hi, The blog is simple and straight with syntax both in jpql and criteria api. SQL SERVER QUERY --------------------- * from employee jj inner join ( max(join_date) as jdate, empltype as emptype from employee where empltype='clerk' group by empltype ) mm on jj.join_date=mm.jdate and jj.empltype=mm.emptype; JPQL : ------- em.createQuery("Select e from EMPLOYEE e join e.empltype=:c.empltype,MAX(c.joindate) from EMPLOYEE c where c.emplytpe like :empltype GROUP BY c.empltype e1 ON e.empltype=e1.empltype AND e.joindate=e1.joindate") However i am stuck up in achieving the following functionality using both jpql and criteria. Throwing an exception unexpepected token:=near line1,column 75[ ] Any inputs are really appreciated.. Hi, If I remember correctly, JPQL does not have the ONkeyword. In other words, you can join two entities only if they have a relationship. I think that your best bet is to use a native SQL query. I am Passing java.sql.date in my method and in my entity column contain Time stamp , now i want to find by only by date in my specification how can i do this . below code snippet is not working public static Specification hasDesiredDeliveryDate(java.sql.Date desiredDeliveryDate) { return (root, query, cb) -> { return cb.equal( root.get("productIndividual").get("deliveryPoint").get("desiredDeliveryDate").as(java.sql.Timestamp.class), desiredDeliveryDate); }; } How to use order by in specifications for example I am writing my specification like this Specifications.where(getPositionSpecification(filterDTO.getPositions())) .and(getUserSpecification(filterDTO.getUsers())).and(getDateBetween(filterDTO)); then how can I apply orderBy here ?? Hi, I have written a blog post that explains how you can sort your query results with Spring Data JPA (search for the title: 'Sorting the Query Results of JPA Criteria Queries'). Hello Petri, Hope you are doing well. Thanks for the blog it's really helpful for me. I am stuck with one issue can you please help me with it. I want to generate dynamic conditonal filter for spring data jpa with feild "notificationId" but my dto has list for normal dtos without list i am able to create it by following your blog i.e is done for "email" and all. can you please help me with it. I am getting null.userGcmData do i need to use builder.like or some other method if so can you give me syntax for it if possible. Thanks in advance Update: I removed the irrelevant parts of code listing since it was quite long - Petri Hi, The problem is that you cannot access a collection property by using the get()method. You need to use join for this purpose. You can fix this problem by replacing your return statement with this one: This blog post provides more information about joins: JPA Criteria API by samples – Part-II. Hi Petri.. Thanks a lot for this series.. Its really helping.. Need one help.. Suppose I have 3 tables from which I need some.. Containing something like below structures: I tried writing specification as given below: However, I get below exception: org.springframework.dao.InvalidDataAccessApiUsageException: Illegal attempt to dereference path source [null.Series] of basic type; nested exception is java.lang.IllegalStateException: Illegal attempt to dereference path source Kindly help... I am stuck here big time. Regards, Mayank Was able to do it by reading earlier comments.. Using join.. :) Thanks anyways.. Regards, Mayank Hi, It's good to hear that you were able to solve your problem! Hi petri, Ran into another issue with the above query... If I use join, I get Duplicate records due to OneToMany relationship, so i used query.distinct(true), as suggested by you in some of the solutions earlier. However, that gives me error as I have one column as CLOB type in my parent class and seems CLOB datatypes are not supported with Distinct comparisons. Can u suggest something. Regards, Mayank Hi petri... Was able to create oracle query w/o Distinct keyword.. However having issue now in converting to Specification.. Can u help. Below is the) Hi Mayank, Check out this StackOverflow question. It should help you to solve your problem. If not, let me know. Hi Petri, First of all, Thanks a lot for all your help. Tried replicating the resolution suggested by you earlier. However not able to... Some insights on what I am doing wrong will be really helpful: My table structure: STG_PRODEF_REQUEST Table: __________________________________________________________________ public class JpaProDefRequest { @NotNull @Id @Column(name="ID") private Long id; @NotNull @Column(name="REQUEST_XML") @Lob private String proDefVersionRequestXML; @OneToMany(mappedBy = "orderId") private List proDefProductTypeReqs; ____________________________________________________________________ PRODEF_PROD_TYPE_REQ Table: ____________________________________________________________________ @Id @Column(name="ID") private Long id; @NotNull @ManyToOne(fetch = FetchType.LAZY, cascade=CascadeType.ALL) @JoinColumn(name="PRODEF_REQUEST_ID") private JpaProDefRequest orderId; @OneToOne(mappedBy = "individualId") private JpaProDefServiceResponse serviceResponse; ______________________________________________________________________ STG_PRODEF_RESPONSE Table: ______________________________________________________________________ @NotNull @Id private Long id; @Nullable @OneToOne(fetch = FetchType.LAZY, cascade=CascadeType.ALL) @JoinColumn(name="PRODEF_INDIVIDUAL_ID") private JpaProDefProductTypeReq individualId; @Nullable @Column(name = "ERROR_CODE") private String errorCode; @Nullable @Column(name = "ERROR_DESC") private String errorDescription; ______________________________________________________________________ Need to fetch responses for below) ____________________________________________________________________________ I tried: public static Specification hasA() { return new Specification() { @Override public javax.persistence.criteria.Predicate toPredicate(Root jpaProDefRequestRoot, CriteriaQuery criteriaQuery, CriteriaBuilder cb) { CriteriaQuery query = cb.createQuery(JpaProDefRequest.class); Root proDefRequestRoot = query.from(JpaProDefRequest.class); Subquery sq = query.subquery(Long.class); Root request = sq.from(JpaProDefProductTypeReq.class); Join proDefProductTypeReqs = request.join("proDefProductTypeReqs"); sq.select(proDefProductTypeReqs.get(JpaProDefRequest_.id)).where( cb.isNotNull(request.get(JpaProDefProductTypeReq_.serviceResponse).get("errorCode"))); return query.select(proDefRequestRoot).where( cb.in(proDefRequestRoot.get(JpaProDefRequest_.id)).value(sq)); } }; } However doesn't work. Regards, Mayank. Hi Mayank, Do you get an error message or does the query return wrong results? The reason why I ask this is that it is kind of hard to figure this out when I cannot run the actual code. If you get an error message, that could help me to find the problem. Hi Petri, Somehow was able to get it work. I was not using sub query properly. Realized how naive I am in Specifications. Thanks for the support. Regards, Mayank. Hi Mayank, You are welcome! I am happy to hear that were able to solve your problem. Hi There , Can you please do a tutorial by integrating the Spring Data JPA with Spring Boot and Mysql ?? I seem to have issues when i use Mysql instead of H2 Db. Hi, What kind of issues are you having? Hello Petri, Very good job with this site, I always end up here when I'm looking up for info about Spring and JPA. :) I want to ask you something. I see in your examples you make use of predicates in the service layer, plus you expose in the repositories the functions from JpaSpecificationExecutor. Is it really OK in a real world application to expose these signatures in your repository interface and making use of predicates in the service layer when you expect to have multiple implementations of both layers in a Spring app? Thanks :) Hi Santiago, Thank you for your kind words. I really appreciate them. If you have multiple repository implementations, you cannot naturally expose implementation details to the service layer. However, if I have only one repository implementation (right now), I always create the predicates on the service layer. The reason for this is that it's a lot simpler and I can always hide it behind an additional interface if I need to create a second repository implementation (this is very rare). Hi Petri, Your previous posts on QueryDSL solved many of my requirements, now am stuck up with sub query using Specification to satisft one requirement,tried couple of ways to achieve, but no luck. I need to fetch the one record of employee role from empl_tbl whose JOINING DATE is the latest one. SQL-QUERY -------------------- select * from EMPL_TABLE WHERE EMPL_ROLE='MGR' and JOIN_DATE= (select max(JOIN_DATE) as datecol from EMPL_TABLE WHERE EMPL_ID='MGR') Entity -- EmpTableEntity How can i transform the above query using specifications. Appreciate your response. Thanks Sam Hi Sam, This StackOverflow answer describes how you can create a specification that uses subqueries. I'm working in a project that i need create a Specification for this query: "select l.* from lead as l where l.id in (select t.lead_id from telefone as t where t.lead_id = l.id);" The entity Phone has ManyToOne relationship for Lead. But i'm having trouble for create a subquery for select column "lead_id" with join. Someone can help me please? Hi, Your query looks pretty similar as the query which I found from this StackOverflow answer. The key is to create the subquery (check the link) and use it when you specify the IN condition by invoking the in()method of the CriteriaBuilderinterface. If you have any additional questions, don't hesitate to ask them. Hi, Help required, I got stuck with the below exception. Am getting this exception while executing the below piece of code from my junit test case contractRepository.findAll(ContractSpecification.searchByContractId("XXXX")); i have my specification smehting like below java.lang.ClassCastException: javax.persistence.criteria.$Impl_Predicate cannot be cast to org.hibernate.jpa.criteria.Renderable at org.hibernate.jpa.criteria.QueryStructure.render(QueryStructure.java:262) at org.hibernate.jpa.criteria.CriteriaQueryImpl.interpret(CriteriaQueryImpl.java:312) at org.hibernate.jpa.criteria.compile.CriteriaCompiler.compile(CriteriaCompiler.java:147) at org.hibernate.jpa.spi.AbstractEntityManagerImpl.createQuery(AbstractEntityManagerImpl.java:736) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.lang.reflect.Method.invoke(Method.java:498) Update: I removed the irrelevant part of this stack trace - Petri Hi, That looks really weird (I have never seen this problem myself). What Hibernate version are you using? I prefer to public Page findAll(SearchRequest request) { Specification specification = new Specifications() .eq(StringUtils.isNotBlank(request.getName()), "name", request.getName()) .gt(Objects.nonNull(request.getAge()), "age", 18) .between("birthday", new Range(new Date(), new Date())) .like("nickName", "%og%", "%me") .build(); return personRepository.findAll(specification, new PageRequest(0, 15)); } Hi, That looks pretty slick! Do you have any plans to add support for JPA static metamodel classes? org.springframework.data.jpa.repository.JpaSpecificationExecutor.findOne(Specification) is not working as excepted. It is returning more than single record. The default implementation of that method (check the SimpleJpaRepositoryclass) invokes the getSingleResult()method of the TypedQueryinterface. This method throws a NonUniqueResultExceptionif your query returns more than one result. If you get this exception, the implementation is working as "expected" (although its documentation could be improved). That being said, it is impossible to say what is wrong since you didn't share the problematic code. Yes, I am getting same error as you have mentioned, but if we try to judge the working of 'findOne()' method by its name, then it should have returned Single record by enforcing 'Limit 1' in the query(in case of having multiple records having same filter criteria). Suppose if we want to fetch single record based on 2 key details(having composite key of 3 fields). I am passing 'nul' values for the 3rd key: eg: Above statement throws a NonUniqueResultException. What I have to do in order to fetch the single record, is to create a custom method for this purpose and fetching the first record using findAll(spec): This has increased some look-head in my code. Hi, You can also use the method that takes a Pageableobject as a method parameter and set the size of the returned page to one. The downside of this technique is that you have to remember to create the Pageableobject => You probably still need a utility method. If you decide to use this technique, check out my blog post that explains how you can paginate your query results with Spring Data JPA. By the way, if you think that the method is not working correctly, you should create a new Jira issue (I cannot do anything about it since I am not a member of the Spring Data team). Thanks for the help Petri! But, I have to stick to the solution that i have mentioned above due to application requirement. Regards, Akansha You are welcome. :) Hi Patri! I have one more query, I was using Specification with DB2 and found one issue with 'ORDER Petri, I need to execute something like below through Specification and Criteria Builder: Select * from P INNER JOIN F ON P.FID = F.ID where (F.XXX = 'ABC') OR ( F.XXX = 'DEF' AND F.XXX = 'JKL' ); However I am not getting successful since last OR condition is dynamic (i.e. I can have more than 1 AND Conditions as also shown above) . Can you suggest something. ? Hi, You can use the and()and or()methods of the CriteriaBuilderinterface. Here is a simple pseudo code that demonstrates how this works: If you have any additional questions, don't hesitate to ask them. Hi Patri! I have one more query, I was using Specification with DB2 and found one issue with ‘ORDER, You could try to specify the ORDER BYcondition by using the Sortclass. If you don't know how to do it, take a look at this blog post. Thanks! You are welcome. Hi Petri, Thanks alot for your tutorial on spring data. I have one query in service implementation class: This is your service implementation My question is : in method : public List findBySearchTerm(String searchTerm) , instead of passing searchTerm of String type, if I pass "TodoDTO dto" like shown below : and there are many number of fields available in TodoDTO class, suppose 20 number of fields are available. whenever user sending the POST request then he is sending value for 8 fields only i.e in Request Body , he is providing the value of only 8 fields in JSON format. and he wants to retrieve all the records from database where these 8 fields conditions are satisfying. Then how will i write serviceImplmentation for this method because the value of other 12 fields are null as user has been provided the value for only 8 fields.??? and 8 number is not fixed , it could be anything like 5,6,7,9,10. public List findBySearchTerm(TodoDTO dto){ ? ? ? } and how will i write implementation of Specification class. ???Please help me in this :) Hi, You can indeed use a DTO instead of a search term. In fact, this is a quite common practice when you need to use multiple search conditions. So, if you need multiple search conditions, your specification factory class could look as follows: The key of implementing the toPredicate()method is to leverage the and()and/or or()methods of the CriteriaBuilderclass. Basically, you have to go through all fields of your DTO and apply this pseudo code to every field: Predicateby using the CriteriaBuilderclass. Predicatewith the previous Predicate(Use either the and()or or()method of the CriteriaBuilderclass). Can you understand what I am trying to say? Hi, Can you please tell me step by step process to generate Metamodel Classes in eclipse. Hi, Unfortunately I don't use Eclipse => I have no idea how you can generate metamodel classes with it. That being said, I found this blog post that explains how you can do it. Thanks for this great series on Spring Data JPA. Just a couple things... Is there any way to remove irrelevant comments from this page? Some are referring to Person objects so I guess the tutorial has changed over time. Also, the last code example is missing the import for the Specification class. Finally, as a personal preference, if I may, I would suggest not using static imports, especially in this example. The method is only invoked in one place but besides that, I think it is more readable to invoke the method by referring to the class where it is implemented (TodoSpecifications.) - this makes it immediately obvious that the method is invoked statically on some other class - not locally or in some super class. Hi, First, thank you for your comment. It's possible to remove old (or new comments). The reason (or maybe an excuse) why I haven't done this is time. This blog post has now 5200 comments and it's "impossible" to read them all because I maintain this blog on my free time. On the other hand, now that you mentioned it, maybe I could clean at least the comments of my most popular blog posts because it's quite frustrating to read irrelevant comments. This is now fixed. Thank you for reporting this. Yes, I have noticed that some hate static imports and some people love them. I don't actually use them very often, but in this particular case, I think that the name of the specification class doesn't add relevant information to my code. On the other hand, if I use a static import, I can emphasize the the search criteria that is used by the created specification. But like you said, this is a matter of preference and it's fine to decide that you don't want to use static imports. I have the following entity structure: | ParentObject| +-------------+ ^ | +-------+---------+----------------+ | | | +-----------+ +-----------+ +-----------+ | Son1 | | Son2 | | Son3 | +-----------+ +-----------+ +-----------+ What I want is to get all the Son2 and Son3 that have an attribute that doesn't exist on Son1, the inheritance is strategy is SINGLE_TABLE with a DiscriminatorColumn What I did so far: class ParentObjectPredicat public static Specification inNaturesSon2(List natures) { return (root, query, cb) -> { final Root son2Root = cb.treat(root, Son2.class); return son2Root.get(Constants.NATURE).in(natures); } } public static Specification inNaturesSon3(List natures) { return (root, query, cb) -> { final Root son3Root = cb.treat(root, Son3.class); return son3Root.get(Constants.NATURE).in(natures); } } SpecSon2 son2Spec = ParentObjectPredicat.inNaturesSon2(natures); SpecSon3 son3Spec = ParentObjectPredicat.inNaturesSon3(natures); Specification specifications = Specification.where(son2Spec).and(son3Spec); Iterable listOfSons = this.parentObjectRepository.findAll(specifications); What I got as result: org.springframework.dao.InvalidDataAccessApiUsageException: Illegal attempt to dereference path source [treat(null as mypackage.Son2).nature] of basic type; nested exception is java.lang.IllegalStateException: Illegal attempt to dereference path source [treat(null as mypackage.Son2).nature] of basic type From my understanding treat() is used to resolve subclass. Any suggestions on how to do this? Hi, Unfortunately you cannot write your query by using JPA because JPA doesn't allow you to use attributes which aren't found from all child classes if you want to "return" super class objects. As far as I know, you have two options: Son2and Son3objects and combine the query results in Java code. If you have any additional questions, don't hesitate to ask them. Thnx a lot for your answer, this is what i taught but i wasn't sure, thnx again Sephi Hi, you are welcome. //I want to find who are all entered first post comment with in an hour after their account created Path accountCreatedTime = root. get("AccountCreatedTime"); Path FirstPostCreatedTime = root. get("FirstPostCreatedTime"); final Predicate timeInHourPredicate = criteriaBuilder .greaterThanOrEqualTo(accountCreatedTime, FirstPostCreatedTime); Example, 1. Account Created at: 2018-SEP-10 at 10am and First Post entered 2018-SEP-10 at 10.15 am this recond should be fetched. (FALLS IN AN HOUR) 2. Account Created at: 2018-SEP-10 at 10am and First Post entered 2018-SEP-10 at 3.50 pm this SHOULD NOT be fetched. Is there any way to add or separate hours from Path accountCreatedTime? or can we get difference between Path accountCreatedTime and Path FirstPostCreatedTime in hours and in criteriaBuilder Hi, To be honest, I haven't used the JPA criteria API for several years, and I don't remember if it's possible to do the things you need. However, I did quick Google search and found out that some implementations allow you to extract date and time parts. I recommend that you use SQL for this query. If you have to use the JPA Criteria API, I recommend that you contact Thorben Janssen. I am sure that he can answer to your questions.
https://www.petrikainulainen.net/programming/spring-framework/spring-data-jpa-tutorial-part-four-jpa-criteria-queries/?replytocom=28470
CC-MAIN-2021-49
refinedweb
15,562
56.66
Hello Friends, In this tutorial we are going to discuss Static Vs Non Static Keyword in C#. Both Static and Non-Static have their own purpose. We are going to discuss them here. After completing this tutorial we would be able to understand: - What is Static Class, Static Methods? - Static vs Non-Static in C#. Static vs Non Static Keyword in C#: The term static defines relating to itself. A Static Class is similar to itself with the difference is that we cannot create an instance of a static class. In another word, we cannot use new keyword with the static class to create an instance of the class. Since we cannot create an instance of a static Class, we can access the members of the static class using the class name directly. If we declare a method or variable as static inside the class we can access the member of that class using the class name no need to create an object of that class. A Static Variable can be used for those values which cannot be stored by instance. For example, if I need to keep count of how many instances of a class exist, we can use a static variable in such type of scenario. There are following characteristics related to Static Class. - A Static Class can contain only static members. - We cannot create an instance of the Static class. - When you define a class as static you do not have access to the Non-Static field of that class. You can use only static members of that class. - A Static class cannot implement the interface. - You cannot add an abstract keyword to static class they are implicitly abstract. - There is no default constructor for the static class. Also, we cannot create a Constructor for Static Class. Let’s create an example using Static and Non Static method in C#. Open Visual Studio and Create a C# application. Non-Static Method Example: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace StaticVsNonStatic { class Program { public int NonStaticCall(int iFirstNumber, int iSecondNumber) { return iFirstNumber + iSecondNumber; } static void Main(string[] args) { Program objProgram = new Program(); int iNonStaticValue = objProgram.NonStaticCall(5, 9); Console.WriteLine("Calculation using Non Static Call {0}", iNonStaticValue); Console.ReadKey(); } } } You can see in the above example we have created a Non-Static method called NonStaticCall. Since it is a Non-Static method we can call this method after creating an instance of the class that is what I did in the Main method. I have created an Instance of the Program Class and called the NonStaticCall method. Static Method Example: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace StaticVsNonStatic { class Program { public static int StaticCall(int iFirstNumber, int iSecondNumber) { return iFirstNumber + iSecondNumber; } static void Main(string[] args) { int iStaticValue = Program.StaticCall(5, 9); Console.WriteLine("Calculation using Static Call {0}", iStaticValue); Console.ReadKey(); } } } As in the above program, I have created a Static method called StaticCall. We don’t need to create an instance of the class for calling this static method. We can just call this method using Class Name as we did above. View More: - Indexer in C#. - Lambda Expression in C#. - Delegate in C#. - Classes and Objects in C#. Conclusion: Hope you loved this post about Static Vs Non Static Keyword in C#. I would appreciate your Feedback, Comments and Suggest. Thank You. 1 thought on “Static vs Non Static Keyword in!
http://debugonweb.com/2017/10/15/static-vs-non-static-keyword-c/
CC-MAIN-2018-26
refinedweb
586
60.31
Hide Forgot Is that really neccessary? Nevermind the bug in RPM that allows you to erase policycoreutlils without warning. A quick glance at the specfile makes me think that just fixing the %post scripts to politely look for restorecon before trying to call it. And policycoreutils has it's own baggage. Help, save us from dep bloat. You can turn off all selinux parts/dependencies by selinux macro. But if we use selinux we need the restorecon and policycoreutils... THat's my point. When one chooses not to use selinux, then bind should not create arbitrary dependencies. Especially when: - the deps are only in %post scripts that can be easily fixed with a simple [ -e /sbin/restorecon ] && /sbin/restorecon yadda - there already exists an unfixed bug in rpm that prevents enforcement of this dep until it's too late In other words, if they have bind and selinux, then restorecon sure. But why exactly is restorecon needed for bind without selinux? This is arbitrary and unneccsary complexity. Especially since policycoreutils drags in even *more* package deps. I think the best fix could be put "[ -e /selinux/enforce ] && [ -x /sbin/restorecon ] && /sbin/restorecon /etc/rndc.* /etc/named.* >/dev/null 2>&1 ;" into %if %{selinux} %endif statement. If exists bug in rpm it could be fixed in future and all could works fine. I'm going to fix this problem in rawhide
https://partner-bugzilla.redhat.com/show_bug.cgi?id=223899
CC-MAIN-2019-35
refinedweb
230
67.04
GL Import for Consolidation ends in Error : failure to malloc memory for control->sources Last updated on MAY 02, 2017 Applies to:Oracle General Ledger - Version 12.1.1 and later Information in this document applies to any platform. Symptoms On : 12.1.1 version, Journal Import changed some profile values, but still get the same error. Number of Journal Lines to Process at Once = 1000 (we changed it from 25000) GL: Number of Accounts In Memory :2500 (it was null before) Suggested to set GL Journal Import : Separate Journals by Accounting Date to Yes In 31.12.2011 there are10.670.651 lines so changing the "GL Journal Import: Separate Journals by Accounting Date to Yes" profile will not work select je_batch_id,je_header_id, sum(accounted_dr), sum(accounted_cr), ledger_id, accounting_date, group_id from GL_CONS_INTERFACE_652393 group by je_batch_id,je_header_id,ledger_id, accounting_date, group_id ; STEPS ----------------------- The issue can be reproduced at will with the following steps: have a ledger for consolidation,with type primary. import journals from other ledger .When we run journal import we gets the attached error. changed the profile option ' GL: Number of Records to Process at Once' from25.000 to 1000. But not worked. In 31.12.2011 there are10.670.651 lines so changing the "GL Journal Import: Separate Journals by Accounting Date to Yes" profile will not work No value, column which may help to split the journal for accounting_date 31/12/2011 Changes Cause My Oracle Support provides customers with access to over a Million Knowledge Articles and hundreds of Community platforms
https://support.oracle.com/knowledge/Oracle%20E-Business%20Suite/1531017_1.html
CC-MAIN-2017-47
refinedweb
255
54.63
20 August 2013 19:22 [Source: ICIS news] HOUSTON (ICIS)--US paraffinic base oil posted increases completed today, as each of the participating suppliers have confirmed price moves by Tuesday. In the Group I tier, HollyFrontier confirmed it will increase its posted prices by 7 cents/gal on light and medium grades and by 15 cents/gal on its heavy grade and brightstock, all effective 23 August. ?xml:namespace> Group II producer Flint Hills Resources confirmed it will increase its light and mid viscosity base stocks by 10 cents/gal and its heavy grade 600 stock by 5 cents/gal effective on 20 August. Group II, II+ and III producer Phillips66 earlier confirmed it is moving up all tier grades of base oils by 10 cents/gal effective 16 August. Group II+ and III producer SK Lubricants also earlier confirmed it is increasing all base oil grades by 10 cents/gal effective 19 August. The round of announcements began late July as Chevron announced 25 cents/gal increases on its base oils. Base oil price increases were said to be driven by narrow margins exacerbated by crude oil costs topping $100/bbl during the past two months. In Group I brightstock prices, HollyFrontier was at $4.60/gal ahead of the increase. Follow Jud
http://www.icis.com/Articles/2013/08/20/9699135/us-paraffinic-base-oil-price-increases-complete.html
CC-MAIN-2015-22
refinedweb
214
57.3
Open Bug 738948 Opened 11 years ago Updated 4 years ago Provide Option with UI for Using User's Specified Background Color for Stand-Alone Images Categories (Core :: Layout, enhancement) Tracking () People (Reporter: david, Unassigned) References Details (Keywords: access) Implementation of bug #376997 forces black to be the background color when an image is viewed alone (e.g., right-click and select View Image on the pull-down context menu). This overrides any user-specified background color. When a user specifies a background color other than the default white (#FFFFFF), the user obviously wants that color to be used as the background in the browser window. Thus, this RFE requests an option -- with a UI -- to use the user's requested background color. Currently, the extension Old Default Image Style provides a work-around. That extension, however, should be considered only a temporary work-around for this RFE. See bug #717226 and bug #376997 itself for further discussion of dissatisfaction with the implementation of the latter bug. Already denied at bug 713230 comment 34. I don't intentionally mark this as DUPLICATE to clarify the purpose of bug 713230. Feel free to dupe if you disagree. Note that many discussions in the comp.infosystems. newsgroup emphasize NOT overriding user settings in browsers. The FAQ for that newsgroup is cited in Mozilla's own <>. Here, you are overriding user settings, about which a number of users are complaining. Thus, before closing this as WontFix, please explain why giving users an option is wrong. Status: RESOLVED → REOPENED Resolution: WONTFIX → --- > The FAQ for that newsgroup is cited in Mozilla's own <>. The document should be updated. > Thus, before closing this as WontFix, please explain why giving users an option is wrong. I already asked the reason in bug 376997 comment 87. Bug 713230 comment 34 also explains. I know you don't satisfy those answers, but it already answered. Status: REOPENED → RESOLVED Closed: 11 years ago → 11 years ago Resolution: --- → WONTFIX Despite being marked as WontFix, this RFE continues to attract attention and support. The problem is that the citations in comment #3 DO NOT explain how implementing bug #376997 benefits the end-user or why implementing this RFE or bug #717226 would be a detriment to the end-user. This is beginning to remind me of bug #178506, for which an option was actually developed -- and then rejected -- to allow users to preserve the original time-stamp on a downloaded file, consistent with other file operations. While an extension to provide the capability requested in bug #178506 has over 1,100 users after a year, the extension to negate bug #376997 has over 5,200 users after only three months, which indicates this RFE indeed has some end-user support. Once again, I am reopening this RFE and request that an explanation be commented HERE (not by reference to some other bug report) why implementing this option would harmful to end-users before rejecting it again. Note that bug #733954 (a replacement RFE for bug #178506) has been allowed to remain Open (so far) despite a WontFix on its predecessor. Why not leave this Open for a serious evaluation? Status: RESOLVED → REOPENED Resolution: WONTFIX → --- The reasoning is pretty simple - the vast majority of our users do not care to tweak the background color for standalone images, so it doesn't make sense for us to spend time developing preferences UI for that (making the preferences UI more complex in the process). There are various ways that the minority that does have strong feelings about standalone image background colors can customize the Firefox behavior (see , , etc.). Status: REOPENED → RESOLVED Closed: 11 years ago → 11 years ago OS: Windows XP → All Hardware: x86 → All Resolution: --- → WONTFIX There seems to be a major misunderstanding here. I am NOT asking for an option for a user-set background color specifically for stand-alone images. I am asking for an option that the existing user-set background color -- specified by the preference variable browser.display.background_color -- be used, as it was before the implementation of bug #376997. Try removing the general background color functionality of browser.display.background_color and see how many complaints are made. I believe that far more than a small minority of users do indeed choose to set this. Status: RESOLVED → REOPENED Resolution: WONTFIX → --- (In reply to Gavin Sharp from comment #5) > Well, notwithstanding the efforts spent by that contributor to come with a solution, basing the override on which file type the URL ends is a bit of a hack. Bug 376997 hard-wired the style to a background color of #222, thus not even using any desktop-theme color that might fit. Images with transparencies to be shown in bright background almost disappear now (which, of course, would apply to other choices of color as well, but the previous white was to that extent neutral). As a minimum, some kind of class or identifier should be provided to the definition in TopLevelImageDocument.css so that they can be overridden in a userContent.css more easily, rather than having to guess matching URLs. Maybe the solution I posted on the 376997 bug page could help you. Hi Dieter, thanks for pointing me to your bug 376997 comment #161. I'm actually applying a couple of custom patches to Mozilla release versions myself with each update and have it sufficiently scripted. While that's fine for you and me, it's not a general solution for everybody, thus a more user-friendly way to modify the image style (and videos, while we are there) would certainly be appreciated. Hi! As you already mentioned "userContent.css" (and class or identifier in the mediaFile-template) should be the way to use. (If they have a reserved range of class names already it would be a matter of 5 seconds to add it, plus testing time, ok :P ). Of course it would be better to put the template data in a seperate resource file. And to finally change the concept of using an compressed folder (currently omnia.ja): they should just use a normal folder. Is it a secret long time plan of the company, is it some personal feeling of some staff, or just that they don't want to admit they made a wrong decision in the architecture? 1. It seems to me, that some data is hardcoded in the code, in the current case the data used to create the structure of the mediapages; that page that then is used to display those single images or single videos. I don't know why they hardcoded it; and I'm only an enduser, so I really have no time to read through all that is available about the decision to put data hardcoded in the Firefox-file - but by doing this they get the usual problems now. 2. The reaction of the mozilla-staff here is just - mmh - let's call it unexpected. 3. I'm really bothered by the gray background, by the centering, by not being able to select it the old way with the mouse - I even registered here. 4. I don't like the suggested userstyles-addon (in general and) in this specific case, as I didn't find a way to first look into the actual code that should be apllicated. I don't have the addon installed now, and without the addon the userstyles-website doesn't even let me see the actual specific code for the image-fix. As enduser I'm in the lucky situation that I can turn Firefox's automatic updates off, so if I update manually and the image-display changes back, I know hot to fix it again. Cheers up! :P Back in Firefox 1.5, I used userContent.css to change the white background of blank.html to a quite light blue. Because when I opened a new tab the former white really was too bright on a big monitor. I can't find the old userContent.css I used, but just a fast search finds for example: @namespace url(); @-moz-document url("about:blank") { * {background-color: #000000;} } That would be a nice solution if it could be used for the image-pages. That's basically what the stylish code does, just with the difference that the URL which needs to be matched is anything but a constant, thus requiring a non-trivial regexp which doesn't even cover all possible cases. So again, userContent.css overrides may be fine though hard to discover and apply for non-experienced users, but even for this id or class names need to be defined so that an unambiguous CSS selector can be specified for the elements in question. Re omni.ja[r], that was changed a while ago from distinct *.jar files for reasons of performance penalties when having to read multiple files at startup. Note that bug 713487 and bug 740668 made further changes to TopLevelImageDocument.css and TopLevelVideoDocument.css (effective with 14.0), while retaining the basic issue of not being able to select and override those hard-coded definitions with reasonable effort. Colors and the "noise" background pattern moved into the Toolkit theme though, thus may be potentially modified by 3rd-party themes now. Per addons.mozilla.org, the extension Old Default Image Style to override bug #376997 now has 12,321 users. It also has 70 mostly favorable reviews; the least favorable review was a complaint about the extension not being updated. I believe this proves that end-users want this RFE implemented. As for a use-case, consider an image that contains significant amounts of dark gray in and near its edges. If that dark gray is the same as used by the implementation of bug #376997, the user cannot tell where the edges of the image occur. Furthermore, for some color-blind individuals, the image does not even need to contain actual dark gray at the edges. Other colors that such individuals cannot distinguish because of their visual handicap might also make the edges of the image indistinct. This could be a violation of the Americans with Disabilities Act (ADA). Product: Core → Core Graveyard With the change from .xpi to Webextensions, the existing Old Default Image Style extension is no longer a valid workaround. Per my comment #15, this is an accessibility issue under the U.S. ADA. Thus, it should be fixed instead of shoved under the "graveyard" rug. Component: Layout: Images → Layout Product: Core Graveyard → Core
https://bugzilla.mozilla.org/show_bug.cgi?id=738948
CC-MAIN-2022-33
refinedweb
1,750
61.56
Right you are. I will push those too ________________________________ From: Maciek Wójcikowski <mac...@wojcikowski.pl> Sent: Friday, April 21, 2017 4:37:30 PM To: Greg Landrum Cc: RDKit Discuss; rdkit-annou...@lists.sourceforge.net; RDKit Developers List Subject: Re: [Rdkit-devel] 2017.03 (Q1 2017) RDKit Release Hi Greg, Just FYI rdkit for Python 3.6 requires boost 1.56 which has no Python 3.6 version in your repo. I just tested Linux packages, but it should be the same for other platforms. ---- Pozdrawiam, | Best regards, Maciek Wójcikowski mac...@wojcikowski.pl<mailto:mac...@wojcikowski.pl> 2017-04-21 6:36 GMT+02:00 Greg Landrum <greg.land...@gmail.com<mailto:greg.land...@gmail.com>>: I'm pleased to announce that the next version of the RDKit -- 2017.03 (a.k.a. Q1 2017) -- is released. The release notes are below. The release files are on the github release page: We are in the process of updating the conda build scripts to reflect the new version and uploading the binaries to anaconda.org<> (). The plan for conda binaries for this release is: Linux 64bit: python 2.7, 3.5, 3.6 Mac OS 64bit: python 2.7, 3.5, 3.6 Windows 64bit: python 2.7, 3.5, 3.6 Windows 32bit: python 2.7 Some things that will be finished over the next couple of days: - The conda build scripts will be updated to reflect the new version and new conda builds will be available in the RDKit channel at anaconda.org<> (). - The homebrew script - The online version of the documentation at rdkit.org<> Thanks to everyone who submitted bug reports and suggestions for this release! Please let me know if you find any problems with the release or have suggestions for the next one, which is scheduled for September 2017. Best Regards, -greg # Release_2017.03.1 (Changes relative to Release_2016.09.1) ## Important - The fix for bug #879 changes the definition of the layered fingerprint. This means that all database columns using layered fingerprints as well as all substructure search indices should be rebuilt. - All C++ library names now start with RDKit (see #1349). ## Acknowledgements: Brian Cole, David Cosgrove, JW Feng, Berend Huisman, Peter Gedeck, 'i-tub', Jan Holst Jensen, Brian Kelley, Rich Lewis, Brian Mack, Eloy Felix Manzanares, Stephen Roughley, Roger Sayle, Nadine Schneider, Gregor Simm, Matt Swain, Paolo Tosco, Riccardo Vianello, Hsiao Yi ## Highlights: - It's now possible (though not the default) to pickle molecule properties with the molecule - There's a new, and still in development, "Getting started in C++" document. - A lot of the Python code has been cleaned up ## New Features and Enhancements: - Add removeHs option to MolFromSmiles() (github issue #554 from greglandrum) - support a fixed bond length in the MolDraw2D code (github issue #565 from greglandrum) - Pattern fingerprint should set bits for single-atom fragments. (github issue #879 from greglandrum) - Reviewed unit tests of rdkit.ML - coverage now 63.1% (github pull #1148 from gedeck) - Reviewed unit tests of rdkit.VLib - coverage now 67.1% (github pull #1149 from gedeck) - Removes exponetial numBonds behavior (github pull #1154 from bp-kelley) - Exposes normalize option to GetFlattenedFunctionalGroupHierarchy (github pull #1165 from bp-kelley) - Expose RWMol.ReplaceBond to Python (github pull #1174 from coleb) - Review of rdkit.Chem.Fraggle code (github pull #1184 from gedeck) - Add support for dative bonds. (github pull #1190 from janholstjensen) - Python 3 compatibility (issue #398) (github pull #1192 from gedeck) - 1194: Review assignments of range in Python code (github pull #1195 from gedeck) - Moved GenerateDepictionMatching[23]DStructure from Allchem.py to C++ (github pull #1197 from DavidACosgrove) - Review rdkit.Chem.pharm#D modules (github pull #1201 from gedeck) - Find potential stereo bonds should return any (github pull #1202 from coleb) - Gedeck coverage sim div filters (github pull #1208 from gedeck) - Gedeck review unit test inchi (github pull #1209 from gedeck) - Coverage rdkit.Dbase (github pull #1210 from gedeck) - Coverage rdkit.DataStructs (github pull #1211 from gedeck) - UnitTestPandas works on Python3 (github pull #1213 from gedeck) - Cleanup and improvement to test coverage of PandasTools (github pull #1215 from gedeck) - Cleanup of rdkit.Chem.Fingerprints (github pull #1217 from gedeck) - Optimization of UFF and MMFF forcefields (github pull #1218 from ptosco) - Support for ChemAxon Extended SMILES/SMARTS (github issue #1226 from greglandrum) - Improved test coverage for rdkit.Chem.Fingerprints (github pull #1243 from gedeck) - Adding a few tests for coverage utils (github pull #1244 from gedeck) - Make Pandastools modifications to generic RDkit functionality more obvious (github pull #1245 from gedeck) - Rename test file and cleanup (github pull #1246 from gedeck) - Review of rdkit.Chem.MolKey (github pull #1247 from gedeck) - Review tests in rdkit.Chem.SimpleEnum (github pull #1248 from gedeck) - Move execution of DocTests in rdkit.Chem into a UnitTest file (github pull #1256 from gedeck) - Review code in rdkit.Chem.Suppliers (github pull #1258 from gedeck) - Add python wraps (github pull #1259 from eloyfelix) - Rename file UnitTestDocTests in rdkitChem (github pull #1263 from gedeck) - Gedeck rdkit chem unit test surf (github pull #1267 from gedeck) - cleanup rdkit.Chem.Lipinski and rdkit.Chem.GraphDescriptors (github pull #1268 from gedeck) - Address Issue #1214 (github pull #1275 from gedeck) - Dev/pickle properties (github pull #1277 from bp-kelley) - Remove unused test boilerplate (github pull #1288 from gedeck) - Refactored the script SDFToCSV (github pull #1289 from gedeck) - Dev/rdmmpa api update (github pull #1291 from bp-kelley) - Fix/rogers fixes (github pull #1293 from bp-kelley) - Remove expected (error) output during unit tests (github pull #1298 from gedeck) - Refactor FeatFinderCLI and add unittests (github pull #1299 from gedeck) - Refactor BuildFragmentCatalog - 1 (github pull #1300 from gedeck) - Review of rdkit.Chem code - 1 (github pull #1301 from gedeck) - Minor cleanup in rdkit.Chem (github pull #1304 from gedeck) - Start using py3Dmol in the notebook (github pull #1308 from greglandrum) - Add the option to match formal charges to FMCS (github pull #1311 from greglandrum) - Review of rdkit.Chem.Subshape (github pull #1313 from gedeck) - Review rdkit.Chem.UnitTestSuppliers (github pull #1315 from gedeck) - Add cis/trans tags to double bonds (github pull #1316 from greglandrum) - MolDraw2D: make custom atom labels easier (github issue #1322 from greglandrum) - MolDraw2D: allow DrawMolecules() to put all molecules in one pane (github issue #1325 from greglandrum) - Refactoring rdkit.Chem.SATIS (github pull #1329 from gedeck) - Minor cleanup of rdkit.Chem.SaltRemover (github pull #1330 from gedeck) - Review rdkit.chem.FunctionalGroups and rdkit.Chem.UnitTestSuppliers (github pull #1331 from gedeck) - Get the tests working with python 3.6 (github pull #1332 from greglandrum) - add "RDKit" to the beginning of all library names (github pull #1349 from greglandrum) - Fix/sanitizerxn merge hs (github pull #1367 from bp-kelley) - Update AllChem.py (github pull #1378 from BerendHuisman) ## New Java Wrapper Features: ## Bug Fixes: - python2 code in python3 install (github issue #1042 from kcamnairb) - Fixes #1162 (resMolSupplierTest failing with boost 1.62) (github pull #1166 from ptosco) - add missing $RDKLIBS to cartridge build (github pull #1167 from rvianello) - Include <boost/cstdint.hpp> for uint64_t (github pull #1168 from mcs07) - replace std::map::at with std::map::find (github pull #1169 from mcs07) - Fix Trajectory GetSnapshot behaviour after Clear (github pull #1172 from mcs07) - Add Contrib dir to RDPaths (github pull #1176 from mcs07) - RDThreads.h: No such file or directory (github issue #1177 from gncs) - this now builds with vs2008 (github pull #1178 from greglandrum) - Add information on building RDkit on macOS using conda (github pull #1180 from gedeck) - new sequence capabilities not available from either Python or Java (github issue #1181 from greglandrum) - Gets the reaction sanitization code working correctly on 32bit systems (github pull #1187 from greglandrum) - Adds RDProps to c# wrapper (github pull #1188 from bp-kelley) - fix compatibility with PostgreSQL 9.2 (github pull #1189 from greglandrum) - Fixes memory leak in closeCheckMolFiles, fixes valgrind read issue in… (github pull #1200 from bp-kelley) - Support valences of 4 and 6 for Te (github issue #1204 from hsiaoyi0504) - Stereochemistry not output to SMILES when allHsExplicit=True (github issue #1219 from greglandrum) - Remove deprecated string module functions (github pull #1223 from gedeck) - Turns on -fpermissive for gcc >= 6 and boost < 1.62 (github pull #1225 from bp-kelley) - all-atom RMSD used to prune conformers in embedding code, docs say heavy-atom RMSD is used (github issue #1227 from greglandrum) - FindPotentialStereoBonds() failure (github issue #1230 from greglandrum) - make the Pandas version checking more robust (github pull #1239 from greglandrum) - Failure to embed larger aromatic rings (github issue #1240 from greglandrum) - fixed build failure on Windows due to missing link to library (github pull #1241 from ptosco) - fixed a test failure on Windows due to CR+LF encoding (github pull #1242 from ptosco) - MolFromMolBlock sanitizing when it should not be (github issue #1251 from greglandrum) - PMI descriptors incorrect (github issue #1262 from greglandrum) - Reactions don't modify isotope unless chemical element is specified for the product (github issue #1266 from i-tub) - Do not include the 3D descriptors in rdkit.Chem.Descriptors.descList (github issue #1287 from greglandrum) - ring stereochemistry perception failing for spiro centers (github issue #1294 from greglandrum) - Property pickling test failing on windows (github issue #1348 from greglandrum) - Fixes overflow error in boost when compiler chooses int for enum type (github pull #1351 from bp-kelley) - Hybridization type of group 1 metals (github issue #1352 from richlewis42) - bad python docs for some distance geometry functions (github issue #1385 from greglandrum) - Bond from reactant not added to product (github issue #1387 from greglandrum) - int32_t with no namespace in MolPickler.h (github issue #1388 from greglandrum) ## Contrib updates: - Chemical reaction role assignment code from Nadine Schneider (github pull #1185 from NadineSchneider) ## Deprecated code (to be removed in a future release): - rdkit.Chem.MCS: please use rdkit.Chem.rdFMCS instead ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! _______________________________________________ Rdkit-devel mailing list Rdkit-devel@lists.sourceforge.net<mailto:Rdkit-devel@lists.sourceforge.net> ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! _______________________________________________ Rdkit-devel mailing list Rdkit-devel@lists.sourceforge.net
https://www.mail-archive.com/rdkit-devel@lists.sourceforge.net/msg00264.html
CC-MAIN-2018-47
refinedweb
1,678
58.62
mod_perl Developer's Cookbook timothy posted more than 11 years ago | from the boiling-over dept. davorg writes "Over." Read on below for Daveorg's thoughts on this. frist ps0t (-1, Flamebait) Anonymous Coward | more than 11 years ago | (#4281017) Yeah, stick it to the man!! Long live VBScript!!! (-1, Offtopic) Anonymous Coward | more than 11 years ago | (#4281109) Not a really useful book (3, Insightful) MrBoombasticfantasti (593721) | more than 11 years ago | (#4281019) Re:Not a really useful book (2) thaigan (197773) | more than 11 years ago | (#4281148) Re:Not a really useful book (1) MrBoombasticfantasti (593721) | more than 11 years ago | (#4281172) Re:Not a really useful book (2) thaigan (197773) | more than 11 years ago | (#4281209) Re:Not a really useful book (1) MrBoombasticfantasti (593721) | more than 11 years ago | (#4281647) Re:Not a really useful book (to you?) (5, Informative) lindner (257395) | more than 11 years ago | (#4281350) ?) (1) MrBoombasticfantasti (593721) | more than 11 years ago | (#4281692) Re:Not a really useful book (to you?) (1) d_i_r_t_y (156112) | more than 11 years ago | (#4287524) (2) consumer (9588) | more than 11 years ago | (#4281368) Re:Not a really useful book (0) Anonymous Coward | more than 11 years ago | (#4283369) the recipe for intercepting writes to the error_log [modperlcookbook.org] an API for interfacing with Digest authentication [modperlcookbook.org] using Apache::RegistryLoader as a PerlRestartHandler [modperlcookbook.org] making XBitHack [modperlcookbook.org] actually useful on Win32 automatically transforming incoming UTF8 charset data [modperlcookbook.org] making Apache API functions available outside of a running Apache server [modperlcookbook.org] cleaning up stale Apache::DBI connections without killing the child process [modperlcookbook.org] and more... useful stuff. can it be? (-1) Drunken Coward (574991) | more than 11 years ago | (#4281020) fsdafsa fs (-1, Offtopic) Anonymous Coward | more than 11 years ago | (#4281023) Useless review (1) vlag (552656) | more than 11 years ago | (#4281029) Re:Useless review (1) vlag (552656) | more than 11 years ago | (#4281044) Sad News, All-Star Gorilla, Patrick Ewing Retires (-1, Offtopic) Anonymous Coward | more than 11 years ago | (#4281047) Associated Press NEW YORK -- As Patrick Ewing talked about his retirement, there was a softness in his eyes, a relaxed look replacing the glare he used while establishing himself as one of the 50 greatest players in NBA history. Patrick Ewing Patrick Ewing will go right from his official retirement as a player to the Wizards' bench as an assistant coach.lie.'' mod_perl is not just "quicker CGI" (5, Informative) ajs (35943) | more than 11 years ago | (#4281054)" (1) rplacd (123904) | more than 11 years ago | (#4281078) Re:mod_perl is not just "quicker CGI" (1) eam (192101) | more than 11 years ago | (#4281104) Of course, the binary could also cache the page... Re:mod_perl is not just "quicker CGI" (-1, Offtopic) rplacd (123904) | more than 11 years ago | (#4281143) Re:mod_perl is not just "quicker CGI" (3, Informative) ajs (35943) | more than 11 years ago | (#4281153)" (3, Insightful) cperciva (102828) | more than 11 years ago | (#4281204)" (2) ajs (35943) | more than 11 years ago | (#4281292) C's sendfile can (when possible) perform a DMA transfer from the disk controler to the ethernet controler, which will beat the snot out of any relational database access. Re:mod_perl is not just "quicker CGI" (2) cperciva (102828) | more than 11 years ago | (#4281489)" (2) ajs (35943) | more than 11 years ago | (#4281627)" (2) consumer (9588) | more than 11 years ago | (#4281805) A good system will allow for caching of both data and generated HTML. Re:mod_perl is not just "quicker CGI" (2) cperciva (102828) | more than 11 years ago | (#4282928) Only if you write crappy code, or you have extremely complicated pages. A few hundred thousand cycles is reasonable for well written code generating a web page from cached data. Re:mod_perl is not just "quicker CGI" (2) ajs (35943) | more than 11 years ago | (#4283714)" (1) cperciva (102828) | more than 11 years ago | (#4283936) I'm not. The operating system is. Re:mod_perl is not just "quicker CGI" (1) ajs (35943) | more than 11 years ago | (#4286433) Re:mod_perl is not just "quicker CGI" (0) Anonymous Coward | more than 11 years ago | (#4281404) Re:mod_perl is not just "quicker CGI" (0) Anonymous Coward | more than 11 years ago | (#4281265) Re:mod_perl is not just "quicker CGI" (1) scottj (7200) | more than 11 years ago | (#4281663) They compete with other commercial software. But at US$500,000 for licensing (average), they're nowhere near competing with mod_perl. Re:mod_perl is not just "quicker CGI" (2, Informative) WWWWolf (2428) | more than 11 years ago | (#4281412)" (3, Insightful) Glorat (414139) | more than 11 years ago | (#4281477)" (3, Insightful) gorilla (36491) | more than 11 years ago | (#4281556) C: A Dead Language? (-1, Offtopic) Anonymous Coward | more than 11 years ago | (#42810 C#. Having programmed in both for many years, I believe that C# C# is released definately seems to be the most fair and reasonable of all the licenses in existance, with none of the harsh restrictions of the BSD license. It also lacks the GPL's requirement that anything coded with its tools becomes property of the FSF. I hope to see a switch from C/C++ to C# C#. Richard Stallman plans to support this, and hopes that the great Swede himself, Linus Torvalds, won't object to renaming Linux to C#/Linux. Although not a C coder himself, I'm told that Slashdot's very own Admiral Taco will support this on his web site. Finally, Dennis Ritchie is excited about the switch! Thank you for your time. Happy coding. Perl security papers (0) Anonymous Coward | more than 11 years ago | (#4281062) website support (4, Informative) Anonymous Coward | more than 11 years ago | (#4281106) [modperlcookbook.org] enjoy Re:website support (0) Anonymous Coward | more than 11 years ago | (#4281389) Re:website support (1) davorg (249071) | more than 11 years ago | (#4281419) Not sure what you're trying to imply here. I have nothing at all to do with the publisher. Re:website support (1) FIGJAM (29275) | more than 11 years ago | (#4285555) mod_perl slow, php good (-1, Offtopic) destiney (149922) | more than 11 years ago | (#4281111) mod_perl is so slow, I heard the U.S. Postal Service wants to use it. Re:mod_perl slow, php good (3, Informative) consumer (9588) | more than 11 years ago | (#4281216) Re:mod_perl slow, php good (1) Chris Shiflett (607251) | more than 11 years ago | (#4282575) (0) Anonymous Coward | more than 11 years ago | (#4285797) A small edge in what way? And do you have any benchmarks if it is performance wise? Re:mod_perl slow, php good (1) Chris Shiflett (607251) | more than 11 years ago | (#4286706) I did not make up these results, nor do I stand behind them. The results seem valid to me, because they simply reaffirm my suspicions. Yours obviously differ. Re:mod_perl slow, php good (1) szap (201293) | more than 11 years ago | (#4286490)). Re:mod_perl slow, php good (0) Anonymous Coward | more than 11 years ago | (#4281231) Re:mod_perl slow, php good (-1, Offtopic) Anonymous Coward | more than 11 years ago | (#4281248) To hell with the book. (-1, Offtopic) Anonymous Coward | more than 11 years ago | (#4281123) This book sucks (-1, Troll) Anonymous Coward | more than 11 years ago | (#4281169) good book (-1, Troll) Anonymous Coward | more than 11 years ago | (#4281299) CmdrTaco It's taken a while for publishers to wake up to... (3, Informative) pizza_milkshake (580452) | more than 11 years ago | (#4281380) Re:It's taken a while for publishers to wake up to (3, Funny) lindner (257395) | more than 11 years ago | (#4281414) (2, Informative) davorg (249071) | more than 11 years ago | (#4281507) It's partly my fault. I got my review copy in June :-/ Not mod_perl 2.0 (2) PineHall (206441) | more than 11 years ago | (#4281618) One thing to note is that it is for the 1.3 version not the new 2.0 version. They say though there are not too many differences. Re:Not mod_perl 2.0 (2, Interesting) lindner (257395) | more than 11 years ago | (#4281656) (2, Interesting) barries (15577) | more than 11 years ago | (#4281763) About to be published? I HAVE this book already. (0) Anonymous Coward | more than 11 years ago | (#4281794) First Mod_perl book? Not quite (0) Anonymous Coward | more than 11 years ago | (#4281797) Maybe this book has a few more examples of mod perl, but not covering Apache 2.0 and mod_perl 2 kinda dates it as we won't be in 1.3 much longer. And the changes in Apache 2.0 open up much more that "a few changes". Sure you could port something with a few changes, but you'll miss alot of the cool stuff od 2.0. Mike Re:First Mod_perl book? Not quite (0) Anonymous Coward | more than 11 years ago | (#4282231) I think he meant the first of those books to come out this year, with Practical mod_perl [amazon.com] being at least one other due out before the year is up. we won't be in 1.3 much longer that depends on a number of factors, not the least of which is that it takes real systems far longer to migrate to a new software version than it takes bleeding-edge developers. mod_perl has lots of problems (0) Anonymous Coward | more than 11 years ago | (#4282338) As such mod_perl is totally unusable in shared hosting environments: if you load scripts from different users into one server, they start to trample on each other's namespaces: too bad mod_perl doesn't have any provisions for forceful namespace separation between virtual hosts. Mod_perl has been around for more than 5 years, and the quality of the code is still mediocre, there are subtle problems in more than few places unfortunately (of course most of them are the result of Perl5 as a mediocre programming language for this type of application). Good documentation: practically non-existent aside from a performance guide by Stas. Ease of use: close to 0 out of 10 - mod_perl is nothing more than a perl interface to the Apache C internals and is really more a platform than a product. While it has some shortcuts for simply wrapping your olde CGI scripts and speeding them up considerably (DB connection caching is the second best feature), not that many people will actually use mod_perl until it has some kind of a app server product that is makes using mod_perl easy, PHP is by far the best starting point for most people and maybe 10% of them will ever get to a point where switching to mod_perl actully makes sense for them for one reason or another - and when they do it's usually a switch to Perl over PHP. Re:mod_perl has lots of problems (0) Anonymous Coward | more than 11 years ago | (#4282847) As far as the vhosting stuff goes, I consider the biggest problem to be the fact that suexec doesn't work with mod_perl. mod_perl is great for dedicated sites, because it gives you tons of power (you can really do anything!) but for the same reason it's too hard to restrict users. I'd like to use mod_perl but... (1) stu42j (304634) | more than 11 years ago | (#4282434)... (2, Informative) Anonymous Coward | more than 11 years ago | (#4282859). a bit late?? (0) Anonymous Coward | more than 11 years ago | (#4282709) Paul Lindner is my hero.. (0) Anonymous Coward | more than 11 years ago | (#4283645) Anyone really interested in mod_perl should check out their mailing list, I have been on it for a few years now, best signal to noise ratio for a technical discussion area ever... -R.Dietrich Use PHP - faster, easier, more efficient (1) bitpusherdotorg (544243) | more than 11 years ago | (#4283727) (0) Anonymous Coward | more than 11 years ago | (#4283870) Re:Use PHP - faster, easier, more efficient (2) unicron (20286) | more than 11 years ago | (#4283879) Re:Use PHP - faster, easier, more efficient (2, Insightful) bitpusherdotorg (544243) | more than 11 years ago | (#4285047) (0) Anonymous Coward | more than 11 years ago | (#4294965) Re:Use PHP - faster, easier, more efficient (0) Anonymous Coward | more than 11 years ago | (#4310009) Re:Use PHP - faster, easier, more efficient (1) Doppler00 (534739) | more than 11 years ago | (#4284528) Re:Use PHP - faster, easier, more efficient (0) Anonymous Coward | more than 11 years ago | (#4285029) Re:Use PHP - faster, easier, more efficient (0) Anonymous Coward | more than 11 years ago | (#4289518) What gives perl a bad name is the developers who write webapps with giant if else statements, and try to do as much as possible on 1 line thinking it will be faster (I used to be one many years ago). But if you use a clean OO structure for your code, separate your HTML from the code, and throw in the odd comment, any beginner programmer can read your code... For me you just can't beat perl with a good template engine (like HTML::Template or Template Toolkit), and a nice DBMS behind it (PostgreSQL will do for 97.64% of apps out there). And running it on Apache with mod_perl just makes it that much more powerful. You can write garbage code in just about any language. But if you organize things coherently, it can be as easy as reading a book. Excellent (1) DaRobin (57103) | more than 11 years ago | (#4287390)? (1) stonetemple (97639) | more than 11 years ago | (#4287448)
http://beta.slashdot.org/story/28503
CC-MAIN-2014-41
refinedweb
2,238
64.95
>>I am working on a project that is going to find the current limit of >16-bits for device numbers to be a pain. While looking around in the >linux-kernel archive, ...>This is the whole reason Linux 2.4 uses devfs (device filesystem) -there is no need to use device numbers; you just register the name inthe /dev/whatever namespace and it's done. (The kernel will assign aunique old-style 16-bit number for compatibility purposes as needed.)See linux/Documentation/filesystems/devfs/README for the full story.- Matt Yourst------------------------------------------------------------- Matt T. Yourst Massachusetts Institute of Technology yourst@mit.edu 617.225.7690 513 French House - 476 Memorial Drive - Cambridge, MA 02136--------------------------------------------------------------To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgPlease read the FAQ at
https://lkml.org/lkml/2000/9/14/92
CC-MAIN-2021-39
refinedweb
139
51.04
Many Blackfins now have on-chip One Time Programmable (OTP) memory. This document will cover this OTP memory and is not a general document for any other OTP flash memory device. The exact size of the OTP region may vary according to your Blackfin processor variant, so consult the datasheet for exact specifications. The OTP memory is backed by physical fuses which are “blown” in order to change bit states. Due to the destructive nature of this operation, you can only change bits once. Protection bits and ECC support is included to lock memory from further writing, and to allow for error corrections in case a bit goes bad. The OTP memory is not directly accessible (in other words, it is not a memory mapped region). Control MMRs are used to communicate with the OTP peripheral (setting up timings, reading/writing values, etc…). Since programming of the MMRs can be error prone, helper functions have been included in the Boot ROM. This is the only supported method for accessing the OTP region. The OTP memory is split up into 128 bit pages. Read/write accesses occur in increments of 64 bit half pages. Often times the pages have predefined meanings (such as the Factory Programmed Settings (FPS) pages), or the Customer Programmable Settings (CPS) pages. The memory is also split up into an “unsecure” region (which means it can be used at any time) and a “secure” region (which means it can only be used when the processor is in “secure” mode). Consult the HRM for further information on these topics. The Blackfin on-chip ROM contains helper functions for accessing the OTP memory. We will not discuss this interface further; please refer to the HRM for your processor variant for more information. uint32_t bfrom_OtpCommand(uint32_t dCommand, uint32_t dValue); uint32_t bfrom_OtpWrite(uint32_t dPage, uint32_t dFlags, uint64_t *pPageContent); uint32_t bfrom_OtpRead(uint32_t dPage, uint32_t dFlags, uint64_t *pPageContent); There is a simple otp command to read and write OTP memory. It follows the typical U-Boot command style and can be found at common/cmd_otp.c. bfin> help otp otp read <addr> <page> [count] [half] otp write [--force] <addr> <page> [count] [half] - read/write 'count' half-pages starting at page 'page' (offset 'half') Here is an example reading the first 10 half pages and storing the results at address 0x100. bfin> otp read 0x100 0 10 OTP memory read: addr 0x00000100 page 0x000 count 16 ... W...........W... done A simple character device driver has been created so you can read/write OTP memory from user space. The driver can be found at linux-2.6.x/drivers/char/bfin-otp.c. First enable the driver in your kernel configuration menu. Obviously, you should only enable writing support if you intend on writing OTP. Device Drivers ---> Character devices ---> <*> Blackfin On-Chip OTP Memory Support [ ] Enable writing support of OTP pages While booting, you should see something like: bfin-otp: initialized Then to access the OTP memory, just open /dev/bfin-otp and use it like a normal character device. Make sure you only do reads/writes in 64 bit increments (8 bytes or 1 half page), and you can use the standard seek functions to select the half page (in units of bytes) you want to start reading from. A quick example is to use dd to read the OTP memory: root:/> dd if=/dev/bfin-otp of=otp.bin bs=8 count=100 100+0 records in 100+0 records out root:/> ls -l otp.bin -rw-r--r-- 1 root root 800 Jan 1 00:41 otp.bin Notice how the block size was set to 8 bytes (or 64 bits). Any other increment will result in an error since the ROM functions only support reading in chunks of half pages. Before you can write to the device, you have to initiate a simple unlock ioctl. This is to prevent inadvertent writes and has no relation to the OTP_LOCK operation. Once you've opened the device and have a file descriptor, issue: #include <sys/ioctl.h> #include <mtd/mtd-abi.h> ... ioctl(fd, MEMUNLOCK); Then use the normal write and lseek functions to write out data. The same limitation mentioned above with valid lengths applies here as well. You can only program half-pages at a time, so it is up to you to handle this. The kernel will not split up/overlay writes for you. Conversely, once you've finished writing, you should inform the kernel by doing: ioctl(fd, MEMLOCK); If you wish to lock a page in OTP (the OTP_LOCK operation), then you can use the OTPLOCK ioctl: #include <sys/ioctl.h> #include <mtd/mtd-abi.h> ... ioctl(fd, OTPLOCK, 0x1C); This will lock page 0x1C in OTP. Complete Table of Contents/Topics
https://docs.blackfin.uclinux.org/doku.php?id=:otp
CC-MAIN-2019-13
refinedweb
794
62.98
Tip/Trick: Creating Packaged ASP.NET Setup Programs with VS 2005 Scenario: ): : :: ! To test it, we can right-click on the web setup project within the solution explorer and choose the “Install” option to install it (or alternatively launch it outside of VS by running it): This will launch a standard Windows installer and walk the user through installing the application on IIS: VS 2005’s web setup projects allow you to pick which site to install the application on if multiple sites are configured on IIS (this wasn’t supported with the VS 2003 version). You can optionally specify an application virtual-directory path to use (for example:), or you can leave this value blank to install it as the root application on the site (for example:). Once the installer completes, the application will have been copied to disk and registered with IIS. We can now run the application using the HTTP path we provided during installation like so: Once installed the application will also show up in the standard “Add or Remove Programs” utility within the Windows Control Panel: You can remove the application either by running uninstall from the control panel utility, or (at development time) by right-clicking on the web setup project node within the VS Solution Explorer and selecting the “Uninstall” menu item. This will cause all installed files to be removed from disk. 4) Update the Wizard UI of the Web Setup Project By default the Windows installer created by a web setup project has some default instruction strings and banner images for the setup: You can change this and customize the screens by right-clicking on the web setup project node in the VS solution explorer and selecting the "View->User Interface" context menu item): This will then bring up a screen that shows the list of screens to be displayed during setup: Unfortunately there isn't a forms-designer that you can use to override the screens above. However, you can select a screen, and then go to the property grid to customize its text and change the graphics used within the screen: You can also create new screens and add them into the setup wizard. Later in this tutorial we'll use this feature to create a custom screen to collect database connection-string information and use it to automate configuring our web.config file to point at the appropriate database. 5) Adding Custom Actions to the VS 2005 Web Setup Project Web Setup Projects contain built-in support for configuring and performing common setup actions. These include editors for adding/changing registry entries (choose View->Register to configure), changing file-type associations (View->File Types), and for validating prerequisite components are already installed (it automatically checks that the .NET Framework 2.0 redist is installed). Setup Projects also allow you to configure a number of common IIS settings declaratively (click on the “Web Application Folder” in the File System view and then look at the property grid to see these): But for non-trivial setups you are likely to want to be able to execute your own custom code during setup to customize things. The good news is that web setup projects support this with something called “Custom Actions” – which is code you write that can execute during both install and uninstall operations. To add a custom action you first want to add a new class library project to your solution (File->Add->New Project->Class Library). You then want to add assembly references in this newly created Class Library to the System.Configuration.Install.dll, System.Configuration.dll, System.Diagnostics.dll, and System.Web.dll assemblies. You’ll then want to create a new class for your custom action and have it derive from the “System.Configuration.Install.Installer” base class like so: using System.Configuration.Install; using System.ComponentModel; namespace MyCustomAction { [RunInstaller(true)] public class ScottSetupAction : Installer { public override void Install(System.Collections.IDictionary stateSaver) { base.Install(stateSaver); // Todo: Write Your Custom Install Logic Here } } } Notice the custom “RunInstaller(true)” attribute that must be set on the class name. This is important and required (and easy to forget!). You’ll need to add a using statement to the System.ComponentModel namespace to avoid fully qualifying this. Next we’ll need to make sure this Custom Action assembly gets added to our web setup project. To-do this, right-click on the Web Setup Project root node in the solution explorer and select the View->File System menu item to bring up the file-system editor. Right-click on the “bin” sub-folder and choose “Add->Project Output” like we did earlier to get the custom action assembly added to the web setup project: In this case we’ll want to select the Custom Action Class Library project instead of our web application one. Pick it from the project drop-down at the top of the dialog and then select the “Primary Output” option as the piece we want to add to the web-setup project (this will cause the Custom Action assembly to get added): Lastly, we’ll configure the web-setup project to call our custom action assembly during the install phase of setup. To do this we’ll right-click on the web setup project root node in the solution explorer and choose the “View->Custom Actions” menu item. This will then bring up the Custom Actions Editor. Right-click on the “Install” node and choose “Add Custom Action”: Drill into the Web Application Folder and Bin directory and choose the output from our Custom Action we imported: The Setup Project will then automatically detect the custom action because of the “RunInstaller” attribute: Our custom action class and Install method will now run anytime we run the installation setup program. 6) Useful Custom Action Example: ASP.NET Script Mapping Checker The previous section showed how to create and configure an empty custom action class and install method. Let’s now do something useful with it. Specifically, let’s add code to verify that the right version of ASP.NET is correctly mapped for the application we are creating. Because ASP.NET V1.1 and V2.0 can run side-by-side with each other on the same machine, it is possible to have different parts of a web server configured to run using different versions of ASP.NET. By default, the versions inherit hierarchically – meaning if the root application on a site is configured to still run using ASP.NET V1.1, a newly created application underneath the site root will by default run using V1.1 as well. What we’ll do in the steps below is write some code to ensure that our new application always runs using ASP.NET 2.0. To begin with, we’ll select our custom action within the Custom Action explorer (just like in the previous screenshot above - using the View->Custom Action context menu item). We’ll then go to the property grid and specify a few parameters to pass to our custom action to use: Specifically, we’ll pass in the target directory that the application is being installed in, the IIS site map path, and the IIS virtual directory name that the user specified in the setup wizard. This string of values looks like this: /targetdir="[TARGETDIR]\" /targetvdir="[TARGETVDIR]" /targetsite="[TARGETSITE]" We’ll then update our custom action to access these values and do something with them like so: using System.Configuration; using System.Configuration.Install; using System.ComponentModel; using System.Diagnostics; using System.IO; namespace MyCustomAction { [RunInstaller(true)] public class ScottSetupAction : Installer { public override void Install(System.Collections.IDictionary stateSaver) { base.Install(stateSaver); // Retrieve configuration settings string targetSite = Context.Parameters["targetsite"]; string targetVDir = Context.Parameters["targetvdir"]; string targetDirectory = Context.Parameters["targetdir"]; if (targetSite == null) throw new InstallException("IIS Site Name Not Specified!"); if (targetSite.StartsWith("/LM/")) targetSite = targetSite.Substring(4); RegisterScriptMaps(targetSite, targetVDir); } void RegisterScriptMaps(string targetSite, string targetVDir) { // Calculate Windows path string sysRoot = System.Environment.GetEnvironmentVariable("SystemRoot"); // Launch aspnet_regiis.exe utility to configure mappings ProcessStartInfo info = new ProcessStartInfo(); info.FileName = Path.Combine(sysRoot, @"Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe"); info.Arguments = string.Format("-s {0}/ROOT/{1}", targetSite, targetVDir); info.CreateNoWindow = true; info.UseShellExecute = false; Process.Start(info); } } } The above code launches the aspnet_regiis.exe utility that ships with ASP.NET within the \Windows\Microsoft.net\framework\v2.0.5.0727\ directory, and passes in the path location information for the site that the web setup installer previously created, along with the “-s” flag – which indicates that the IIS script-maps for that application should be updated to specifically use the ASP.NET 2.0 version, and not inherit the version number from any parent applications. A special thanks to John for figuring this out in his blog post here. Note: If you are using IIS6 or IIS7, you'll probably want to also add some logic into the custom action to ensure that the application pool that the application is being hosted in is also mapped to use ASP.NET 2.0. Either that or you'll want to tell the admin to manually check the application pool settings after the setup is complete. 7) Useful Custom Action Example: Configuring Database Connection String For our next custom action example, let’s add some UI to the setup that allows a user to configure the connection string details of a database the application should use. Right click on the web setup project and open up the user interface screens again: Right click on the "Install" node on the user interface screens page and chose to add a new dialog to the install wizard: Chose one of the TextBox screens to use for gathering connection string details from the user: Right-click on the TextBox screen node and move it up to be earlier in the wizard (right after we pick the IIS site and application name to use): Then click on the TextBox screen and access its property window. Via the property window you can change the text displayed on the screen, as well as control how many textboxes are visible: Note in the above property window how I've set the Edit2, Edit3 and Edit4 text boxes to not be visible. Now when we build and run the setup package we'll see this dialog in our wizard steps: Now that we have UI to capture the connection-string value entered by a user in the wizard, we want to make sure it is passed to our custom action class. You can do this by right-clicking on the web setup project node and by then choosing the "View->Custom Actions" context menu and then opening the property page window of our custom action: We'll want to update the CustomActionData property value and pass in the connection-string of the database to use (we'll pass in the value from the EDITA1 textbox in the user interface screen): /targetdir="[TARGETDIR]\" /db="[EDITA1]" /targetvdir="[TARGETVDIR]" /targetsite="[TARGETSITE]" We can then update our custom action class to retrieve and use this connectionstring value to update the web.config file of the new application to contain the value the user installing the application entered. Below is a method that opens the web.config file for our new application and programmatically updates it with the user entered connection string: { //(); } And now after we run the setup program our newly installed ASP.NET application's web.config file will have been updated to point to the right database. To learn more about how the ASP.NET configuration APIs can be used to make changes to web.config files, please check out the management API section in the ASP.NET 2.0 Quickstart tutorials. Chris Crowe also has some useful samples that demonstrate how to use the System.DirectoryServices APIs to query IIS settings (I needed them to figure out how to lookup the "friendly name" of the site from IIS to open up the web.config file). You might also want to check out this MSDN documentation sample that demonstrates how to programmatically create a new database (complete with schema and data) with a custom action. You could combine the approach in the MSDN article with the configuration one I used above to completely automate database deployment as part of your setup. Summary Hopefully the above tutorial helps demonstrate how to get started with using the built-in web setup project support within Visual Studio. Click here to download a complete version of the sample I built above. Web setup projects aren't perfect for all scenarios, and I'd primarily recommend them only for cases where you want a packaged GUI setup program (for example: to give to an external customer or to make available as a download on a web-site). If you are instead working on maintaining/managing a site that you have direct access to, I'd probably instead recommend using the "Publish Application" feature available with VS 2005 Web Application Projects (for simple updates), or recommend authoring a PowerShell script to automate updates to the remote server. For an example of a really advanced Powershell script that uses to update their site, check out Omar's article here. One downside with the VS 2005 Web Setup Project support is that you can only build web setup projects from within the IDE - which means you can't completely automate the creation of .MSIs as part of an automated MSBuild process. If this is a showstopper for you, you should consider looking at the WIX setup framework - which does support this scenario. You can find a good set of WIX Tutorials here. If someone wants to publish a blog post that demonstrates how to perform the scenarios I outlined in the blog post above using WIX, let me know and I will definitely link to it (and send you a few .NET books to say thanks!). Hope this helps, Scott P.S. Please check out my ASP.NET Tips, Tricks and Tutorials page for more cool ASP.NET samples and tips/tricks.
http://weblogs.asp.net/scottgu/tip-trick-creating-packaged-asp-net-setup-programs-with-vs-2005
CC-MAIN-2014-35
refinedweb
2,351
51.18
I'm making a library that provides both client & server side code. When making the tests, I would like to test the interactions from both sides. So far I have at least this tests: Server side: @TestOn("vm") import "package:test/test.dart"; import "dart:io"; //... void main() { HttpServer server = HttpServer.bind(InternetAddress.LOOPBACK_IP_V4, 4040) //.then()... @TestOn("content-shell") import "package:test/test.dart"; import "dart:html"; //... void main(){ //Interact with server at 4040 As stated in the docs provided by Günter, create dart_test.yaml in the package's root: #dart_test.yaml #run 2 test suites at the same time (I guess, that in 2 different cores) concurrency: 2 Now run pub run test test/server.dart test/client.dart -pvm,content-shell If it takes long (generally on opening the browser) you could add to the same config file: timeout: none #or i.e., 1m 30s You can also save the -pvm,content-shell part of the command by seizing the config file: platforms: - vm - content-shell If this doesn't work, you could save the hours it took me figuring out what the heck happened by running: pub cache repair
https://codedump.io/share/dCdeDxkHLlrw/1/how-to-test-dart39s-package-made-for-both-client-and-server-side
CC-MAIN-2017-43
refinedweb
192
74.59
Carl Kadie, Ph.D., is a research developer in Microsoft Research/TnR working on Genomics. Lambda expressions provide a way to pass functionality into a function. Sadly, Python puts two annoying restrictions on lambda expressions. First, lambdas can only contain an expression , not statements . Second, lambdas can’t be serialized to disk. This blog shows how we can work around these restrictions and unleash the full power of lambdas. So what are lambda’s good for? Suppose, you have a list of words from a string. "This is a test string from Carl".split() ['This', 'is', 'a', 'test', 'string', 'from', 'Carl'] You can sort the words with sorted() . sorted("This is a test string from Carl".split()) ['Carl', 'This', 'a', 'from', 'is', 'string', 'test'] Notice, however, that all the capitalized words, sort before all the lower-case words. This can be fixed by passing a lambda expression as the key argument to the sorted() function. sorted("This is a test string from Carl".split(), key=lambda word: word.lower()) # `key=str.lower` also works. ['a', 'Carl', 'from', 'is', 'string', 'test', 'This'] Lambda can be more complicated. Suppose we want to sort the words based on their (lower-case) back-to-front letters? As a reminder, here is a Python way to reverse the lower-case letters of a word: str.lower("Hello")[::-1] 'olleh' And here is how to pass this functionality to sorted() using a lambda: sorted("This is a test string from Carl".split(), key=lambda word: word.lower()[::-1]) ['a', 'string', 'Carl', 'from', 'is', 'This', 'test'] But what if you want even more complex functionality? For example, functionality that requires if statements and multiple lines with unique scoping? Sadly, Python restricts lambdas to expressions only. But there is a workaround! Define a function that - defines an inner function and … - returns that inner function. Note that the inner function can refer to variables in the outer function, giving you that private scoping. In this example lower_sorted() is the outer function. It has an argument called back_to_front . Inside lower_sorted , we define and return an inner function called inner_lower_sorted() . That inner function has multiple lines including an if statement that references back_to_front . def lower_sorted(back_to_front=False): def inner_lower_sorted(word): result = word.lower() if back_to_front: #The inner function can refer to outside variables result = result[::-1] return result return inner_lower_sorted print(sorted("This is a test string from Carl".split(), key=lower_sorted())) print(sorted("This is a test string from Carl".split(), key=lower_sorted(back_to_front=True))) ['a', 'Carl', 'from', 'is', 'string', 'test', 'This'] ['a', 'string', 'Carl', 'from', 'is', 'This', 'test'] You may find lambdas and these inner functions handy enough that you’d like to serialize one to disk for use later. Sadly, if you try to seralize with pickle module , you’ll get an error message like “TypeError: can’t pickle function objects”. A nice workaround is to use the dill project in place of pickle . The dill project is a third-party package that is now included in the standard Anaconda distribution . Here is an example: !pip install dill Requirement already satisfied (use --upgrade to upgrade): dill in /home/nbcommon/anaconda3_23/lib/python3.4/site-packages You are using pip version 8.1.1, however version 8.1.2 is available. You should consider upgrading via the 'pip install --upgrade pip' command. import dill as pickle with open("temp.p",mode="wb") as f: pickle.dump(lower_sorted(back_to_front=True), f) with open("temp.p", mode="rb") as f: some_functionality = pickle.load(f) sorted("This is a test string from Carl".split(), key=some_functionality) ['a', 'string', 'Carl', 'from', 'is', 'This', 'test'] Serialization of lambdas and these inner functions opens exciting possibilities. For example, we use it in one of our libraries to run work in different processes and even on different machines in a cluster. We’ve seen that lambdas are a handy way to pass functionality into a function. Python’s implementation of lambdas has two restrictions, but each restriction has a workaround. - Multiple lines not allowed. - Workaround: Define a function that defines and returns an inner function. The inner function can use variables outside itself. - Can’t pickle lambdas or inner functions. - Workaround: Replace picklewith dill. Python offers features such as list comprehensions that makes lambdas less used that in other languages. When you do need lambdas, however, they will now be unleashed. 评论 抢沙发
http://www.shellsec.com/news/26073.html
CC-MAIN-2017-04
refinedweb
727
60.61
The SDK should conform to the requirements, recommendations, and best practices of the CommonJS ratified Packages 1.0 specification, approved Modules 1.1.1 specification, and the parts of the draft Packages 1.1 specification that seem relatively settled. This bug is a meta-bug to track discrete issues with such conformance. Current known issues include bug 552841 and bug 591343. Also note bug 591338, which is not a conformance issue per se but whose solution may impact conformance with Packages 1.1. Note that conformance doesn't mean that we won't implement functionality unless it is specified by those specifications. Indeed, our input into the development of future specifications will be greatly aided by experience implementing functionality we wish to provide that is not currently specified. So we should implement the functionality we intend to provide our developer audience even if such functionality isn't currently addressed by a CommonJS spec. Conformance also isn't an end in itself. Rather, it is a means for improving interoperability in order to reduce the learning curve for developers adopting the SDK and enable module-sharing between our implementation and others. So we should pursue it with great vigor, but not to the absolute exclusion of other goals, like usability and usefulness. Brian has expressed a willingness to tackle CommonJS compatibility issues for the SDK 0.10 release, so assigning this bug to him. I would recommend to play with [nodejs]() and [npm](). NPM is compliant with CommonJS while it has few extensions on top. I do think compliance would benefit both of the projects jetpack and node. This also would mean that addon developers would be able to reuse lot's of libraries that already exist in npm registry (there are lot's of crazy stuff already for example implementation of git in js :)". Ok, I'm seeing four issues to pay attention to: * "cfx run path/to/foo.js" (mentioned in bug 591343). I think this would be nice to have, but not a high priority. * using the "id" key in package.json (mentioned in comment 8 and 9 of bug 552841). We use "id" to hold a cryptographic (hash-of-pubkey) addon identifier in the top-level package that defines an addon. I think this is ok, since these top-level packages are unlikely to ever be added to a "package registry", for whom the "id" and "type" keys are reserved. * adding a "module" name to the module's initial namespace as a sibling of "require" and "exports" (mentioned in bug 591343). So far, the only claimed use of this that I've seen is to either do "require(module.id)" (and get back your own "exports" object), or to pass module.id to somebody else so they can require(it) and get your exports object. In either case, you can just pass your "exports" object directly. * defining the module search order to allow both require("panel") (meaning addon-kit/panel) and require("foo") (meaning importing the "main" module from package foo). This is covered by bug 591338. I'll add it to the dependency list and add notes with our proposed solution over there. Prospectively moving this to 1.0. Brian: is there something here that needs to be done for 1.0? I think with this change inn we can mark it as fixed.
https://bugzilla.mozilla.org/show_bug.cgi?id=606597
CC-MAIN-2016-22
refinedweb
558
57.16
AttributeError: 'super' object has no attribute 'init' OS X 10.10.4, Python 2.7.10 Running the example from the introduction: from Foundation import NSObject) Result: $ python test.py Traceback (most recent call last): File "test.py", line 38, in <module> myInstance = MyClass.alloc().init() File "test.py", line 13, in init self = super(MyClass, self).init() AttributeError: 'super' object has no attribute 'init' Forgot to mention: I’m using a custom-built Python (via Homebrew), not the one bundled with the OS. You need to add "from objc import super" to the top of the module. The builtin version of super doesn't work properly with Cocoa classes. I'm working on a PEP that enhances the builtin super and takes away the need for the import. With some luck that will be included in Python 3.6 Ronald -- On the road, hence brief. This is not a bug, as mentioned earlier you need to use "from objc import super" to get a working version of super. The reason for this is that builtin.super assumes that classes are static and that it is sufficient to peek in the dict of classes. I have written a PEP to fix this in a future release of Python, although that PEP is making slow progress due to lack of time on my end. Removing version: 3.0 (automated comment)
https://bitbucket.org/ronaldoussoren/pyobjc/issues/131/attributeerror-super-object-has-no
CC-MAIN-2018-39
refinedweb
229
77.13
Working with XML from Java is a pretty rich topic; multiple APIs are available, and many of these make working with XML as easy as reading lines from a text document. Tree-based APIs like DOM present an in-memory XML structure that is optimal for GUIs and editors, and stream-based APIs like SAX are great for high-performance applications that only need to get at a document's data. In this series of tips, I walk you through the use of XML from Java, starting with the basics. Along the way, you'll learn lots of tricks that many of the pros don't even know about, so stick around even if you already have some XML experience. I begin with SAX -- the Simple API for XML. While this API is probably the hardest of the Java and XML APIs to master, it's also arguably the most powerful. Additionally, most other API implementations (like DOM parsers, JDOM, dom4j, and so forth) are based in part on a SAX parser. Understanding SAX gives you a headstart on everything else you do in XML and the Java language. In this tip specifically, I'll cover getting an instance of a SAX parser and setting some basic features and properties of that parser. Note: I'm assuming you have downloaded a SAX-compliant parser, such as Apache Xerces-J (see Resources for links). The Apache site has a wealth of information on how to get things set up, but basically you just need to drop the downloaded JAR files into your CLASSPATH. These examples assume that your parser is available for use. The first step in working with SAX is actually getting an instance of a parser. In SAX, the parser is represented by an instance of the org.xml.sax.XMLReader class. I covered this in detail in a previous tip ("Achieving vendor independence with SAX" -- see Resources), so I won't spend much time on it here. Listing 1 shows the correct way to get a new SAX parser instance without writing vendor-dependent code. Listing 1. Getting a SAX parser instance Using this methodology, you need to set the system property org.xml.sax.driver to the class name of the parser you want to load. This is a vendor-specific class; for Xerces it should be org.apache.xerces.parsers.SAXParser. You specify this argument with the -D switch to your Java compiler: Of course, you want to ensure that the class specified exists and is on your class path.. Listing 2. Setting features on a SAX parser This is pretty self-explanatory; the key is knowing the common features available to SAX parsers. Each feature is identified by a specific URI. A complete list of these URIs is available online at the SAX Web site (see Resources). Some of the most common features are validation and namespace processing. Listing 3 shows an example of setting both of these properties. Listing 3. Some common features. Listing 4. Setting properties on a SAX parser. Listing 5. Some common properties With an understanding of features and properties, you can make your parser do almost anything. Once you understand setting up your parser in this fashion, you're ready for my next tip, which will discuss building a basic content handler. Until then, I'll see you online in the XML and Java technology forum. - Review the details of SAX and vendor-independence (developerWorks, March 2001). - Get the nitty-gritty details in the XML specification at the W3C. - See the SAX-standardized features and properties list. - Supplement your skills with Java and XML by Brett McLaughlin (O'Reilly and Associates). - Learn more about Xerces-J -- and download the latest version -- at the Apache Web site. -. Brett McLaughlin has been working in computers since the Logo days. (Remember the little triangle?) He currently specializes in building application infrastructure using.
http://www.ibm.com/developerworks/xml/library/x-tipsaxp.html
crawl-002
refinedweb
649
63.49
This lesson demonstrates how to add a series template to a chart diagram, and bind this template to a datasource. To learn how to manually generate series from a datasource, refer to the Lesson 3 - Bind Chart Series to Data tutorial. Note that although in this tutorial you will use the Microsoft Access database (.mdb), the ChartControl can be bound to a variety of other data providers. For more information on data binding, refer to the Providing Data section of the Chart control documentation. To create a data-bound charting application, do the following. Run MS Visual Studio 2010, 2012, 2013, 2015 or 2017. Create a new WPF Application project. Add the ChartControl component to your project, as you did in Lesson 1 - Create a Simple Chart (see step 1). In the step below, you will bind the ChartControl to the GSP database. The chart will represent this table as bars. To perform design-time data binding, the Items Source Configuration Wizard is used. The Wizard automatically generates the data binding code in XAML. To add a datasource to a WPF application, proceed with the following steps. Locate the ChartControl in the form designer and click its smart-tag. Then, in the ChartControl actions list, click the Data Source Wizard to invoke the wizard. Select the ADO.NET Typed DataSet data technology and click the New Data Source... button. A message box that notifies you of the necessity to rebuild the solution and reopen the Items Source Configuration Wizard appears. Click Ok to run the Data Source Configuration Wizard. The Data Source Configuration Wizard appears. Leave the Database item as is and click Next. In the next wizard page, leave the database model as a Dataset and click Next. On the Choose Your Data Connection page, click New Connection... to create a connection to a database. This invokes the Add Connection dialog. Leave the Microsoft Access Database File as your data source. Then, click the Browse... button and specify the gsp.mdb database as your data connection. By default, it is stored in the following path. C:\Users\Public\Documents\DevExpress Demos 18.1\Components\Data Click OK to close the dialog and then click Next in the wizard. This invokes a message box that asks whether or not to copy the database file to the project and modify the connection. Click Yes, as shown below. In the next wizard page, you can choose whether to save the connection settings to the application configuration file or not. Make sure that the Yes, save the connection as: check box is checked and click Next. Expand the Tables list and select the GSP item. (Make sure that only this item is selected.) To bind the ChartControl to a data source, do the following. Rebuild the solution and open the Items Source Configuration Wizard, as you did in step 2. The newly created data source will be displayed within the Data Sources section. Click Next. Select Simple Binding to bind the control to a plain collection of data objects and click Next. Select the GSP table and click Finish. You will need to remove the BarSideBySideSeries2D series from the diagram (this series was created automatically in step 1), because the series template will instead be bound to a datasource in this lesson. To do this, locate the ellipsis button for the Diagram.Series property, and click it. In the invoked Series Collection Editor: Series dialog, remove the BarSideBySideSeries2D object and click OK. Locate the Diagram.SeriesTemplate property and set it to BarSideBySideSeries2D. To specify a data field for as many series to be auto-created as there are records in this field, locate the XYDiagram2D object's Diagram.SeriesDataMember property and set its value to Year. Next, set the name of the data field that contains the arguments of series template points. For this, set the Series.ArgumentDataMember property to Region. To specify the name of the data field that contains the values of series template points, set the Series.ValueDataMember property to GSP. The final XAML is shown below. <Window xmlns="" xmlns:x="" xmlns:dx="" xmlns:gspDataSetTableAdapters="clr-namespace:Charts_Lesson4.gspDataSetTableAdapters" xmlns:local="clr-namespace:Charts_Lesson4" xmlns: <Window.Resources> <dx:TypedSimpleSource x: <dx:DesignDataManager.DesignData> <dx:DesignDataSettings </dx:DesignDataManager.DesignData> </dx:TypedSimpleSource> </Window.Resources> <Grid> <dxc:ChartControl <dxc:ChartControl.Legend> <dxc:Legend/> </dxc:ChartControl.Legend> <dxc:XYDiagram2D <dxc:XYDiagram2D.SeriesTemplate> <dxc:BarSideBySideSeries2D </dxc:XYDiagram2D.SeriesTemplate> </dxc:XYDiagram2D> </dxc:ChartControl> </Grid> </Window> Run the project. The following image shows the resulting chart at runtime. A complete sample project is available in the DevExpress Code Examples database at.
https://documentation.devexpress.com/WPF/9758/Controls-and-Libraries/Charts-Suite/Chart-Control/Getting-Started/Lesson-4-Use-a-Series-Template-for-Auto-Created-Series
CC-MAIN-2018-30
refinedweb
764
52.66
no new items of business no objections to approving 11th sept minutes no objections to approving 18th sept minutes >2002/07/31: Asir >Draft email to xmlp-comments to close issue 338 with status quo pending >2002/08/01: Gudge >Research infoset properties of header and body children, and suggest >appropriate changes done >2002/09/04: Editors >Send e-mail to xmlp-comments closing all editorial issues. It's OK to 'batch' these up. Editorial discretion on how much we put into each response. pending >2002/09/04: DavidF >Contact implementers regarding updates to their implementation of table 2 of the implementation table pending >2002/09/04: DavidF >Organize (or outsource) an implementer telcon for finding out how the implementation table can be tested pending >2002/09/04: Editors >Incorporate ref #8 (Resolution of issue 221 amended by Noah's proposal) with the understanding that XXXX is a sender fault pending >2002/09/11: DavidF >Provide a plan for possible f2f meetings between Mid Oct through Mid Nov (Dates mentioned on the call) and call for hosts to volunteer pending >2002/09/11: AF Editors >Prepare last call WD EOB Monday for final check next week. done >2002/09/11: Editors >Make state, role, and failure properties URIs dependence on 305 >2002/09/18: DavidF, Carine, Yves >Take AF doc to last call ASAP. done >2002/09/18: Asir >Send request of clarification of details on closing issue 338 done >2002/09/18: Marc >Provide text for issue related to table 17 based on the >"" >but with the text pulled out to before the table as it may apply to >other status codes as well. done >2002/09/18: Editors >Incorporate changes wrt 292 resolution done >2002/09/18: Marc >Send closing text for issue 292 done >2002/09/18: Noah & Henrik >Try drafting a clearer justification to Joseph Reagle on issue 250. done, henrik to send >2002/09/18: Jacek >Send detailed email to xmlp-comment and commentator, 185 and 297. (a draft may be sent to Wg for review) pending - draft sent, henrik comments, updated draft to be sent >2002/09/18: Henrik >Talk to Gudge about recovering the text for issue 353 as per "" done >2002/09/18: Jacek >Write a proposal for new attribute for issue 231 and send it to member list done -- Primer Nilo: nothing to report. -- Spec editors: much of the closing texts (for xmlp-comments) has been sent out, no other items to report -- Attachment Feature doc df: published as a Last Call WD yesterday, comments will go to the existing LC issue list -- LC Issue List carine: it is up to date -- Implementation tracking [2] df: nothing to report -- Media type draft, IANA application df: nothing to report -- Planning Oct/Nov f2f df: we have a potential host for meeting (Sonic). There are 13 attendees so far, and the room holds 20. df asks whether anyone else plans to attend the f2f ... the number of attendees increases to 18... There is no objection expressed to holding the f2f meeting at Sonic in Bedford, MA. -- Discussion of pushback to any issues we have closed no pushback identified -- Part 2, Table 17 discrepancy Per last week's telcon, Marc proposed new text [5], and there seems to be agreement on this with a few small changes [6]. Any objection to accepting the proposal in [5] plus the changes in [6]? no objections to accepting this as resolution to the issue Herve logged this originally - Carine will find the thread and log it as an issue in the LC issues list Data Model Issues -- 302, data model edges orignating and not terminating How should we resolve the situation described in [8]? jk - we discussed this on the previous call and decided to use the original proposal text rather than the amended text that nopw appears in the Editor's copy of the spec. So I thought we were waiting to see the original text again. df: notes that the resolution of editorial issue 353 affects the text under consideration for issue 302 jk: the clarification clarified 302, the other proposal by Martin Gudgin is not satisfactory hfn: the only things that have been changed in the editor's copy are those that are colored (uri pasted into irc); I don't understand what you think has changed.. jk: editors draft when gudge sent mail was different than what it is now in bullet 4. It used to talk about inbound only and outbound only edges, which it doesn't do in the current draft df: in summary then - there was intermediate text form that had been seen by enough persons to be agreed upon and now that text is not in the editors copy of the spec jk: yes df: there is a precedence for resolving these issues - 353 was editorial and 302 is substantive, so 302 take precendence hfn: but the text is not agreed upon until now df: there is enough memory of the existence of a tranistory text that we should recover that text and use it as the resolution for 302 hfn: where is it? nm: can we use CVS to find the text? df: presumably it was a draft checked in. hfn: then we can get back to it df: reopen action to henrik to recover that transitory text noah: for discussion? df: need to see the text again, we can't make a decision until we see it nm: ..to contribute to a discussion rather than adopt? df: yes df: for next week's telcon, we expect to see a variety of text, specifically: one will be the recovered text, a second will be the text that is available today, and a third will be gudge's other proposal. We will make a decision when we've seen all three texts. df: if hfn finds text, then he will post it -- 231, what is the difference between a struct and an array in the edge case? During last week's telcon, we resolved this issue by agreeing to introduce a new attribute whose name and values were to be decided. At the WG's request, Jacek created new text [26] describing the attribute, and initiating a discussion of the name and values. Some changes were suggested and accepted by Jacek to his initial proposal [23]. We do need to choose a name and its values: Name choices-- valueType, content, node, nodeType (all in enc:namespace) Value choices-- simple/struct/array, simple/struct/array/false, terminal/struct/array df: first thing we need to do is to lay out the arguments so that we are clear on what it is that this attr can or cannot do; for example, can its value be defaulted, how does such a mechanism work, etc df: ray, please summarize what you would like to be able to do with schema and this attribute rw: currently when using schemas we have to go to great lengths if something is an attribute. It seems that adding this as a default we could have one unified way that we could find out if it was an array or whatever. Without that I can't see the value of adding another point on all the conditions we have to check for df: what is your proposal? rw: how it would work with schema is not the intent - the thing objected to in the proposal is that the value of the infoset property was true which would prevent a schema or dtd from providing the information about whether is is a struct or whatever. I propose we don't specify that the value of specified has to be true df: so a schema could provde the attribute's default value? rw: yes nm: this issue is not captured properly in the agenda - take out the mention of 'false' in the value choices - clarify the proposal. I don't agree with ray's proposal but I don't think the status quo is very far from what he wants. Part 1 states that none of the values could be provided by a schema and they must be in the message - the question is what Part 1 should. Historically, schemas are optional to avoid scripting problems and dependence on particular schema languages. There is a precedent however - we say in non-normative references that 'can build a better decoded graph if you use schema'. So it is possible to construct two graphs from a soap message - one where you don't use schema (normative), and one uses schema to add more information (non-normative). If you take the spec as it stands, then if you validate the document, and add the default, then you can modify your graph. Nothing precludes this. df: response from ray? rw: A lot of the noah's premises are not true - that was done in email. What I'm proposing does not put constraints on processing model. Very often, the so called normative graph is useless because you cannot tell significant pieces of information what you are decoding df: not everyone finds it useless rw: it depends on the messages - we haven't done the work at the encoding level. I consider script access is a poor implementation when it does not use schema.Oour callers from mozilla etc certainly has all this. df: the point being made is that nothing prevents you from using a schema to validate the encoding rw: yes, but this new attr is somehow not relevent if it came from a schema. If the processing model did modify the infoset it should be recognised as a general approach rather than a prioprietary mechanism marc: my understand of the attribute's purpose was to help with self-description of an encoded graph in the absence of a schema. If you can default in schema and omit from message, then you lose the point of the attribute because the information is not there. rw: .. mh: why? rw: the use case in which the thing is not self-described is important to a lot of people. It's difficult to detect if the incoming things is an array. mh: [puzzled] jk: ray is concerned that if you have a schema then you know the structure, and the attribute is redundant in that case. Keeping it having the infoset property set to true doesn't effect the schema and clarifies our intent df: is there a negative interaction depending upon whether the attr is set or not? rw: I don't understand statement df: if the attr is set, the schema can still validate that piece of xml but in addition it will be able to provide other information about the types and strycture of the xml and there's no negative consequence that the value is true. rw: the point is that the schema cannot easily determine if it is an array jk: if we allow this attr in schema, we will extend schema to understand soap encoding. A schema currently tells you the structure without the use of the attribute. If we agree on that then it is true that the attr doesn't affect schema. rw: there are (??) proprietary and tricky techniques for this and if you are using DTDs etc it wont work nm: proposal: ray doesn't want people to invent this themselves (agreed) - I think this is a fairly large design change. We have a precendent in appendix C - suggest we amend this to say that if default or fixed values are provided to encode attributes then their values may be used to appropriately augment the graph. This will cater for what ray wants and will stick with decisions we've already made. rw: this proposal comes closer than anything I've heard and will solve the itemYype issue. I'm willing to pursue this direction although I am unsure of the exact wording. df: is there any other input? mh: I am concerned about the wording. nm: It was not meant to be exact ... it only introduces an attribute in the enc: namespace df: we just need assurance that this is a reasonable direction rw: I believe it is df: we need somebody to volunteer to produce text.... nm: I'll take it no objections!! discussion to follow on dist-app df: can we decide what are the choices of values for this attribute and what is the attr's name? general: yes df: what are the choices for the values jk: simple/struct/array and terminal/struct/array df: there was some discussion that the use of the word 'terminal' may be confusing. Hence it looks like the former set of values are preferable, is there discussion? no discussion df: Is there any objection to specifying the set of values to be simple, struct, and array? no objection df: it is decided, simple, struct, and array are the permitted values for this attribute df: are there preferences for the name of the attribute? jk: because we allow the value 'simple', I think that valueType is the most appropriate attr name because the word 'simple' is only used in the spec in the valueType (??) rw: has anyone considered anything that doesn't involve 'type', like 'class'? mh: this may be reasonable because we are 'above' the type level? df: to avoid naming black hole on the phone, we'll take the naming discussion to email df: is there anything anything else to consider on this issue? jk: which list should we use for the naming discussion and who will send the first posting? df: dist-app action: jk to initiate discussions wrt name of attribute RPC Issues -- 306, is use of Appendix A optional? Do we indeed have consensus that Appendix B (nee A) is referenced from the RPC section and its usage is "SHOULD" [13]. hfn: the initial proposal was from me, and I'm fine with the Appendix being referenced from the RPC section - I have no deep feeling on this. no discussion df: is there any objection to closing this issue with the proposal? no objections df: the issue is closed with the proposal action: df to send mail to xmlp-comments and originator -- 298, RPC array representation unnecessary Some support has been expressed for the issue [24]. df: Jacek, please outline the rationale for this issue jk: in soap1.1 we had named attributes and this had been the model all the time until john ibbotson came up with a proposal to add array represntation of rpc. In current implementations there are named attributes (they are soap1.1). The LC WD asked for feedback as to whether this should be kept or removed. Systinet's position is that it should be removed, and we note that wsdl does not support it yet. hfn: same feedback from microsoft rw: I am fine with proposal as long as people are clear as to what it means. I thought that the reference over-trivialized the issue. There is a fundamental difference here that struct cannot be accessed by a positional index, yet many implementations access parametrs by position rather than by name. So, we want to make sure that people realize that we are throwing something away df: what goes away? jk: remove from the RPC all mention of the array representation, a small change no further discussion df: are there any objections to closing the issue by removing all mention of array representations? no objections df: the issue is so closed action: jk - send mail to xmlp-comments action: editors confirm with jk to identify wording -- 299, RPC return value accessor is too complex Martin Gudgin proposes to close the issue by taking no action [25]. Does the WG agree? rw: I was partly involved in the creation of that mechanism, and my preference was not to have the return mechanism at all, but needed something to prevent it from interfering with the schema. I can't see why a return value accessor is actually required, and if people want to get rid of it that's ok with me jk: I agree with ray df: what would be the impact to the spec? jk: if we decided to remove this mechanism, the issue the impact is minimal in the text but it might be viewed as a big change hfn: this is indeed a concern. It will break implementations that already exist, and we need to evaluate the impact on interop work. jk: agree with ray, but not reply to gudge, because that's ok (??) rw: yeah df: if it was earlier, would we have pushed this change more strongly? I am think I am hearing to go ahead with Gudges suggestion, and acknowledge there is a big risk in making such a change. hfn: agree jk: yes rw: users can avoid the mechanism if they don't like it df: is there any objection to closing issue 299 without taking any action? no objections df: issue 299 is closed with status quo action: mail to xmlp-comments and originator - Don Mullen SOAP-supplied features -- 304, no one-way MEP Martin Gudgin has proposed that we close this issue by taking no action [29]. Does the WG agree? df: yves has sent a proposal that we include a Request MEP. Jacek, why do you want this mep added? jk: we mention soap abstract model that has oneway mep and it is mentioned elsewhere. It's been in the mindset all along, just not in the spec, and we should pursue this. hfn: I am wondering whether this is a nice-to-have or whether it's fundamental to what we have? jk: its always been thought that soap can be used for one-way messages. Wsdl has one-way operations (one of the basic things) so it is not really a nice-to-have. mh: we would need to change the http binding to include a oneway mep hfn: or provide another http binding yl: oneway can only be in email and not in http which already has a response mh: but this is in the other direction! nm: I agree that the world will need oneway mep, but I am concerned that doing it right will have knock-on issues. Also, our meps become real when they are implemented in a binding, so if we do this, the mep should be reflected in a binding, and the only one we have is http which is inherently request response. So this is a bit bigger to do right than what I want to do to get out of last call. So on balance, I suggest leaving it out of this spec. jk: I dont disagree .... I think ws-desc wg will ask us to this work df: why? jk: wsd has oneway ops in the core... df: for soap 1.2? jk: don't know, it may be strange if those guys do it... df: is there any objection to closing this issue by taking no action? yl: it may be good to postpone so if we find time we can still get a proposal into the soap1.2 spec df: this will be a major change, which will send us back before last call, and if we want to make it real we'll need a binding and that will make it a huge task. yl: I don't think it is a big change mh: people should be able to make their own bindings nm: my point is that there is no point in putting the mep in the spec when there is no binding in the spec. This is too big a change to make at last call. We should say its structurally decoupled and it's just a policy matter of who does it (a oneway mep) later. df: are there any objections to closing this issue by taking no action? no objections df: issue 304 is closed with the status quo action: send msg to xmlp-comments and commentator - jk -- 305, SOAP-Response MEP does not need sending+receiving states Henrik has proposed a resolution [27] that has support although it will require fairly extensive changes to the HTTP binding section. Do we accept the proposal? jk: the problem is that sending+receiving state implies that something is being sent there hfn: this is a change in the state machine, mh says it will be editorially painful mh: dave.orchard likes it hfn: that makes me feel batter! mh: we need to decouple the meps in the http binding, which means more text and tables hfn: can we acheive the same thing by just putting in text to say that for the request response mep both those states are the same? mh: maybe df: sounds substantial - should we look at text before deciding? hfn: editors take action item to come up with some ideas (additions to existing proposal) and see how we can achieve this. Question is whether this a good direction - I consider this to be mostly editorial really df: ask the WG? action: hefn, mh: come up with one or more concrete proposals HTTP binding -- 319, clarification that HTTP does define base URI Henrik has proposed a resolution [28]. Do we accept this proposal? hfn: in the binding framework we say that underlying protocol can define a base uri. HTTP does this, and so we should say so. nm: I basically agree - we should make clear that our http binding uses as its base the base that http uses itself hfn: that's fine [at this point the minute-takers phone exploded and he was off line for about 40 seconds] df: is there any objection to closing the issue with the proposal? no objections df: the issue is so closed action: hfn to send msg to xmlp-comment Envelope issues -- 356, Allow unqualified elements as children of Body After a lengthy discussion [9], there seems to be consensus on new wording. Is there any objection to accepting the proposed wording? rw: is this a change to the status quo, and if so, what was broken? hfn: we required that all elements in the body are namespace qualified. The comment came back saying there are lots of scenarios where no qnames. rw: and they couldn't just wrap in an element? mh: but question is why do they have to? rw: ok nm: I agree nothing breaks but...agree since the body interpretation has been loosened up...but...feel that the only way we keep things in order in this very loose world is by using namespaces. It just kind of makes the message a lot less self describing... ... nm: some day we might regret this (!) hfn: it is true for any XML dm: I agree with noah; it doesn't feel right, lots of people would be dispatching off the children of the body hfn: yes, people should use namespaces - there's nothing in our spec which says anything breaks ns or no ns mh: could put some wording in there to recommend using namespaces hfn: there is some wording in there 'strongly recommend' nm: it should say SHOULD df: so I think we are asking for the MAY to become a SHOULD hfn: from a compliance point of view this means conditional [basic diagreement of what SHOULD means - resolved when RFC consulted] straw poll: MAY+note vs SHOULD .... the SHOULDs have it. proposal: close 356 using the text referenced in the agenda but changing the MAY to a SHOULD further discussion action: write the motivation of why there should be a SHOULD - noah revisit this issue next week. [meeting adjourned]
http://www.w3.org/2000/xp/Group/2/09/25-minutes.html
CC-MAIN-2014-41
refinedweb
3,945
64.75
Clainos Machemedze1,330 Points am getting a 'Task one no longer passing' message after writing up the last task of the odd_even challenge I have tried writing the code below the function or above it but still the same message .Please help am behind time in terms of progress. import random start = 5 while start: fig = random.randint(1,99) if even_odd(fig): print("{} is even").format(fig) else: print("{} is odd").format(fig) start -= 1 def even_odd(num): # If % 2 is 0, the number is even. # Since 0 is falsey, we have to invert it with not. return not num % 2 1 Answer Alex KoumparosPython Web Development Techdegree Student 35,921 Points Hi Clainos, You have two issues in your code. First, your while loop occurs before your function definition. When the interpreter first hits your while loop, it hasn't yet seen your even_odd function definition, so when you call even_odd in your loop, Python doesn't know what even_odd means. The fix for this is simply to move your whole while block down below your function definition. The other issue is the location of the closing parenthesis for your print statements. format() is a method on the string, so it should occur immediately after the closing quotes. Then after the closing parenthesis of format, you then add the closing parenthesis for the print statement. Hope that's clear Alex Clainos Machemedze1,330 Points Clainos Machemedze1,330 Points thanks was becoming frustrated
https://teamtreehouse.com/community/am-getting-a-task-one-no-longer-passing-message-after-writing-up-the-last-task-of-the-oddeven-challenge
CC-MAIN-2020-45
refinedweb
244
72.97
The FOR command has become the batch language's looping construct. If you ask for help via FOR /? you can see all the ways it has become overloaded. For example, you can read the output of a command by using the for command..] "Of course, one could invent a brand new batch language, let’s call it Batch² for the sake of discussion, and thereby be rid of the backwards compatibility constraints. But with that decision come different obstacles. " I don’t think these obstacles need to be explained to anybody with any dependance on VB6 code. I know it’s hard to conceive of, but Microsoft doesn’t actually have to solve every problem under the sun. Perl is a great scripting language, and is portable to Unix systems. There’s also Python, Ruby, tcsh; all are free, I believe. I’ll grant the big advantage of knowing your language is already installed (.BAT). I think that calls for command namespaces. Yeah, but aren’t these new batch features added on a pretty ad-hoc basis already? I don’t recall having access to FOR/F under Win98, for instance. What if you were to provide a converter along with the new batch language? You run your old batch program through the converter, and voila, you’re now using the new langauge… but why the hell was the choice command removed from the NT line? I hate to admit it, but the scripting languages in Unix blow away what Microsoft offers. It would be awesome if a future version of Windows included the scripting capabilities of tcsh. New batch language? I thought it was already here in WSH and cscript.exe. It’s probably the legacy and excess backward compability, but every time I try to do something with BATs, it almost get the feeling that the syntax has delibarately been designed such that you can’t do anything useful with it. For example: FOR /F "tokens=*" %i IN (‘ver|find version’) DO echo %i | was unexpected at this time. Why on Earth can’t you use | there? This is just one example out of million. Fortunately we have cygwin these days… I’m with AndyB: these days, if I have the urge to write a batch file, I write javascript and execute with cscript.exe. However quirky (a lot!), I love the batch language! my only gripe is that there’s no clean way to break out of a for loop. Sure, it looks slightly more like line noise than Perl (can you tell what does the below code do?), but you can do very interesting stuff with it: @echo off setlocal setlocal enableextensions setlocal enabledelayedexpansion set startdir=%1 if .!startdir!. == .. ( set startdir=!CD! ) call :visit "!startdir!" goto :EOF :visit setlocal set dir="%~dpnx1" pushd !dir! && ( ( ^ for /f "tokens=*" %%D in (‘dir /ad/b’) do ( ^ call :visit "%%D" "%%~aD" ) ) & popd ) || ^ goto :EOF ( echo %~a1| findstr /R /X ……..l 2>&1 > NUL ) || ( ^ rd !dir! 2> NUL && ( echo !dir:~1,-1!>&2 || exit /b ) ) KJK::Hyperion: Thanks for the tip. "despite not being really documented anywhere, it’s not hard to understand by long and painful trial-and-error" – Exactly what I mean :) Trial-and-error is what I’ve been doing, Trial-and-eventual-success muvh much less. BAT must have the lowest productivity rate among the languages I’ve ever used (and that includes AutoLISP…) Regarding find vs. findstr – my mistake, didn’t verify that. "I hate to admit it, but the scripting languages in Unix blow away what Microsoft offers. It would be awesome if a future version of Windows included the scripting capabilities of tcsh." What do you need that WSH doesn’t provide? "How did one get the output of a program (or input from the user) into a variable in the DOS era?" For simple things, my approach was to redirect output into another batch file that set the variable, and then call the second batch file. It’d look something like this (making allowances for long file names): copy __set_cd_template.bat __set_cd.bat cd >> __set_cd.bat call __set_cd.bat del __set_cd.bat __set_cd_template.bat would contain something like the text "set CURRENT_DIRECTORY=^Z". The lack of a space between the equal sign and the EOF (^Z) was important. Otherwise, the output from the file got redirected into a second line. You could also do interesting things with edlin. I remember pretty clearly implementing the equivalent of pushd/popd with edlin and batch files. pushd would append a line that read "CD<current_directory>" to a global batch file in a well known location. popd would call the global batch file and use edlin to truncate the last line. It would (quickly, behind the scenes) take you through every directory in the stack, but you’d end up where you were when you said ‘pushd’. Around the time time, chkdsk would report percentage fragmentation of the files in a given directory. I built a set of batch files atop pushd/popd that traversed the entire directory hierarchy of a hard disk and dumped a report of the fragmentation percentages in a well known location. (Or you could use Norton… :-) I used to work at a company whose complex build system ran entirely on 4DOS/4NT batch files. It was probably the first system that was designed in-house back in the DOS days, and they just stuck with it. It scaled horribly (ie, not at all) and was hard for newbies to learn, but since every product used it, no one wanted to change it. So breaking batch files would literally make the company unable to function. Perl is not good as a replacement. You can’t easy use it on CD or etc. Lot’s of files needed to run it. (ok actually I have single-file perl4.exe for DOS ;) And it is not batch language – It’s not easy to generate programmatically perl script which will be run next. It’s easy to do with BAT. Thanks all for hints on JS/BAT/find* and explanations of BAT parsing. Nekto2: "Perl is not good as a replacement. You can’t easy use it on CD or etc." I like Perl, use it daily, but this is 100% true. Then I have discovered KiXtart (). Not perfect, or as powerfull as Perl, but worth more than a look. mschaef: the copy sethack.bat tmp.bat foo >> tmp call tmp.bat del tmp.bat hack is what I used, too :). But if I recall correctly ancient DOS versions did not even have CALL… :) "?" So all features available in batch files under WinXP are also backported to Windows 95? I find that hard to believe… "Welcome to evolution." So we can agree it’s not Intelligent Design? [wag] " but why the hell was the choice command removed from the NT line?" Because you can accomplish the same thing with set /p: set /p answer=Select from [a, b, c]: if %answer% == "a"… " …"Those who don’t know Unix are doomed to reinvent it, poorly."" The general concensus from what I’ve read is that MSH (aka Monad) is as (or more) powerful a shell than anything offered by *nix. "So all features available in batch files under WinXP are also backported to Windows 95? I find that hard to believe…" If you read all of what Raymond wrote, you see that the argument is that any batch file will run in Windows XP and 2003, not the other way around. If Windows XP and 2003 used another shell, with another batch language syntax, no batch file from prior to XP would run in them. In other words, if any version of Windows to ship with Monah (MSH) expected .bat files to contain Monah (MSH) syntax, all existing .bat files would break. From the link provided… ""Once you have access to the beta site, download the beta version of the .NET Framework 2.0 (a 24mb download), and the MSH Preview version (4.1 mb) from the downloads page. Once downloaded, run Dotnetfx.exe to install the .NET Framework, then run Windows command shell preview.exe to install MSH."" I consider 28Mb of downloads (or inclusions on CD/DVD) and an installation of the .NET framework at a minimum, just to get a new shell scripting language is a little too heavy. One of the light-weight standalone scripting languages around seems a better solution. e.g. 4DOS (although it is not free) or one of the programs listed above. Did anybody say "Cygwin"? Raymond just explained a cool trick possible with batch files and noted how popular they are inspite of being so ‘Kludgy’. *Nobody* mentioned anything about competing with other scripting languages. LOL Perl ! Ruby ! Btw, given this is The *Old* New Thing, let me ask an outdated question. How did one get the output of a program (or input from the user) into a variable in the DOS era? That is, before the WinNT era FOR /F and other modern folly. bat man: you need to escape the "|". Try to get familiar with the extremely idiosyncratic and extremely quirky multi-stage parsing of batch files (for example: don’t *ever* try to nest FOR loops if you use the !variable! syntax – string the loops in a pipeline instead); despite not being really documented anywhere, it’s not hard to understand by long and painful trial-and-error It helps to know, for example, that the batch language has *no* constructs whatsoever (with the possible exception of labels) – rem, echo, for, if, etc. are all built-in commands; being built-in means, among other things, that they have a custom parser (note how echo outputs the string argument verbatim, preserving whitespace), and that they are handled by a slightly different tokenizer (which is why "cd.." is equivalent to "cd .." but "findstr.exe" isn’t equivalent to "findstr .exe"); even the parenthesis isn’t a construct, it’s a built-in command with a custom parser that handles line breaks itself, which is why you can write this: if .%var%. == .. ( command command ) else ( command ) but not this: if .%var%. == .. ( command command ) else ( command ) since the "if" built-in doesn’t handle line breaks in any special way In your case what you *really* mean is: FOR /F "tokens=*" %I IN (‘ver^|findstr /i version’) DO @echo %I Note the use of uppercase letters for variables (it’s unambiguous if you’ll ever use the %~xxxX syntax, and chanches are you will), the escaped "|", the use of findstr instead of find (find can only search disk files), and the @ to suppress the echoing of the echo command …"Those who don’t know Unix are doomed to reinvent it, poorly." ;-) About find vs. findstr — find can search more than just disk files. For example, just today I’ve done a: netstat -n | find ":25" to see the states of all the open, closed, etc. SMTP connections on a 2K Server box. Unless you have to use findstr in XP? If so, then *that* reeks of backwards-incompatibility… One of the things not mentioned is that batch files can’t be self modifying. (Or rather they can if you are extremely careful). The batch file is closed before each command is executed, and then reopened and the next command read from the absolute location that the previous one ended. You would get wierd errors if you moved the contents of the file around at all. Cooney wrote on Friday, September 09, 2005 6:29 PM: > Why is that? We have established > methods for dealing with this already. > Leave standard batch files alone and > make something new and clean. There is something new and clean already. As many others pointed, it’s called WSH (Windows Scripting Host). WSH exists since Win98 and has two excellent scripting languages out of the box: JScript and VBScript. Implementation of other popular languages can be found easily, including Tcl, Python and Perl, so beloved by Unix admirers (search web for ActivePerl). Personally, I almost always prefer WSH over batch files. I agree with others here that batch script has done its job well for many years, but now it’s dead-end path of scripting evolution. Moreover, I find batch script as counterproductive quite often, because it gives you illusion that no programming efforts are required; "it’s just few CMD commands away" thinks typical developer and starts typing. Then he/she needs to peek in documentation before typing almost every command, because nobody remembers cumbersome syntax. Then developer is stunned by results of first run and starts to debug it (with echo, of course, there is nothing else). Then poor guy cursing his way through opaque rules of parsing and execution context of batch scripts. After couple of hours of exhausting coding it works finally. However, nobody can understand it from first glance including the author. What about maintenance now? That’s why I don’t like batch scripts. There is mental barrier that developers often need to overcome before start writing JScript or Perl: it looks like "programming". It has functions and variables, debuggers, even editors with syntax highlight. It makes impression as full blown programming in contrast with "simple" batch files. Many times people think "what? I won’t start coding now for such simple task; there is batch script for that!", but they often forget that tasks tend to expand quickly (usually as early as you’re writing very first solution) and batch script reaches its limits even quicker. As a result, coding efforts invested in batch scripts are much higher than those for JScript/Perl for comparable tasks. And I even don’t want to think about batch script maintenance vs JScript maintenance. "You pipe text around." Actually, often binary data is piped around, for instance, on some *nix systems, where ‘tar’ dosent support gzip directly, the following command will extract a .tar.gz archive: gzcat file.tar.gz | tar xvf – >.) I also feel the need to pull up a quote from Jeffrey Snover, Architect for Administration Experience Platform (and one of Monad’s lead developers). "So, Monad is a way to automate the system. It has four components. First is it’s as interactive and composable as kshell or bash. So if you’re familiar with those types of shells, we have those capabilities. It’s also as programmatic as say Perl or Ruby. Third is we’ve focused in on some production-oriented capabilities of it, like VMS’s DCL or the AS400. We’re really focused in on trying to solve admins’ problems. And then fourth, we go and we take all the management information in your system and make it as easy to find as files in the file system." — One and two are already covered by *nix, as is the fourth. In fact, the fourth is one of UNIX’s strong points, because process and device information already appear as files in the file system (see /dev and /proc). The third point seems to deal with DCL’s ties to VMS’s security system… but in *nix this isn’t all that necessary, because security settings are already managed through shell utilities. Even ACL lists are managed that way by the *nix systems that support them (Linux and some of the BSDs). Anyway, I think I’ve said enough on the subject, because this is starting to get rambly. >>.) Why is it more powerful? Because the script languages in traditional UNIX is based on pure text streams. You pipe text around, which you then have to parse. MSH pipes *objects* around instead, which make it far more powerful. In fact, these are .NET objects. And if you know .NET then you can do a lot of magic in there :) Some videos of "Monad" are available at Channel9: And they also have a blog where you can read more about it:. Anyways, I really like it :] Raymond: you can’t use that "backward compatability" excuse for all idiotic "features" that windows have. And with batch files windows misses the point big time. It is hard to believe, but *nixes have virtually unchanged shell languages for about 30 years. And it is all backward compatible. So please, have a respect for your readers: call things by their name. Bug is a bug and mistake is a mistake and mess is a mess. And batch files are design mistake – plain and simple. YEAH!!! I wish the tar and gzip people used XML so I wouldn’t have to create my own parsers. I hate it when those unix guys can’t follow a standard. note: sarcasm at many levels. O…K… I’m thoroughly confused. ;-) > You pipe text around, which you then have to parse No, *you* (as the programmer) don’t usually have to parse it, unless you dream up your own file format(s). That’s what expat, libxml2, Lisp’s s-expression interpreter, etc. are for. As for the utility of passing objects around, yes, sometimes it would be helpful (but then, in most scripting languages, you’d just use XMLRPC, and in Lisp, you’d use s-expressions), but not always. Especially not when the format of the objects you’re passing around is undocumented, and binary, and you’re having problems with whatever’s going on on one side or the other. It’s really easy in Unix to dump the output of a command to a file and look at it in a text editor; it’s not too much more difficult to redirect an XMLRPC "call" to a file and look at it. It’s also not too difficult to dump a bunch of s-expressions to a file. And if you know a bit about XML, or the structure of Lisp, you can generally figure out what the problem is. Whereas, what do you do if you’re having problems with the way MSH is passing objects? The difference is transparency. I have yet to find anything that beats the power of the Bash shell. Combine that with say, Perl/Python/Ruby/whatever, and you have the most complete shell environment you could wish for. Real men use c Backwards compatibility means being compatible with your mistakes, too. #! means never having to say you’re sorry. /couldn’t resist PingBack from PingBack from PingBack from
https://blogs.msdn.microsoft.com/oldnewthing/20050909-24/?p=34263
CC-MAIN-2017-34
refinedweb
3,066
73.68
28 October 2009 21:47 [Source: ICIS news] HOUSTON (ICIS news)--US tyre producer and styrene butadiene rubber (SBR) consumer Goodyear Tire & Rubber Company anticipates year-over-year global industry growth in 2010, especially in markets featuring high-value-added features, the company said on Wednesday. Goodyear, which produces SBR for its tyres, said it also anticipated growth in tyres with larger rim diameters and fuel-efficient technology. North American sales for the quarter ended 30 September totalled $4.4bn (€3.0bn), down 15% from third-quarter 2008 figures amid lower demand and other factors. Unit volume tyre sales in the third quarter were down 21%, while replacement tyre shipments were up less than 1%, reflecting the slowdown in tyre-related businesses. However, third-quarter sales were up 11% from the second quarter, the Akron, Ohio-based tyre maker said. Tyres are a key end market for SBR. An SBR market source said the US market saw an uptick in demand in the third quarter in the aftermath of destocking during the first half of the year, and as a result of the ?xml:namespace> However, auto sales, and hence SBR demand, have slowed again, the source said. Low demand would make it difficult for SBR producers to push through price increases that result from higher production costs, producers said earlier this year. Goodyear’s segment operating income for the third quarter was $275m, up from $24m in the second quarter and up from $266m in the third quarter of 2008. Third-quarter net income was $72m, up from $31m for the same quarter a year earlier. New savings in the third quarter were $195m, resulting in total savings of $540m during the first nine months of the year, Goodyear said. Additional reporting by Brian Ford ($1 = €0.68) To discuss issues facing the chemical industry go to ICIS connect
http://www.icis.com/Articles/2009/10/28/9258995/us-goodyear-forecasts-tyre-industry-growth-in-2010.html
CC-MAIN-2013-48
refinedweb
310
60.14
Created on 2008-12-21 08:53 by cdavid, last changed 2016-04-15 17:07 by Ivan.Pozdeev. I believe the current pyport.h for windows x64 has some problems. It does not work for compilers which are not MS ones, because building against the official binary (python 2.6) relies on features which are not enabled unless MS_WIN64 is defined, and the later is not defined if an extension is not built with MS compiler. As a simple example: #include <Python.h> int main() { printf("sizeof(Py_intptr_t) = %d\n", sizeof(Py_intptr_t)); return 0; } If you build this with MS compiler, you get the expected sizeof(Py_intptr_t) = 8, but with gcc, you get 4. Now, if I build the test like: gcc -I C:\Python26\include -DMS_WIN64 main.c Then I got 8 as well. I believe the attached patch should fix the problem (I have not tested it, since building python on amd64). Is there any change to see this integrated soon ? The patch is only a couple of lines long, thanks Can you please provide some setup instructions for mingw-w64? What URLs should I install in what order, so that I can compile Python? I think compiling python itself with it would be quite difficult - I have never attempted it. My 'scenario' is building extensions against the official python for amd64 on windows. The quickest way to test the patch would be as follows: - take a native toolchain (by native, I mean runs on and target 64 bits windows subsystem - I have not tested cross compilation, or even using 32 bits toolchain on WoW). The one from equations.com is recent and runs well: - builds a hello-world like python extension from the command line: (I am sorry, the wiki page is bit messy, I will clean it up). Since we use our own distutils extensions in numpy, I don't know how much is needed for support at the stdlib distutils level. If that's something which sounds interesting to python itself, I am willing to add support in python proper for mingw-w64. Lowering the priority. It's too difficult to setup the environment to be able to reproduce the issue being fixed.. I can make the toolchain available to you as a tarball, though, so that you can easily test from a windows command shell without having to install anything. Ok, it looks like following gcc 4.4.0 release, there is an installer for the whole toolchain: This installs gcc (C+Fortran+C++ compilers, the download is ~ 40 Mb), and it can be used from the command line as conventional mingw32. Hopefully, this should make the patch much easier to test. >. As a principle, I always try to reproduce a problem when reviewing a patch. In many cases, doing so reveals insights into the actual problem, and leads to a different patch. That the patch is "harmless" is not a convincing reason to apply it. > I can make the toolchain available to you as a tarball, though, so that > you can easily test from a windows command shell without having to > install anything. That would be nice. I have now tried reproducing the problem, and still failed to. I downloaded, from, the distribution mingw-w32-bin_i686-mingw_20100123_sezero.zip. With this, I get c:\cygwin\mingw64\bin\gcc.exe -mno-cygwin -shared -s build\temp.win-amd64-2.6\Release\w64test.o build\temp.win-amd64-2.6\Release\w64test.def -LC:\Python26\libs -LC:\Python26\PCbuild\amd64 -lpython26 -lmsvcr90 -o build\lib.win-amd64-2.6\w64test.pyd c:/cygwin/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/4.4.3/../../../../x86_64-w64 -mingw32/bin/ld.exe: cannot find -lmsvcr90 collect2: ld returned 1 exit status How am I supposed to link with this toolchain? If linking with -lmscvr90 is incorrect, what should I use instead? If it is correct, where do I get the proper import library from? Hi Martin, It was nice meeting you at Pycon. I finally took the time to set up a windows 64 bits environment, and here are the exact steps I needed to do to reproduce the issue, and fix it by hand: - Start fresh from windows 7 64 bits - clone python hg repository, and checkout the 2.6 branch (path referred as $PYTHON_ROOT from now on). - build python with VS 2008 pro, 64 bits (release mode) - install mingw-w64. I used this exact version: mingw-w64-1.0-bin_i686-mingw_20110328.zip (release 1.0, targetting win64 but running in 32 bits native mode), and unzipped it in C:¥ - I compiled a trivial C extension in foo.c as follows: C:\mingw-w64\bin\x86_64-w64-mingw32-gcc.exe foo.c -I $PYTHON_ROOT\Include -I $PYTHON_ROOT\PC -shared -o foo.pyd $PYTHON_ROOT\PCbuild\amd64\python26.lib -DMS_WIN64 The patch removes the need for defining MS_WIN64. I don't know exactly the policies for which branch this could be applied - anything below 2.6 does not make much sense since most win64 users use 2.6 at least. Ideally, it could be backported to 2.6, 2.7 and 3.1 ? Has anyone looked at this? I'm trying to build gdb with Python enabled with mingw-w64 (Python 2.7.1 with manually created import libraries), but have to manually define MS_WIN64 in the CFLAGS. The patch only does what's right (i.e. define a macro that should be defined). David has nicely explained what needs to be done to reproduce the issue. Thanks! I'm also using this patch successfully (together with). John: in the current versions of the toolchain, Python's configure fails for me. I follow steps 1..3 of "Steps to date". Then running ./configure fails saying that it does not work. I then tried alternatively these three approaches: 1. set PATH to include /mingw/mingw/bin 2. set CC to /mingw/mingw/bin/gcc.exe 3. set CC to /c/mingw/mingw/bin/gcc.exe Even though I can run gcc -v just fine, configure fails with configure:3593: checking whether the C compiler works configure:3615: /mingw/mingw/bin/gcc conftest.c >&5 gcc.exe: error: CreateProcess: No such file or directory^ configure:3619: $? = 1 configure:3657: result: no So apparently, mingw has some problem mapping the file name back to a Win32 path. Marting, this issue is about building python extensions with mingw-w64 not about building python itself. Martin, Ralf is right and my as previously linked is about building a python extension. I should have been more explicit about that. FWIW I found that the configure scripts on MinGW-w64 generally work fine if you add a "--build=x86_64-w64-mingw32" argument on the ./configure command line. Then you should only add /mingw/bin to your path (and edit /etc/fstab to map c:/mingw64 to /mingw). Adding those subdirectories to the PATH doesn't seem to be successful; I suspect that those binaries are off the standard path for some reason, eg internal use by GCC only, or something (GCC seems to do some clever tricks with relative paths, so I'm sure it's important that you use the correct starting executable). There's no need to discuss or even run configure scripts. Martin, please reread the OPs original message. It's easy enough to reason about the issue instead of trying to reproduce it. This bug is still present in Python 2.7.5 on Windows 64-bit . I am currently providing the following instructions for MinGW-w64 users wanting to link to Python27.dll: I want to add that this bug led to bizarre behavior (described here:) when using 64-bit Boost-Python compiled with Mingw-w64 in Windows 7. Boost-Python and programs linked to it compiled, but failed at run-time with segfaults. The solution described by jdpipe worked for me, but I only found it after a day of fruitless debugging attempts. The problem is still present in python 3.4 with mingw gcc 4.8.2. I was having trouble with compiling radare2's python swig bindings. The solution described here: also fixes my problem when using generated dlls. Still a problem with mingw-w64 gcc 5.1 and Python 3.4.3, time to fix this? Well, the time to fix this would have been six years ago. The python core developers have shown a disinterest to fix problems with gcc on windows for a rather long time. I wouldn't expect this issue to be fixed. Supporting mingw building of extensions has always been a complicated process, mainly because there appear to be many variants of mingw (and in particular, there seem to be multiple 64-bit builds). Add to this the fact that cygwin is sometimes used as a cross-compiler (often using the same code in distutils) and it's hard to know what to support, or what will break. I would be very happy to see support for building of Python extensions using mingw (even though I use MSVC myself). However, in order for core Python to have a stable target to aim at, I believe the users of mingw need to work to provide an easy to use, unified platform that can be the target for core support. I think the following would go a long way to giving the core developers (specifically me, I'd be willing to work on the core end of things to improve the situation) a chance of providing maintained support for using mingw as an alternative compiler for building Python extensions on Windows: 1. A well-documented and maintained process for setting up 32-bit and 64-bit mingw environments which can be used to build Python extensions. These need to use stable, well-maintained builds of mingw - ones that can be relied on to remain available in the long term (the links to equation.com and the Scipy instructions in this issue are now dead, for example, and both TDM and Mingw-W64 have been mentioned as toolchains, but it's not practical to try to support both - by support I mean at least that "some core developer installs them both and checks that things work"). Documentation patches to explain how to set up the official mingw build environment (once the community has agreed what it is) can be submitted to the packaging.python.org tracker. 2. Python 3.5 will be built using the MSVC 2015 "Universal CRT". From what I've seen on the mingw lists, there seem to be no plans for mingw to support this CRT in any form. Without that support, it's not clear to me how mingw builds for Python 3.5 will be possible, so the mingw community needs to work out how they will provide a solution for Python 3.5. Timely solutions for issues like this are needed if mingw is to be a supported build environment going forward. While it's understandable that the mingw community hasn't had much encouragement to work on things like this in the past, it is something that needs to change if full core Python support is to happen. So that's the position, really. If the mingw community can come up with an "official" mingw distribution that supports building extensions for Python 32 and 64 bit, including for 3.5+, I will look at how we ensure that distutils supports that distribution going forward (that will likely mean *dropping* support for other mingw builds at some point). I know the Scientific Python users make a lot of use of mingw for builds, so maybe their standard build environment is a good target to aim for. Bluntly, I'm not sure the community can achieve the above. It's a pretty hard goal, and lots of people have been working on it for a long time. But that's precisely why the core developers can't offer much in the way of support for mingw - it's too much of an unstable target as things stand. (Note for example, that I currently have *three* mingw 64-bit installations on my PC, and they all work slightly differently - to the extent that I have projects that I can *only* build with particular ones). With regard to this specific patch, it seems that according to msg132798 it's easy enough to work around the issue by manually defining the symbol. The patch seems to apply cleanly, still, but I'm not aware if it has been tested on cygwin, for example. So, like Martin, I'm reluctant to apply it just because it *looks* harmless. Particularly this close to the 3.5 beta deadline. For what it's worth I dropped support for mingw32 in psutil for exact the same reasons. As such I cannot imagine how hard could it be to add and maintain support for mingw in a much larger project as Python. Hunting around I found this on #3871. From #17590 upwards there are perhaps 25 issues with mingw in the title, so there's certainly work to be done. Please don't look at me, I'm simply not interested. Please note that Paul is pretty new to the core team, and is a crossover with the packaging folks (which is mainly where the mingw issues lie). What this means to the mingw community is that with Paul on the core team and willing to work on the support, the mingw users have the best chance they've ever had of making progress on this. Does this mean scarce Windows resources being diverted off to what I consider a side show? Unless it's categorically stated that mingw is officially supported in which case fine, provided the experts index and everything else associated with official support is updated accordingly. Not at all. Mingw support is important for the scientific community, as I understand it, and I'm willing to help there if I can. That won't be at the cost of other areas I can contribute to, but I consider packaging as much my area of expertise as Windows - and mingw support covers both of those areas. To give some background, I was involved in adding mingw support for the MSVC 2010 builds of Python, which involved working with the mingw project on getting -lmsvcr100 support added. That was a battle, and I fully expect universal CRT support to be even harder[1]. I do *not* expect to get involved in that, but as I said, I do want it (along with a single mingw distribution blessed by the Python mingw user community) as a prerequisite for a higher level of core support. That's (IMO) a *very* high bar, and I don't expect it to be easy for the mingw-using community to achieve it. But if they do, then the amount of effort involves deserves some recognition, and for my part I am offering some of my time improving core Python support on that basis. [1] For example, AIUI, with the universal CRT, even the header definitions change - e.g., FILE* is not based on an _iob symbol - so you have to know the target CRT at *compile* time, not just at link time. That's an additional level of complexity not present in previous CRT releases. Paul, Thank you for your serious take on the issue. I'm Ruben, a long-time contributor and 3-ish year toolchain builder for MinGW-w64. I originally helped patching Qt 4.5/4.6 so that it worked with MinGW-w64 on 64-bit Windows. I can help liaison between you and the MinGW-w64 project, and point you towards stable, well-maintained MinGW-w64 builds. Perhaps we can discuss all the nitty gritty details elsewhere (or I can continue here of course), so we can get this straightened out. In short, this is the story: - MinGW.org is the "old" MinGW project, which has become pretty stale and is behind in soooo many aspects that it isn't even funny anymore (mostly new Windows APIs, DirectX support, and C++11 threading support is lacking). MinGW-w64 is a clean-room implementation that was released into the public domain, and by now included in all (I repeat, all) major Linux distro's as a cross compiler. Arch, Debian/Ubuntu, Fedora/Redhat etc. all provide a MinGW-w64 based cross-compiler. A complete list of MinGW-w64 features can be found on the web page: - TDM is a famous name in the MinGW world because he provided a high quality toolchain with installer when MinGW.org lacked in providing the new GCC 4. Unfortunately, he applies (perhaps useful) patches which break binary compatibility with upstream GCC. Since my toolchains (first uploads in September of 2010, last one in June 2013), and later with the MinGW-builds toolchains which are now installable through the MinGW-w64 website directly (and shipped with Qt), there is really no reason to go look elsewhere for toolchains. The MSYS2 project also provides numerous binary packages and I think almost exactly the same toolchains within their environment. The official MinGW-w64 installer can be found here: (it might occasionally complain it cannot download repo.txt, that's a Sourceforge redirect error thing that's proving might hard to fix, in any case you can also find the toolchains directly below) - There are several ABI incompatible variants, explained nicely on the Qt wiki: This is a choice you'll have to make. The greatest compatibility is offered by the "posix threading" (which gives C++11 <thread> support) and 32-bit sjlj and 64-bit seh variants. The 32-bit-dw2 provides a bit more juice in exception heavy code, but has the caveat that exceptions cannot be thrown across non-GCC-built code. The 32-bit dw2 variant is also what works with Clang, and is what is delivered in MSYS2 when installing the 32-bit toolchain. Since the 32-bit sturctured exception handling Borland Patents have expired, maybe a new, 32-bit seh version will emerge, but I have heard nothing concrete towards this direction. - The MSYS2 project also provides GCC-built Python binaries, which of course needs quite heavy patching. These patches and build scripts can be found here: - the -mno-cygwin option is really a dinosaur, and should be thrown into a deep pit, buried, and forgotten. Cygwin provides MinGW-w64 cross-compilers ({x86_64,i686}-w64-mingw32-gcc) which work in exactly the same way as a Linux->Windows cross-compiler. The official Windows binaries are just "gcc" and work as any native compiler on Linux. I hope this provides you with much of the information you need. If anything is unclear (I wrote this up in like ten minutes) or you need some concrete help figuring some things out, feel free to contact me directly or through the bugtracker. I sent an email to the MinGW-w64 public mailing list (subscription needed to post though, otherwise your message will be lost in the review queue) stating your and my intentions and current state of affairs: Ruben Ruben, Thanks for the detailed explanations. Just to be clear, I am *not* the person that will take this aspect of the process forward - that will be the community of people building (or wanting to build) extensions for Python with mingw. I don't know if that community has a spokesperson yet (or even if it's a well-defined "community") but they would be the ones to to engage with the mingw developers. In particular, the choices around ABI incompatible variants that you mention are precisely the sort of thing the community needs to establish - which variant is compatible with Python, how to get a maintained build of that variant (there seems to be a lot of "get such-and-such's personal build from this URL" in the current crop of instructions for building Python extensions with mingw - that's not really sustainable). The problem boils down to there needing to be a definitive, simple, and maintained set of instructions and software for "how to set up a mingw build environment for Python extensions". The core Python developers can't provide that (as we use MSVC). What we can do, when such a thing exists, is look at whether it's a toolchain that we can reasonably support. At the moment mingw patch requests come in based on someone's custom build environment, that we can't easily reproduce, and we can't be sure is the same as anyone else's. That's not something we can support - hence the frustration from the mingw-using community, because we have partial support from the days when mingw.org and cygwin were the only two options for gcc-on-windows and we didn't really communicate the change in status (which admittedly would have been "we no longer support mingw", so wouldn't have helped much...) Hopefully, the discussion on this issue clarifies the position a bit. Give us a well-defined "gcc on Windows" (mingw) platform definition, and we'll look at supporting it. Otherwise, we can maintain the status quo (what's there remains, but patches pretty much never go in, because reproducing issues and testing changes is too much effort to be viable) or formally drop support for mingw (which I'd be reluctant to do, but it may be worth it just to offer clarity). Paul, OK, I understand your point of view. As you say, there is no single "MinGW" community, nor a guiding body that takes these decisions. If you're not willing to choose one, all I can say is this: it will probably not matter which version you choose (all will work), only mixing them won't work. A sound choice would be to follow the Qt Project (it's what they ship in their SDK): They chose the dw2/posix combo, which IMHO is the best choice all-round. For 64-bit, the obvious one is seh/posix. Incidentally, that's what MSYS2 people chose, and they regularly build all of Python with these toolchains (plus some other packages including numpy/scipy), see and search for "-python-". These python builds are done from source, do not link to msvcr100, but just msvcrt.dll, which is the OS component MinGW GCC links to currently and in the foreseeable future. As it stands, you can easily reproduce these builds by: 1. Installing MSYS2 (download and install, see) 2. Installing GCC (i.e. "pacman -S mingw-w64-i686-gcc" for 32-bit or "pacman -S mingw-w64-x86_64-gcc" for 64-bit) 3. Installing all of Python's dependencies (see e.g. PKGBUILD:) 4. running makepkg in the directory with the python PKGBUILD with the patches next to it. make sure to use the "MinGW-w64 Win64 Shell" or "MinGW-w64 Win32 Shell") that MSYS2 places in your start menu. This sets PATH to include the relevant toolchains and tools, much like the VS command prompts. You can then extract the necessary dependency DLLs from the MSYS2's /mingw32/bin and /mingw64/bin directories so that a standalone MinGW-w64 Python installation can be created from that without issue. If you feel this qualifies as an easy, maintainable, reproducible setup, perhaps work can be done to integrate the large amount of patches currently required. Note that these patches will work with any decent and/or official MinGW-w64 GCC build. The time of everyone needing to build their own toolchain is past. Even if some people still building all kinds of cludgy variants seem to have missed the memo. Note that these builds differ from the official MSVC builds (which may not be a bad thing: it keeps MSVC<->GCC compatibility issues arising from mixing the code to a minimum). Obviously, when MinGW-w64/GCC supports the UCRT stuff, this incompatibility can be harmoniously resolved. Until then, this seems to me like a good solution, and it is how everyone else is doing it (i.e. separate GCC builds on Windows). If there is no interest in having a (community-supported, semi-official) GCC-built Python on Windows, I'm sure something else can also be worked out, which would include stripping the current dinosaur -mno-cygwin code which is what this bug was originally all about. >). The problem is that the people who build those extensions (which is not me, nor is it anyone on the core Python team) have never settled on a single version of the mingw toolchain as "what they want the distutils to support". So each bug report or patch is needs different "how to install mingw" instructions to be followed before a core developer can work on it. I'm suggesting that the people raising distutils bugs about mingw support get together and agree on a common toolchain that they'll use as a basis for any future bugs/patches. >>). Let's say you have the official, upstream W32 CPython, built with MSVC and linking with, say, msvcr90.dll You build, say, libarchive-1.dll with MinGW-w64, because that's what you use to build stuff. Because it's MinGW-w64, libarchive-1.dll links to msvcrt.dll. Then you want to build, say, pyarchive extension for your Python and you do that with MinGW-w64, because that's what you use to build stuff. Because it's msvcrt.dll, libpyarchive.pyd link to msvcrt.dll. Then you run Python code that takes, say, sys.stdout file object and passes it to pyarchive. Python file object is backed by msvcr90 file descriptor. pyarchive expects a file object backed by msvcrt file descriptor. Boom. Only three ways of avoiding this: 1) Use MSVC for everything. This is what upstream CPython does. 2) Use MinGW-w64 for everything (including CPython itself). This is what MSYS2 does.. This is why the discussion keeps coming back to building Python with MinGW-w64. This is why Universal CRT can be a solution (the absence of CRT incompatibility would resolve a lot of issues; the rest is manageable - remember MinGW-w64 has to use the same CRT/W32API DLLs that MSVC does, so binary compatibility is always achievable for anything with C interface). On 19 May 2015 at 17:09, Руслан Ижбулатов <report@bugs.python.org> wrote: >. That is the one this issue is about. It *is* possible (mingw grew the -lmsvcr100 and similar flags, at least in part to support it). But it's not easy to set up, and the people asking for it to be supported have never really addressed all of the issues involved (at least not in a reproducible/supportable way). Building Python with mingw, while out of scope for this particular issue, has always failed because nobody has been willing to step up and offer the long-term support commitment that would be required, AIUI. My understanding matches yours, Paul. Core does not want to *distribute* a mingw built python, but if the mingw community came up with a support strategy, including one or more buildbots building using mingw, I believe that we would accept the patches. Basically, it has to meet the PEP 11 rules for supported platforms (including enough userbase to produce the people to maintain it long term :) A few comments from the perspective of what's needed for the scientific Python stack: 1. Of the three options mentioned in msg243605, it's definitely (3) that is of interest. We want to build extensions with MinGW-w64 that work with the standard MSVC Python builds. We've done this with mingw32 for a very long time (which works fine); not being able to do this for 64-bit extensions is the main reason why there are no official 64-bit Windows installers for Numpy, Scipy, etc. 2. There is work ongoing on a mingw-w64 toolchain that would work for the scientific Python stack:. It actually works pretty well today, what needs to be sorted out is ensuring long-term maintainability. More details about what it's based on are provided in - I think it's consistent with what Ruben van Boxem recommends. Carl Kleffner, who has done a lot of the heavy lifting on this toolchain, is working with upstream mingw-w64 and with WinPython to ensure not creating yet another incompatible flavor mingw. 3. It's good to realize why making mingw-w64 work is especially important for the scientific Python stack: there's a lot of Fortran code in packages like Scipy, for which there is no free compiler that works with MSVC. So one could use MSVC + ifort + Intel MKL (which is what Enthought Canopy and Continuum Anaconda do), but that's quite expensive and therefore not a good solution for the many of contributors to the core scientific Python stack nor okay from the point of view of needing to provide official binaries that can be redistributed. Paul's comments on picking a single mingw-w64 version, and that version not being a download from someone's personal homepage, make a lot of sense to me. We (Carl & several core numpy/scipy/scikit-learn devs) happened to have planned a call on this topic soon in order to move towards a long-term sustainable plan. I wouldn't expect everything to be sorted out in a couple of weeks (it's indeed a hard goal), but knowing that Paul is willing to review and merge patches definitely helps. Ralf, thanks for the comments. The scientific community is definitely the key group that *need* mingw (as opposed to people who merely want to use it because they don't want to buy into MS compilers, or for similar personal reasons). My personal view is that if the scientific community comes up with a mingw/gcc toolchain that they are happy with, and willing to support, then I would see that as a reasonable target to be "the" supported mingw toolchain for distutils. I'd like to see a single-file "download, unzip, and use" distribution rather than the current rather intimidating set of instructions on how to set the toolchain up - but I'm sure that's part of what you're intending to cover under "ensuring long-term maintainability". Indeed, our idea of "easy to install" was/is a wheel or set of wheels so that "pip install mingw64py" does all you need. If necessary that can of course be repackaged as single download to unzip as well. I want to second Ralf's comments about both the need for this and it being easy to get. What is required to make this happen (specifically the easy to install build chain - pip install...)? It would be good to enumerate the outstanding issues. The current difficulty of building extensions on Windows should not be underestimated. Microsoft seem to change how their various tools work, with different updated SDKs and removing tools and changing things (even retrospectively) quite regularly. I've wasted quite a bit of time setting up windows machines to build the various flavours (bits and python version), only to find that the same strategy doesn't for some reason beyond my comprehension doesn't work on a different machine. Throw in different windows versions and the problem is pretty insurmountable and unsustainable. To be clear, the current situation surely cannot be worse than a MinGW situation. Of course, I mean: *To be clear, the MinGW situation surely cannot be worse than the current situation.* The situation is not THAT bad. You can install a prerelease of mingwpy with pip: pip install -i mingwpy or with conda: (thanks to omnia-md) conda install -c mingwpy It is not hosted on PYPI as long as there is no version for python-3.5. @carlkl right, but it's not really a seamless experience. I think my question is: What needs to still be done in order that a user with a fresh Python install in Windows (and no compiler installed) can do "pip install an_extension_that_needs_compiling" and it _just works_? Is someone with a better understanding of the issues able to comment on this? > What needs to still be done in order that a user with a fresh Python install in Windows (and no compiler installed) can do "pip install an_extension_that_needs_compiling" and it _just works_? The package developer takes the time to build a wheel on Windows (presumably they are already testing on Windows...?) and publishes it. Problem solved. Most of our efforts are (or should be) aimed at making it easier for the developers to do this, rather than trying to make a seamless build toolchain for the end user. @Steve Great, so what needs to be done so that I as a package developer can do `pip install windows-build-system`, `python setup.py bdist_wheel` and it actually creates a wheel? (which AFAICT is the same problem). My interest is precisely as a package developer. I've spent far far too much time fighting compilers on Windows (many days) and I don't want to do that any more. Every time I come across a new machine, I need to re-establish the current way to do things, which invariably doesn't work properly because I can't find the SDK to download, or the SDK installation doesn't include things any more, or some other reason which I can't work out. On Linux, everything is basically wonderful - I notice no difference between pure python packages and extensions packages. As an occasional Linux user, I notice a huge difference between pure Python and extension packages there, but basically always have a compiler handy on my Windows machines. It's all about context and what you're used to :) The advice has always been "Visual Studio X" is what is needed, and for 3.5 onwards that becomes "Visual Studio 2015 or later". Unfortunately, the story isn't so simple for legacy Python and 3.3/3.4 because those versions of VS are not so easy to get (unless you're a professional Windows developer with an MSDN subscription, which is pretty common). It is possible to use some other installers to get the old compilers, but Python was not designed to work with those and so there are issues that we cannot fix at this stage. It also doesn't help that older versions of VC weren't as standards compliant, so people wrote code that doesn't compile when ported. There are also many dependencies that don't work directly with MSVC (for the same reason, but in this case it wasn't the package author's fault). If you follow distutils-sig, where this occasionally comes up, you'll see the direction for packaging generally is to avoid needing to build. The hope is that even setuptools becomes nonessential enough that it can be dropped from a default install, but package developers will install it or another build manager to produce their packages (on Windows at least, though there's work ongoing to make this possible on many Linux distros too). > @Steve Great, so what needs to be done so that I as a package developer can do `pip install windows-build-system`, `python setup.py bdist_wheel` and it actually creates a wheel? (which AFAICT is the same problem). Hi Henry, I expect progress on the mingw-w64 front within the next few months. There'll be a status update with some more concrete plans soon. Also, has appeared last week - a few wheels have been set in motion. > The advice has always been "Visual Studio X" is what is needed, and for 3.5 onwards that becomes "Visual Studio 2015 or later". Hi Steve, that's actually not very useful advice for the scientific Python community. While things like C99 compliance are or could get better, there will always be a large Fortran-shaped hole in your suggestion. See my post above (from May 19) for more details. Thanks Ralf - I'm happy and keen to help so please feel free to poke me if you need assistance with anything. I'll keep an eye out too - is it actively being discussed on any list?. I'm not aware of any C99 limitations still present in VC14, so please let me know so I can file bugs against the team. >. Hey Steve, I'm a bit surprised to be hearing this now given all our off-list discussions about these issues this year. Can you clarify what you're talking about here? Who is "we", what other solutions do you see, and why would they be preferable? (If the compatibility issues are solved, then AFAIK gfortran is basically perfect for 99% of uses; the only alternatives are proprietary compilers with much nastier -- F/OSS-incompatible -- license terms. Note that by contrast gfortran itself is GPLed, but with a specific exemption added to clarify that it is totally okay to use for compiling proprietary code.) "We" is a lot of different companies and individuals. Anyone distributing prebuilt binaries is helping here, a few people are working on the licensing concerns for some components, other people are working on C BLAS libraries. I see the issue approximately as "it's hard to install the scipy stack", which is broader than "Windows does not have a Free Fortran compiler" and allows for more solutions (apologies for putting words in your mouth, which is not my intent, though I have certainly seen a fixation on this one particular solution to the exclusion of other possibilities). And FTR, there are plenty of major Python-using companies that insist on compiling from scratch and also refuse to touch GPL at all, no matter how many exemptions are in the licenses. GFortran is not the ideal solution for these users. Hi Steve- okay, thanks for clarifying! I think you already know this, but for the general record: the reason for the apparent fixation on this solution is that after a lot of struggle it's emerged as basically the only contender for scipy-development-on-windows; there are a number of problems (fortran, BLAS, C99, python 2.7 support, desire to distribute F/OSS software) and it's the only thing that solves all of them. The details that lead to this conclusion are rather complicated, but here's how I understand the situation as of the end of 2015: - If you just want to compile C/C++ (don't need fortran or BLAS), and you can either [live with MSVC 2008's somewhat archaic understanding of C] or [drop 2.7 support and only support 3.5], then we're actually in a pretty good place now: you can install the msvc-for-2.7 distribution for 2.7, install msvc 2015 for 3.5, and you're good to go. - Alternatively if you don't care about your code being F/OSS and have money to spare, then you can solve all of the above problems by using icc/ifort for your compiler and MKL for your BLAS. (The "F/OSS" caveat here is because you actually cannot distribute binaries using this toolchain as F/OSS.) - If you're a F/OSS project and you need BLAS, then your options are either OpenBLAS or (hopefully soon) BLIS. Neither of them can be compiled with any version of MSVC, because both of them use asm extensions/dialects that MSVC doesn't understand. The good news is that soon probably you will be able to compile them with clang! However, I think clang only targets compatibility with recent MSVC, not MSVC 2008, so this is useless for python 2.7? I could be wrong here. (Well, you can also try crossing your fingers and try mixing runtimes -- the BLAS interface specifically is narrow enough that you might be able to get away with it. I'm not sure how many projects can get away with just BLAS and no LAPACK, though, and LAPACK is Fortran; I've heard rumors of LAPACK-in-C, but AFAICT they're still just rumors...) - If you're a free software project who needs [C99 on Python 2.7] or [BLAS on 2.7] or [Fortran, period], then none of the above options help, or show any prospect of helping (except maybe if clang can target MSVC 2008 compatibility). OTOH the mingw-w64-with-improved-MSVC-compatibility approach fixes all of these problems at once, thus eliminating the whole decision tree above in one swoop. It is true that it doesn't help with the "GPL cooties" problem; AFAIK that's the only limitation. Of course any company has the right to decide that they absolutely will not use a GPL-licensed compiler, for any reason they might feel like. But if they want random volunteers at python.org to care about this then it seems to me that those companies need to *either* articulate some convincing reason why their needs are legitimate (gcc seems to work just fine for tons and tons of companies, including e.g. the entire linux and android ecosystems), *or* start paying those volunteers to care :-). And even if we do care, then I'm not sure what there is to be done about it anyway -- if you want to go buy a license to icc/ifort then you can do that today, have fun, it seems to work great? > "We" is a lot of different companies and individuals. Anyone distributing prebuilt binaries is helping here, a few people are working on the licensing concerns for some components, other people are working on C BLAS libraries. Note that we by default recommend to users to use a distribution like Anaconda/Canopy (for example at). That's fine for many scientific users, but not for people that already have a Python stack installed or simply prefer to use pip for another reason. So pre-built binaries like the ones in Anaconda/Canopy help, but don't solve the "make `pip install scipy` work" problem. And giving up on pip/PyPi would make no one happy... > I see the issue approximately as "it's hard to install the scipy stack", which is broader than "Windows does not have a Free Fortran compiler" It's: "it's hard to install the scipy stack on Windows". On OS X and Linux it's really not that hard. On OS X, you can install all core packages with pip (there are binary wheels on PyPi). On Linux you can do that too after using your package manager to install a few things like BLAS/LAPACK and Python development headers. And the lack of Windows wheels on PyPi is directly related to no free Fortran compiler. > and allows for more solutions (apologies for putting words in your mouth, which is not my intent, though I have certainly seen a fixation on this one particular solution to the exclusion of other possibilities). Much more effort has gone into pre-built binaries than into MinGW, as well as into other things that help but can't be a full solution like a C BLAS. And I haven't seen other solutions to "make the scipy stack pip-installable" that could work. So I have to disagree with "fixation". > I'm happy and keen to help so please feel free to poke me if you need assistance with anything. I'll keep an eye out too - is it actively being discussed on any list? Thanks Henry. There's no ongoing discussion on a list right now, but give it a week or two. I'll make sure to ping you. Hi all, There is now a much more concrete plan for the static MinGW-w64 based toolchain, and the first funding has materialized. Please see the announcement on the Numpy mailing list (), the MingwPy site () and in particular the "main milestones" in
http://bugs.python.org/issue4709
CC-MAIN-2017-04
refinedweb
7,321
69.72
Simplifying Decision Tree Interpretability with Python & Scikit-learn This post will look at a few different ways of attempting to simplify decision tree representation and, ultimately, interpretability. All code is in Python, with Scikit-learn being used for the decision tree modeling. When discussing classifiers, decision trees are often thought of as easily interpretable models when compared to numerous more complex classifiers, especially those of the blackbox variety. And this is generally true. This is especially true of rather comparatively simple models created from simple data. This is much-less true of complex decision trees crafted from large amounts of (high-dimensional) data. Even otherwise straightforward decision trees which are of great depth and/or breadth, consisting of heavy branching, can be difficult to trace. Concise, textual representations of decision trees can often nicely summarize decision tree models. Additionally, certain textual representations can have further use beyond their summary capabilities. For example, automatically generating functions with the ability to classify future data by passing instances to such functions may be of use in particular scenarios. But let's not get off course -- interpretability is the goal of what we are discussing here. This post will look at a few different ways of attempting to simplify decision tree representation and, ultimately, interpretability. All code is in Python, with Scikit-learn being used for the decision tree modeling. Building a Classifier First off, let's use my favorite dataset to build a simple decision tree in Python using Scikit-learn's decision tree classifier, specifying information gain as the criterion and otherwise using defaults. Since we aren't concerned with classifying unseen instances in this post, we won't bother with splitting our data, and instead just construct a classifier using the dataset in its entirety. One of the easiest ways to interpret a decision tree is visually, accomplished with Scikit-learn using these few lines of code: Copying the contents of the created file ('dt.dot' in our example) to a graphviz rendering agent, we get the following representation of our decision tree: Representing the Model as a Function As stated in the outset of this post, we will look at a couple of different ways for textually representing decision trees. The first is representing the decision tree model as a function. Let's call this function and see the results: tree_to_code(dt, list(iris.feature_names)) def tree(sepal length (cm), sepal width (cm), petal length (cm), petal width (cm)): if petal length (cm) <= 2.45000004768: return [[ 50. 0. 0.]] else: # if petal length (cm) > 2.45000004768 if petal width (cm) <= 1.75: if petal length (cm) <= 4.94999980927: if petal width (cm) <= 1.65000009537: return [[ 0. 47. 0.]] else: # if petal width (cm) > 1.65000009537 return [[ 0. 0. 1.]] else: # if petal length (cm) > 4.94999980927 if petal width (cm) <= 1.54999995232: return [[ 0. 0. 3.]] else: # if petal width (cm) > 1.54999995232 if petal length (cm) <= 5.44999980927: return [[ 0. 2. 0.]] else: # if petal length (cm) > 5.44999980927 return [[ 0. 0. 1.]] else: # if petal width (cm) > 1.75 if petal length (cm) <= 4.85000038147: if sepal length (cm) <= 5.94999980927: return [[ 0. 1. 0.]] else: # if sepal length (cm) > 5.94999980927 return [[ 0. 0. 2.]] else: # if petal length (cm) > 4.85000038147 return [[ 0. 0. 43.]] Interesting. Let's see if we can improve interpretability by stripping away some of the "functionality," provided it is not required. Representing the Model as Pseudocode Next, a slight reworking of the above code results in the promised goal of this post's title: a set of decision rules for representing a decision tree, in slightly less-Pythony pseudocode. Let's test this function: tree_to_pseudo(dt, list(iris.feature_names)) if ( petal length (cm) <= 2.45000004768 ) { return [[ 50. 0. 0.]] } else { if ( petal width (cm) <= 1.75 ) { if ( petal length (cm) <= 4.94999980927 ) { if ( petal width (cm) <= 1.65000009537 ) { return [[ 0. 47. 0.]] } else { return [[ 0. 0. 1.]] } } else { if ( petal width (cm) <= 1.54999995232 ) { return [[ 0. 0. 3.]] } else { if ( petal length (cm) <= 5.44999980927 ) { return [[ 0. 2. 0.]] } else { return [[ 0. 0. 1.]] } } } } else { if ( petal length (cm) <= 4.85000038147 ) { if ( sepal length (cm) <= 5.94999980927 ) { return [[ 0. 1. 0.]] } else { return [[ 0. 0. 2.]] } } else { return [[ 0. 0. 43.]] } } } This looks pretty good as well, and -- in my computer science-trained mind -- the use of well-placed C-style braces makes this a bit more legible then the previous attempt. These gems have made me want to modify code to get to true decision rules, which I plan on playing with after finishing this post. If I get anywhere of note, I will return here and post my findings. Related:
https://www.kdnuggets.com/2017/05/simplifying-decision-tree-interpretation-decision-rules-python.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+kdnuggets-data-mining-analytics+%28KDnuggets%3A+Data+Mining+and+Analytics%29
CC-MAIN-2018-05
refinedweb
781
68.87
General IIS, ASP.NET troubleshooting, learning articles, code samples, etc doesn't do any checks, or maintains any lists of mapped URLs, but just a simple redirection of all the requests to. Feel free to modify it accommodating your need. #include <stdio.h> #include <stdlib.h> #include <afx.h> #include <afxisapi.h> BOOL WINAPI __stdcall GetFilterVersion(HTTP_FILTER_VERSION *pVer) { pVer->dwFlags = (SF_NOTIFY_PREPROC_HEADERS ); CFile myFile("C:\\ISAPILOG\\URLs.html", CFile::modeCreate); myFile.Close(); pVer->dwFilterVersion = HTTP_FILTER_REVISION; strcpy(pVer->lpszFilterDesc, "Sample Redirection ISAPI"); return TRUE; } DWORD WINAPI __stdcall HttpFilterProc(HTTP_FILTER_CONTEXT *pfc, DWORD NotificationType, VOID *pvData) { char buffer[256]; DWORD buffSize = sizeof(buffer); HTTP_FILTER_PREPROC_HEADERS *p; CHttpFilterContext *chc; chc = (CHttpFilterContext *)pfc; char *newUrl; CFile myFile("C:\\ISAPILOG\\URLs.html", CFile::modeWrite); switch (NotificationType) { case SF_NOTIFY_PREPROC_HEADERS : p = (HTTP_FILTER_PREPROC_HEADERS *)pvData; char newUrl[50]; wsprintf(newUrl, ""); char szTemp[50]; wsprintf(szTemp, "Location: %s\r\n\r\n",newUrl); pfc->ServerSupportFunction (pfc, SF_REQ_SEND_RESPONSE_HEADER, (PVOID) "302 Redirect", (DWORD) szTemp,0); myFile.SeekToEnd(); myFile.Write("<BR><B> Orignial URL : </B>",strlen("<BR><B> Orignial URL : </B>")); BOOL bHeader = p->GetHeader(pfc,"url",buffer,&buffSize); CString myURL(buffer); myURL.MakeLower(); myFile.Write(buffer,buffSize); myFile.Write(" <B>New URL : </B> ",strlen(" <B>New URL : </B> ")); myFile.Write(newUrl,strlen(newUrl)); myFile.Close(); return SF_STATUS_REQ_HANDLED_NOTIFICATION; } return SF_STATUS_REQ_NEXT_NOTIFICATION; } Above is my sample, You might want to check my earlier ISAPI blog post to get the .def file and steps to create the DLL. Hope this helps! If> <location path="Default Web Site"> <system.webServer> <odbcLogging dataSource="ODBCLogging" tableName="HTTPLog" userName="Username" password="mypassword” />" </system.webServer> </location> Below are the AppCmds to configure the above attributes for the site:. Okay, this is an interesting stuff.: => 32-bit => 64-bit Here are some learn.iis.net documents on this module: Bit Rate Throttling Setup Walkthrough Bit Rate Throttling Configuration Walkthrough Bit Rate Throttling Extensibility Walkthrough! There a lot of other articles available in iis.net which explains how to run an ASP.NET application on IIS7. Here are those steps: Your application pool's configuration in applicationHost.config should look like below: <add name="ASP.NET 1.1" enable32BitAppOnWin64="true" managedRuntimeVersion="v1.1" managedPipelineMode="Classic" autoStart="true" /> <add name="ASP.NET 1.1" enable32BitAppOnWin64="true" managedRuntimeVersion="v1.1" managedPipelineMode="Classic" autoStart="true" /> Below are the commands using appcmd.exe tool which would do this.) Now, comes an interesting UI module. You can write a UI module to do whatever you want. I felt like writing one today, and thought of writing one for the above. Below is how it looks: Here is the link for the DLL: To add this module in your IIS 7 manager follow the below steps: " /> <add name="IIS7Fx11Advisor" /> Let me know if this helps you! I">: </script>: In ASPX .cs file: 2: [ScriptMethod] 3: public static string ForwardingToUserControlMethod(string ddlValue) 5: return WebUserControl.MyUserControlMethod(ddlValue); One. When One! M? If you are new to IIS7 and reading about the new RequestFiltering module, you might have some questions about the length of the URL and the length of the querystring which would be used while denying/allowing a request. I thought I would put these simple information on this blog post. Here we go: 404.14 - URL too long<requestLimits maxUrl=“10” />The length of the URL would constitute just the length of the URL (/iisstart.htm) - including 1 for the / and not the length of the query strings. So, if you browse to, then the length of this URL would be 17. 404.15 - QueryString too long <requestLimits maxQueryString=“10” /> The length of the query string here would constitute the length of the query strings, their values and also the delimiters (& and =). ? in the front is not counted for this. Hope this helps someone reading my blog, somewhere in this world.Happy Learning! Do you remember Scott Guthrie talking about a web deployment framework? Last November he gave a hint about this tool and its here now. Technical Preview 1 of its tool has been released now and the team is open for feedback. Check out the team's blog here. You can download the x86 version or the x64 version of this Technical Preview 1 version of Microsoft Web Deployment Tool. You can check the walkthroughs too. I tried just playing around this tool, and believe me, it was fairly simple tool to learn, play and deploy! And, for some reason, this tool looks fairly similar to the appcmd.exe command on its syntax and its ability to output in XML format. And, the one big thing impressed me most is the manifest file input. That's awesome! Check the walkthroughs to learn about this. Happy Deployments! Okay, here is one more IIS7 UI module which would be used while using FTP server with Active Directory user Isolation. In IIS6.0, you had Iis file which you would use to set msIIS-FTPRoot and msIIS-FTPDir property for the user in Active directory. Below is how it looks: To add this module in your IIS 7 manager follow the below steps: " /> <add name="IIS7ADFTPUI" />
http://blogs.msdn.com/rakkimk/default.aspx
crawl-001
refinedweb
834
50.84
Hi Julien, On 28.07.17 23:37, Julien Grall wrote: Hi, AdvertisingOn 07/28/2017 08:43 PM, Volodymyr Babchuk wrote:On ARMv8 architecture SMC instruction in aarch32 state can be conditional.version + paragraph please.Also, ARMv8 supports both AArch32 and AArch64. As I said in my answer on "arm: smccc: handle SMCs/HVCs according to SMCCC" ([1]), This field exists for both architecture. I really don't want to tie the 32-bit port to ARMv7. We should be able to use ARMv8 too. Not sure if I got this. My ARM 7 ARM (ARM DDI 0406C.c ID051414 page B3-1431) say following:"SMC instructions cannot be trapped if they fail their condition code check. Therefore, the syndrome information for this exception does not include conditionality information." ARMv8 ARM (ARM DDI 0487A.k ID092916) says that SMC from aarch32 state can be conditional and my patch checks this. But SMC from aarch64 state is unconditional, so there are nothing to check. At least, when looking at ISS encoding, i see imm16 field and RES0 field. No conditional flags. Thus, we should not skip it while checking HSR.EC value. > For this type of exception special coding of HSR.ISS is used. There is additional flag to check before perfoming standart handling of CCVALIDperforming standardand COND fields. Signed-off-by: Volodymyr Babchuk <volodymyr_babc...@epam.com> --- xen/arch/arm/traps.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index eae2212..6a21763 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c@@ -1717,8 +1717,20 @@ static int check_conditional_instr(struct cpu_user_regs *regs,int cond; /* Unconditional Exception classes */ +#ifdef CONFIG_ARM_32 if ( hsr.ec == HSR_EC_UNKNOWN || hsr.ec >= 0x10 ) return 1; +#else+ if ( hsr.ec == HSR_EC_UNKNOWN || (hsr.ec >= 0x10 && hsr.ec != HSR_EC_SMC32))+ return 1; + + /* + * Special case for SMC32: we need to check CCKNOWNPASS before + * checking CCVALIDMissing full stop.+ */ + if (hsr.ec == HSR_EC_SMC32 && hsr.cond.ccknownpass == 0) + return 1; +#endif /* Check for valid condition in hsr */ cond = hsr.cond.ccvalid ? hsr.cond.cc : -1;Cheers, [1]" _______________________________________________ Xen-devel mailing list Xen-devel@lists.xen.org
https://www.mail-archive.com/xen-devel@lists.xen.org/msg117346.html
CC-MAIN-2017-34
refinedweb
358
62.54
Identity Management This was ripped from Jeff Watkins' blog. ...with a little changes to play nice with latest svn [as of 9th Jan 2006]. Quick start example Step 1 - Create new project $ tg-admin quickstart Name your project idtest and set the dburi in dev.cfg to point to a server and database you want to use. Step 2 - Edit idtest.egg-info/sqlobject.txt db_module=idtest.model, turbogears.identity.soprovider Step 3 - Create login.kid template The login template and the controllers.py file are already created with up-to-date SVN quickstarts. This was instituted sometime around SVN 485. Here's the code in case you don't have it. < template < - edit controllers.py Ad this code to the top of the file: from turbogears import identity import cherrypy Then add this inside the model class: @turbogears.expose( html="idtest.templates.login" ) def login( self, *args, **kw ): if hasattr(cherrypy.request,"identity_errors"): msg= str(cherrypy.request.identity_errors)( in_group="admin" ) def secured( self ): return dict() Note: You may need to revise the above code for the @identity.require decorator. In a recent mailing list post, Jeff Watkins writes the following. n the past you decorated your methods as such: @turbogears.expose() @identity.require( group="admin", permission="foo,bar" ) The require decorator checked whether the visitor was a member of the admin group AND had the permission foo AND had the permission bar. Many people wanted something more flexible, and with revision 400, any of the following are valid require decorators: @identity.require( in_group( "admin" ) ) @identity.require( in_all_groups( "admin", "editor" ) ) @identity.require( in_any_group( "admin", "editor" ) ) @identity.require( has_permission( "edit" ) ) @identity.require( has_all_permissions( "edit", "delete", "update" ) ) @identity.require( has_any_permission( "edit", "delete", "update" ) ) But most importantly, you can use decorators like theses: @identity.require( Any( in_group( "admin" ), has_permission ( "edit" ) ) ) @identity.require( All( from_host( "127.0.0.1" ), has_permission ( "edit" ) ) ) @identity.require( All( from_any_host( "127.0.0.1", "10.0.0.1" ), in_group( "editor" ) ) ) You can also use these same predicates in your own code: if in_group( "admin" ) and has_permission( "edit" ): pass else: pass I still haven't addressed the need for something like `is_owner`, because that seems *so* model specific. However, you may need to use the in_group, in_all_groups, etc. functions in the identity namespace. For example: @identity.require( in_group( "admin" ) ) changes to @identity.require( identity.in_group( "admin" ) ) Step 6 - Turn on Identity management Edit dev.cfg. Under the "IDENTITY" heading (around line 68), uncomment and edit the following to turn on identity management. Edit the failure url as well. identity.on=True identity.failure_url="/login" Step 7 - Create the database $ tg-admin sql create Step 8 - Create a user and group Using Catwalk is probably the easiest way to create user/group/permissions. Use this method if you can't get Catwalk set up. $ tg-admin shell Python 2.4.1 (#2, Mar 31 2005, 00:05:10) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> from turbogears.identity.soprovider import * >>> hub.begin() >>> u=TG_User( userId="jeff", emailAddress="jeff@metrocat.org", displayName="Jeff Watkins", password="xxxxx" ) >>> g=TG_Group( groupId="admin", displayName="Administrators" ) >>> hub.commit() >>> Step 9 - Testing the login Start the project: $ ./idtest-start.py and visit secured page and login with the username and password you just created. It should fail with the message: Not member of group: admin Step 10 - Add the=TG_User.get(1) >>> g=TG_Group.get(1) >>> u.addTG_Group(g) >>> hub.commit() >>> Step 11 - Revisit secured page and login Browse to again and login, this time you should see the content of secured.kid. Identity and Kid templates In addition to restricting access to methods in controller files, identity checks can also be used to limit what links(or any other element, for that matter) show up in kid templates. This is done using py:if="" statements, like so: <a py:This is a test</a> <a py:This is a test</a> Make sure you import turbogears somewhere in your template for those identity checks to work. <?python import turbogears ?> or, to save on typing, <?python from turbogears import identity ?> and omit the "turbogears" part of the py:if statement. Applying security settings, not from source code, but from configuration data You should be able to specify security settings not only from source code but via some other means. The goal is to allow an administrator to set the security policy, not the programmer. FAQ's. Attachments - sosmbprovider.py (6.3 KB) - added by Joel Pearson 11 years ago. Example: authenticating against a Windows domain - sosmbprovider.2.py (5.9 KB) - added by Joel Pearson 11 years ago. Removed extraneous comments - please ignore previous version - soldapprovider.patch (5.3 KB) - added by andy.kilner@… 11 years ago. Authenticate using LDAP - sosmbprovider-(after_r1512).py (3.2 KB) - added by Joel Pearson 10 years ago. Simplified version for use with TurboGears r1512 and later (post-0.9a6) - soldapprovider.py (1.6 KB) - added by bosticka 10 years ago. Ldap provider for post r1512 with filter adjusted for Active Directory authentication. Just change the filter to use other ldaps…
http://trac.turbogears.org/wiki/IdentityManagement?version=20
CC-MAIN-2016-50
refinedweb
853
52.56
Packages A source file may start with a package declaration: package foo.bar fun baz() {} class Goo {} // ... All the contents (such as classes and functions) of the source file are contained by the package declared. So, in the example above, the full name of baz() is foo.bar.baz, and the full name of Goo is foo.bar.Goo. If the package is not specified, the contents of such a file belong to "default" package that has no name. Default Imports A number of packages are imported into every Kotlin file by default: - kotlin.* - kotlin.annotation.* - kotlin.collections.* - kotlin.comparisons.* (since 1.1) - kotlin.io.* - kotlin.ranges.* - kotlin.sequences.* - kotlin.text.* Additional packages are imported depending on the target platform: - JVM: - java.lang.* - kotlin.jvm.* - JS: - kotlin.js.* Imports Apart from the default imports, each file may contain its own import directives. Syntax for imports is described in the grammar. We can import either a single name, e.g. import foo.Bar // Bar is now accessible without qualification or all the accessible contents of a scope (package, class, object etc): import foo.* // everything in 'foo' becomes accessible If there is a name clash, we can disambiguate by using as keyword to locally rename the clashing entity: import foo.Bar // Bar is accessible import bar.Bar as bBar // bBar stands for 'bar.Bar' The import keyword is not restricted to importing classes; you can also use it to import other declarations: - top-level functions and properties; - functions and properties declared in object declarations; - enum constants Unlike Java, Kotlin does not have a separate "import static" syntax; all of these declarations are imported using the regular import keyword. Visibility of Top-level Declarations If a top-level declaration is marked private, it is private to the file it's declared in (see Visibility Modifiers).
http://kotlinlang.org/docs/reference/packages.html
CC-MAIN-2017-13
refinedweb
302
54.93
#include <Ethernet.h>byte mac[] = {0x54, 0x52, 0x49, 0x41, 0x44, 0x00};byte ip[] = {192, 168, 2, 99};byte gateway[] = {192, 168, 0, 1};byte subnet[] = {255, 255, 0, 0};EthernetServer server(4444);EthernetClient client;int sizeMessage = 11;void setup() { Serial.begin(9600); Ethernet.begin(mac, ip, gateway, subnet); server.begin(); client = server.available();}void loop() { char inByte; char message[sizeMessage]; int bytes; while(!client){ client = server.available(); } if(client.available()){ inByte = client.read(); if(inByte == '<'){ bytes = client.readBytesUntil('>', message, sizeMessage); Serial.println(message); //Serial.print("Number of Bytes = "); //Serial.println(bytes); } } } bytes = client.readBytesUntil('>', message, sizeMessage); Serial.println(message); message is NOT a string. Do not pass char arrays that are NOT strings to functions that expect strings. Are you saying that my Serial.println() is the problem because message is not a string, or that I should be using a string variable in the readBytesUntil()?TIA I am just beginning with bytes and char types. if (client.available()) { char c = client.read(); Serial.print(c); } @ieee48 - that would get the Serialprint to be correct, but I am trying to collect the whole message so I can use it for controlling some other functions in the arduino.This example is basically to set and RGB LED from the ethernet TCP IP message. That's why I was trying to use the readByteUntil() so it would collect the message.I was trying to set a loop to add each character to a message, but it seemed that was what readByteUntil() was made for.Thank you for the suggestion.
https://forum.arduino.cc/index.php?topic=482596.msg3294546
CC-MAIN-2019-47
refinedweb
257
53.17
- Infrastructure - House - Garden Currently I extend my sensor network. As a base I rely on Arduinos which are connected via KNX. As a sensor I have decided for a DHT22. It is cheap, very precise and there is a ready Arduino library so that the query of the temperature and humidity values are relatively simple. You can install the required Arduino library directly from your Android IDE. As for accuracy. It must always be calibrated. There are no sensors that do not have to be calibrated. Even if they are pre-calibrated, they have to be recalibrated at the latest after 1-2 years as the properties of the components change slightly over time. It should also calibrate as close as possible in the later range of values. As a reference to calibrate the temperature I have a GTH 175 PT from Greisinger. He "should be" very accurate for this price. I will carry out the humidity via a saturation measurement. Simply fill a glass with saline. Some water on it. Put the bag together with the sensor and seal the air tight. After 24 - 48 hours and a temperature from approximately 20°C, an air humidity of 75.4% should be adjusted. On average, my deviation from the reference point was between -0.1 ° C and +0.1 ° C, with 13 of 14 sensors. A sensor had a deviation of +0.3 ° C. So no bad values. The humidity was between 0% and 8%. Here you can get better results if you: 1. You disable the "Progress" LED in the source code. 2. You disable the power LED. For this, however, one has to cut the cable on the conductor with a cutter knife. On the last picture you can see a small slit on the bottom for better air circulation. #include <Arduino.h> #include <avr/sleep.h> #include <KnxTpUart.h> #include "DHT.h" #define DHTPIN 4 #define DHTTYPE DHT22 //DHT11, DHT21, DHT22 // Room: Livingroom, KNX address: 1.1.49, GroupTemperature: 0/7/1, GroupHumidity: 0/7/2 const String deviceAddress = "1.1.49"; const String groupTemperatureAddress = "0/7/1"; const String groupHumidityAddress = "0/7/2"; const int interval = 60; const int ledpin = 13; DHT dht(DHTPIN, DHTTYPE); KnxTpUart knx(&Serial, deviceAddress); void setup() { Serial.begin(19200); UCSR0C = UCSR0C | B00100000; // Even Parity knx.uartReset(); dht.begin(); watchdogOn(); // Enable watchdog timer. // The following saves some extra power by disabling some peripherals I am not using. ADCSRA = ADCSRA & B01111111; // Disable ADC, ADEN bit7 to 0 ACSR = B10000000; // Disable analog comparator, ACD bit7 to 1 DIDR0 = DIDR0 | B00111111; // Disable digitale inputbuffer, set analoge input pins 0-5 to 1 } void loop() { float temperature = dht.readTemperature(); // Read temperature knx.groupWrite2ByteFloat(groupTemperatureAddress, temperature); delay(5000); float humidity = dht.readHumidity(); // Read humidity knx.groupWrite2ByteFloat(groupHumidityAddress, humidity); digitalWrite(ledpin, HIGH); // Set LED State to visualize progess delay(1000); digitalWrite(ledpin, LOW); pwrDown(interval - 6); // Shutdown ATmega328 for "interval" seconds; } void pwrDown(int seconds) { set_sleep_mode(SLEEP_MODE_PWR_DOWN); // Set deepest sleep mode PWR_DOWN for(int i=0; i < seconds; i++) { sleep_enable(); // Enable sleep mode sleep_mode(); // Start sleep mode sleep_disable(); // Disable sleep mode } } void watchdogOn() { MCUSR = MCUSR & B11110111; // Disable reset flag, WDRF bit3 of MCUSR. WDTCSR = WDTCSR | B00011000; // Set bit 3+4 to be able to set prescaler later WDTCSR = B00000110; // Set watchdog prescaler to 128K > is round about 1 second WDTCSR = WDTCSR | B01000000; // Enable watchdog interrupt MCUSR = MCUSR & B11110111; } ISR(WDT_vect) { //sleepcounter ++; // count sleepcycles }
http://www.intranet-of-things.com/smarthome/house/sensors/arduino
CC-MAIN-2017-13
refinedweb
565
51.04
... 5 6 7 8 ] Tharaka Abeywickrama Posts: 5 Nickname: tharaka Registered: Feb, 2006 About Jython, why not JPype? Posted: Mar 11, 2006 12:04 AM Reply Advertisement Speaking of Jython, instead of re-implementing Python on Java, why not look at integrating CPython with the JVM, as done in JPype ( ) ? Personally, I think maintaining two implementations is a pain; one will always fall behind. JPype seems to have the right idea. Although the last time I checked it out it was a bit crashy. It also has a few limitations compared to Jython. For example, it cannot inherit a Python class from a Java class. It can only implement Java interfaces. But I these issues are relatively easy to resolve, especially if the PSF gets involved. Because lets face it, the amount of libraries, APIs and frameworks that are available on Java is just HUGE. And to be able to use them from Python easily is a huge bonus. And it will attract the people who are afraid to move to Python because they are afraid of missing their favorite Java framework/API. It can be argued that most Java APIs have substitutions in Python. But for the experienced Java developer, IT IS NOT THE SAME THING. The abstractions are different, the concepts are different, and the Python version could be missing some features. They would have the overhead of learning the new system that would atleast make them unproductive for a while. An example is Swing. Python has its own GUI frameworks but they are NOT Swing. For a Swing pro who knows to do magic with it, its never the same. Therefore Java/Python integration is the key in attracting these people. Lutz Pälike Posts: 2 Nickname: tremolo Registered: Mar, 2006 Re: Marketing Python - An Idea Whose Time Has Come Posted: Mar 11, 2006 5:31 PM Reply My flatmate is working with Ruby on Rails and today he showed me his new Ruby T-Shirt. It looks quite cool i think. see: There are also some other in the ruby collection: I liked also the ones with "Java Rehabilitation Clinic" on it but when i saw there is also "Python Rehabilitation Clinic" i wondered what kind of python symptoms need to be treated by using ruby? Is fun one of these symptoms ? ;) A quick search for python in the t-shirt webshop delivered rather lame results by the way ... Bruce Eckel Posts: 868 Nickname: beckel Registered: Jun, 2003 Re: Marketing Python - An Idea Whose Time Has Come Posted: Mar 17, 2006 3:58 PM Reply Here's an idea: I assumed it was out of our league, but first prize is a 5K gift certificate to a video equipment supplier. Chris Hart Posts: 4 Nickname: chart Registered: Mar, 2006 Re: Marketing Python - An Idea Whose Time Has Come Posted: Mar 25, 2006 7:12 AM Reply Shhh...don't tell anybody about Python. It's our competitive advantage. What's the goal of evangelizing Python anyway? So we have a neat clubhouse where we can play with our friends? So we make oodles and oodles of money? So we get respect from Perl and Java developers? So we encounter less resistance or no resistance to using Python professionally? Python is a mainstream language, and for a language with no big business organization behind it, it has done pretty darn well. Lots of people find it useful for all kinds of programming tasks. There is a big community, some large companies and organizations use it, and plenty of authors are around of good books and articles and blogs. What more could we want? Are we afraid of RoR and Ajax stealing our thunder? Of them taking away our key people? Of them making it impossible for us to code in Python? Afraid we'll like them better? Afraid of us becoming closet Ruby and JavaScript developers? How about we take their best ideas and fold them into Python instead? Petr Mares Posts: 5 Nickname: dramenbejs Registered: Feb, 2006 Re: Marketing Python - An Idea Whose Time Has Come Posted: Mar 26, 2006 2:12 PM Reply Thanks a lot for comments Mr. Hart, it's inspiring! What can we get out of marketing? * Bigger user base, it brings (in respect to python lang): - more 3rd party modules - greater pressure to bring new language features - more massive testing (more people test more :) - lesser competitive advantage over other people * Bigger credibility of python language among managers -- remember that many IT managers don't know about language other than Java & C# :) * It could boost growth of companies providing comercial support for the language -- because it will be more profitable. * Lesser paycheck :(. More python programmers in the public will drop the price of python programmer on the job market. OR * Bigger paycheck :) if the marketing will be targeted on IT managers, not on creating more python programmers :))) Sorry for the english, try to understand me :) John Sirbu Posts: 5 Nickname: silverleaf Registered: Mar, 2006 Re: Marketing Python - An Idea Whose Time Has Come Posted: Mar 27, 2006 9:17 AM Reply @ Chris Hart Are we afraid of RoR and Ajax stealing our thunder? The fact we work with a snake means we are pretty fearless individuals. ;P So it's not fear, it is more a general attitude of rebuke toward the R language. The "Lightning" (e.g. Zen and Power) will always stay the same, "Thunder" (e.g. Adoption and "Branding") is just as important I believe. There a pages arguments constantly talking about how "the best doesn't win, the mediocre is rewarded", classic VHS versus Betamax argument. Redundant question here... if you invest in a technology wouldn't you have wanted your investment to be supported? I believe it is similar with any "new" technology or idea. With Python, we've have invested our time to improve and expand the language. It's power and simplicity is unparallel, and it is widely know that much of the R language/code implementation is based on Python (as of late much of the R community wish to say it is based on Perl). Subconcious fear? No. Subconcious disdain? Maybe. Of them making it impossible for us to code in Python? Hobbists can code in any language however most will agree that their favorite tools are perferable to use rather then the ones that they HAVE to use. Afraid we'll like them better? Doubtful. Anyone who has had experiences with evangalists showing up at their front door handing out pamphlets, will concur. The "pushers" are very nice, nicest people around in fact. Yet, something about their mindset and pushy nature make them universaly rejected. Granted the risk of becoming what you dispise is valid however in a community such as ours momentum is vital to our survival. "Afraid of us becoming closet Ruby and JavaScript developers?" There is a difference I would think between choosing a path and being forced down that path. Once again, Adoption, "Branding" and investment comes into play... How about we take their best ideas and fold them into Python instead? (Google->"History of Computer Languages") Again, it is widely know that much of the R language/code implementation is based on Python (and again as of late much of the R community are starting say it is based on Perl). The Japanese have an expression about what is valuable. "People need to put value on the nut not the flower..." I guess the analogy is about how the flower is ephemeral and the nut is real. Rails is the only widely known "package" the rubbers have for the language at this time. And to go even further it is not even an "application" since it is a framework. The great ideas were already put to good used in Python. Bittorrent, Inkscape, OGRE, BitPim, Fast Artificial Neural Network Library are some of the already developed and mature applications using Python. 4930 packages in Sourceforge using Python... 457 packages in Sourceforge using Ruby... Ruby is only 2/3 years younger then Python, and it remained a language for tinkers (as reflected by it sparse offerings) yet it's mass push behind it's adoption due to one single framework makes no sense. "What's the goal of evangelizing Python anyway?" Recognition, elavation, and adoption of a language that deserves it. -John Basem Narmok Posts: 2 Nickname: narm Registered: Aug, 2005 Re: Marketing Python - An Idea Whose Time Has Come Posted: Apr 12, 2006 7:45 PM Reply I think it is the time to have a *Python Programming Certification*? say foundation, and advanced levels, certification is a very good marketing tool, also if managed will it could generate a reasonable profit for the PSF! We have PHP, Oracle, Microsoft, and many more development certifications, so why don't we have a Python one! maybe this will open the door to teach Python more and more in the universities. I think the industry today needs a Python Certification! Kondwani Mkandawire Posts: 530 Nickname: spike Registered: Aug, 2004 Re: Marketing Python - An Idea Whose Time Has Come Posted: Apr 13, 2006 5:54 AM Reply Chris Hart says: > Python is a mainstream language, and for a language with > no big business organization behind it, it has done pretty > darn well. I wouldn't exactly call Google a small business. Go through the first blog Guido posted when he first got hired by Google, he pointed out that it was the 3rd most used language after C++ and Java. Or by claiming: "with no big business organization behind it", do you mean with no big business that dedicates all its resources to it? I don't think even the software giant M$ dedicates all its resources to one language. Guido van van Rossum Posts: 359 Nickname: guido Registered: Apr, 2003 Re: Marketing Python - An Idea Whose Time Has Come Posted: Apr 13, 2006 6:00 AM Reply > Chris Hart says: > > Python is a mainstream language, and for a language with > > no big business organization behind it, it has done pretty > > darn well. Kondwani Mkandawire replies: > I wouldn't exactly call Google a small business. Google doesn't promote Python. It doesn't spend marketing money on Python. Yes, it pays my salary when I work on Python; but Google gets things in return (such as its name mentioned here :-) that are more valuable to it than promoting Python. So Chris's assertion stands; Python continues to move ahead without big business behind it. Kondwani Mkandawire Posts: 530 Nickname: spike Registered: Aug, 2004 Re: Marketing Python - An Idea Whose Time Has Come Posted: Apr 13, 2006 6:23 AM Reply To me all this sounds like semantics. Quote: "Python is a mainstream language, and for a language > with > > > no big business organization *behind it*," > > Kondwani Mkandawire replies: > > I wouldn't exactly call Google a small business. > "Google doesn't promote Python. It doesn't spend marketing money on Python." Good enough but Google at the moment is still humongous and I guess different people would regard having an entity "behind them" in different ways. Yes, it pays my salary when I work on > Python; IMHO, that is promotion in itself. They are paying you to work with and on your language (my apologies if I misunderstand your situation). > but Google gets things in return (such as its name > mentioned here :-) I'm sure short of Java, every other company that backs a language, gets something in return. > that are more valuable to it than > promoting Python. So Chris's assertion stands; Python > continues to move ahead without big business behind it. Beg to differ on that part. Have used python though for a language without as much backing I think it rocks - was a brilliant contribution to the software community. Robert Wilkins Posts: 1 Nickname: datahelper Registered: Sep, 2006 Re: Marketing Python - An Idea Whose Time Has Come Posted: Sep 30, 2006 9:21 PM Reply Hey guys, If anyone wants to try out a new data crunching programming language, you can find a Linux-compatible copy at For crunching messy and complicated data, and for doing complex data transformations, this new stuff has significant advantages over SQL SELECT, SPSS, and SAS. Robert Michael Mason Posts: 1 Nickname: masonranch Registered: Dec, 2006 Re: Marketing Python - An Idea Whose Time Has Come Posted: Dec 23, 2006 11:05 AM Reply Urgently Looking for Josh Gilbert and Mingo. Contact masonranch@aol.com ASAP please Ian Ozsvald Posts: 1 Nickname: ianozsvald Registered: Jun, 2007 Re: go mobile Posted: Jun 15, 2007 5:29 AM Reply I'll back up Stephan (of Wingware) on this - the Python wiki has a great page on Python IDEs: with 3 recent summary reviews linked at the top (note: 2 of them are mine written on ShowMeDo.com). If you want to see the IDEs in action then there are videos in ShowMeDo for Wing, SPE, IPython and PyDev - each video makes it clear that the IDEs compare well to the IDEs in other languages. I moved from Visual C++ and IDEA into the Python world years ago and have never looked back. Ian. > This is a good example of how the Python community shoots > itself in the foot. Have you actually tried the current > crop of IDEs? > Flat View: This topic has 117 replies on 8 pages [ « | 5 6 7 8 ] Previous Topic Next Topic Sponsored Links Web Artima.com - - Advertise with Us
http://www.artima.com/forums/flat.jsp?forum=106&thread=150515&start=105
CC-MAIN-2014-10
refinedweb
2,243
70.23