text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
We still didn’t get to GANs this week! So pushing this material to next week… Papers: We still didn’t get to GANs this week! So pushing this material to next week… Papers: @mariya and I found some bugs in the DCGAN notebook: Replace: dl,gl = train(MLP_D, MLP_G, 4000) With: dl,gl = train(MLP_D, MLP_G, MLP_m, 4000) Replace: plot_gen() With: plot_gen(MLP_G) @Matthew and I jumped ahead and looked at the dcgan and wgan notebooks at study group on Tuesday. Both of us had the experience, when training the wgan in the notebook, of training our notebooks for hours only to have the notebook freeze upon completion and be unusable. Matt took 4.5 hours to run 200 epochs on his own box, but my Amazon P2 instance took 4 hours to do only 50. Seems like you’ll want to train for small epochs at a time and then save / checkpoint often to avoid our #deepsuffering fate Also - when training dcgan, I found it quite hard to assess if the training was actually improving results. From a purely qualitative perspective, seems like the results got better, then got worse. Ian Goodfellow mentions in his paper how there’s not a universally quantitative way to measure “good”. Is this still true or are there better methods now? One of the big steps in WGAN is that they suggest that loss functions appear to be at least somewhat meaningful with that approach. They don’t have a solid mathematical proof, but it aligns with my experience. I didn’t see the crashes you did. Be sure to set verbose=2 and maybe use tqdm instead if you want to see progress. Otherwise jupyter can crash your browser because of the overly fast progress bar updates. wgan training took a little over 7 minutes for 2 epochs on the p2.xlarge instance CPU times: user 5min 46s, sys: 1min 35s, total: 7min 21s Wall time: 7min 18s I wonder why it is so slow? what are the ways to speed it up? in style transfer we learned about developing a vangogh-irises style transfer, if we were to focus only on a vangogh-irises wgan, could that in theory make things faster? That seems very fast to me. Perceptual losses training took an hour or so. Why would you say it’s slow? if I want to run the imagenet process notebook, can I do it with less than the full imagenet dataset? how can I get such a subset? Thanks if it takes 7 minutes for every 2 epochs, assume linear growth, then it takes 700 minutes for 200 epochs (as in the code), that is 10+hrs. Is that what we should expect? I did check my gpu using nvidia-smi to make sure it’s being used. But I did add limit_mem() after loading the libraries because my notebook crashed without this limiting the memory usage. Sure. You could just download the validation set from imagenet (or academictorrents). Try using… from keras.callbacks import ModelCheckpoint You can set it to save every few epochs or only your best models (unless WGAN I guess you would want to save every few epochs if val_loss is meaningless). You use it by creating a list of callbacks and passing it to the callbacks kwarg of model.fit or model.fit_generator. I want to make sure I understand something correctly, and maybe clarify this for others as well… It looks like any bcolz arrays used with BcolzArrayIterator will need to be randomized ahead of time as well. If for example, you construct your bcolz arrays using by linearly iterating over a directory tree of categorized cat and dog images (eg. using os.walk or Keras’ flow_from_directory with shuffle=False), then most of the chunks in your bcolz array will contain images for the same category (eg. chunks of cats, followed by chucks of dogs). If you used the BcolzArrayIterator on bcolz arrays constructed this way, you would get very poor training performance since it would train your model with batches containing just cats, followed by batches containing just dogs (even with shuffle=True, since this only shuffles images inside each chunk). That’s correct. That’s the key reason we randomized the file name list in the last lesson, so the resized image array would be in a random order. The file names contain the label, so that didn’t cause any problems in this case - otherwise you’d need to be careful to permute your labels in the same way. I was working on the neural-sr.ipynb notebook and created the compressed array using chunk size 32, and is getting out of memory error. I am trying to recreate the array using chunk size = 16 K.set_value(m_sr.optimizer.lr, 1e-4) train(32, 18000) Yeah sorry I forgot to mention in the class - I had the same problem. 16 works fine. I am working through the neural-sr.ipynb. I don’t understand this code. def mean_sqr_b(diff): dims = list(range(1,K.ndim(diff))) return K.expand_dims(K.sqrt(K.mean(diff**2, dims)), 0) One difficulty I have with Keras is that I can’t really call a function with some test inputs and check it’s outputs. It’s just a mean squared error - it averages over all the dimensions except the first. Here’s a tip: you can call a keras function manually to test it out by wrapping the call in K.eval(...). And you’ll need to wrap any arrays you pass it in K.variable(...). E.g.: K.eval(K.sum(K.variable(np.array([1.,2])))) Thank you for the cool tip! K.eval(mean_sqr_b(K.variable(np.array([[1.,2],[3,4]])))) gives array([[ 1.58113885, 3.53553391]], dtype=float32) as expected. I guess we don’t want to average over the first dimension because its the ‘batch’ dimension? In the imagenet_process.ipynb, there is a minor bug. However it affects the least significant digit so its impact is minor. This function: def parse_w2v(l): i=l.index(' ') return l[:i], np.fromstring(l[i+1:-2], 'float32', sep=' ') should be: def parse_w2v(l): i=l.index(' ') return l[:i], np.fromstring(l[i+1:-1], 'float32', sep=' ') The trailing ‘\n’ is actually one character. Alternatively one could just strip newlines during readlines(): lines = [l[:-1] for l in open(w2v_path+'.txt').readlines()] imagenet_process.ipynb classids.txt generation: import nltk # nltk.download() # > d # > wordnet # > q wordnet_nouns = list(nltk.corpus.wordnet.all_synsets(pos='n')) with open(os.path.join(w2v_basepath, 'classids.txt'), 'w') as f: f.writelines(['n{:08d} {}\n'.format(n.offset(), n.name().split('.')[0]) for n in wordnet_nouns])
https://forums.fast.ai/t/lesson-10-discussion/1807
CC-MAIN-2018-51
refinedweb
1,121
74.29
An audiobook is a recording or voiceover of a book or other work read aloud. You can listen to audiobooks on any smartphone, tablet, computer, home speaker system, or in-car entertainment system. In this article, I will walk you through how to create an audiobook with Python. You don’t need to buy a subscription for an audiobook if you have a pdf of the book. At the end of this article, you will learn how to create an audiobook with the python programming language in a few lines of code. Also, Read – Machine Learning Full Course for free. Let’s Create an Audiobook with Python Python has a large collection of libraries due to the very active community which serves various purposes. Here we need to use two libraries pyttsx3 and PyPDF2 to create an audiobook with Python. Both the above libraries can be easily installed by using the pip command; pip install PyPDF2, pip install pyttsx3. Reading the PDF File PyPDF2 allows manipulation of pdf in memory. This python library is capable of tasks such as: - extract information about the document, such as title, author, etc. - document division by page - merge documents per page - cropping pages - merge multiple pages into one page - encrypt and decrypt PDF files - and more. I will use this library to split the pdf file page by page, then read the text on each page, then send the text to the next step in the process to create an audiobook with Python. Code language: Python (python)Code language: Python (python) import PyPDF2 pdfReader = PyPDF2.PdfFileReader(open('file.pdf', 'rb')) The pyttsx3 library is capable of converting text to speech offline. The text that we read from a pdf is then fed as an input to the text-to-speech engine: Code language: Python (python)Code language: Python (python) import pyttsx3 speaker = pyttsx3.init() Now the next step in the process is to loop the process for each page of the pdf file and stop the pyttsx3 speaker engine last: Code language: Python (python)Code language: Python (python) for page_num in range(pdfReader.numPages): text = pdfReader.getPage(page_num).extractText() speaker.say(text) speaker.runAndWait() speaker.stop() Now the next step is to save the audio as mp3 file: Code language: Python (python)Code language: Python (python) engine.save_to_file(text, 'audio.mp3') engine.runAndWait() This is how we can build an audiobook with Python in a few lines of code. I hope you liked this article on how to create an audiobook with the python programming language. Feel free to ask your valuable questions in the comments section below.
https://thecleverprogrammer.com/2020/10/22/create-an-audiobook-with-python/
CC-MAIN-2021-43
refinedweb
434
69.62
Simple library for interacting with the Veracross API Project description Veracross API Python Library Provides an easy way to pull information from the Veracross API in Python. Rate limiting and pagination will be handled automatically. Usage Example: import veracross_api as v c = {'school_short_name': 'abc', 'vcuser': 'username', 'vcpass': 'password' } # Create a new object with library vc = v.Veracross(c) # Follow the guidelines specified here: # Specify the endpoint documented in the api or just one record from that target. # Examples of endpoint are: facstaff, students, classes, courses, course_schedules, enrollments, etc. # To return one record from that target, just specify the id number. # Additional parameters are passed using a dictionary. # Return all faculty and staff data = vc.pull("facstaff") print(data) # Return one faculty and staff member by id data = vc.pull("facstaff/99999") print(data) # Pass url parameters in a dictionary to the pull method. # Return all faculty staff updated after 2019-01-01 param = {"updated_after": "2019-01-01"} data = vc.pull("facstaff", parameters=param) print(data) # Return the amount of requests left in rate limiting vc.rate_limit_remaining # Return the amount of time left before the limit is reset vc.rate_limit_reset All data will be returned as a dictionary. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/veracross-api/
CC-MAIN-2020-40
refinedweb
220
50.33
06 October 2011 07:44 [Source: ICIS news] By Felicia Loo and Clive Ong ?xml:namespace> Naphtha prices in Asia closed on Wednesday at $860-861/tonne (€645-646/tonne) CFR (cost and freight) Japan - the weakest level since 27 June, with the naphtha crack spread versus November Brent crude futures narrowing to a two-month low of $97.60/tonne, according to ICIS data. At midday on Thursday, naphtha prices were higher at $866.50-869.50/tonne CFR Japan on the back of overnight gains in global crude futures following the release of US government data confirming an unexpectedly large fall in domestic crude stocks. In Asian trade on Thursday, NYMEX crude and Brent crude softened but continued to trade above $79/bbl and $102/bbl levels, respectively. “There are more [naphtha] cargoes available in the market and the premiums are narrowing,” said a northeast Asian trader. He added that “margins are worsening and some petrochemical makers are already cutting runs” in northeast Integrated LDPE margins fell by $56/tonne week on week to $448/tonne on 30 September, and HDPE margins lost $67/tonne to $298/tonne in the same period, according to ICIS. Meanwhile, butadiene prices were assessed at $2,700-2,800/tonne CFR in the week ended 30 September, down by 20.8% from four weeks ago, ICIS data showed. Weakening demand from the downstream styrene butadiene rubber (SBR), butadiene rubber (BR) and acrylonitrile-butadiene-styrene (ABS) segments further dampened buying appetite. Spot prices of styrene monomer (SM) closed at below $1,300/tonne FOB (free on board) “The overall market is still weak as the fourth quarter is a traditionally slow season for SM and downstream styrenics market,” said a Korean trader. With the uncertain economic outlook in the “With several plant outages and maintenance ongoing in Reflecting a bearish market, South Korea’s Honam Petrochemical bought by tender 50,000 tonnes of open-spec naphtha for delivery to Yeosu and Daesan in the first half of November at premiums of $3.50/tonne and $4.00/tonne to Japan quotes CFR. The premiums were higher in previous South Korean spot tenders, where the cargoes were awarded at Also, On 21 September, FPCC restarted its 700,000 tonne/year No 1 naphtha cracker located in Mailiao following a prolonged outage since May. The cracker is now running at 90% capacity. The company also runs a 1.03m tonne/year No 2 cracker, which is operating at 100% capacity. “The market is waiting for The possibility of cuts on production rates by downstream players is weighing on naphtha prices, traders said. Honam Petrochemical’s Daesan-based PP plant will remain shut for another 10 days after the 20-day turnaround because of poor market conditions, a company source said. A source from Korea Petrochemical Industry Co said the company is considering cutting production at its 470,000 tonne/year PP plant at Regional crackers are not seeking much naphtha supplies in the spot market, traders said. “Crackers are minimising their requirements and their [naphtha] stocks are more than comfortable,” one trader said. ($1 = €0.75) Additional reporting by Chow Bee Lin and James Dennis For more on naphtha,
http://www.icis.com/Articles/2011/10/06/9497920/asia-naphtha-to-stagger-on-weakening-margin-volatile.html
CC-MAIN-2014-10
refinedweb
534
50.77
import "golang.org/x/net/context/ctxhttp" Package ctxhttp provides helper functions for performing context-aware HTTP requests. Do sends an HTTP request with the provided http.Client and returns an HTTP response. If the client is nil, http.DefaultClient is used. The provided ctx must be non-nil. If it is canceled or times out, ctx.Err() will be returned. Get issues a GET request via the Do function. Head issues a HEAD request via the Do function. func Post(ctx context.Context, client *http.Client, url string, bodyType string, body io.Reader) (*http.Response, error) Post issues a POST request via the Do function. func PostForm(ctx context.Context, client *http.Client, url string, data url.Values) (*http.Response, error) PostForm issues a POST request via the Do function. Package ctxhttp imports 5 packages (graph) and is imported by 738 packages. Updated 2019-08-13. Refresh now. Tools for package owners.
https://godoc.org/golang.org/x/net/context/ctxhttp
CC-MAIN-2019-35
refinedweb
153
64.78
@tag) packagenames:... @argfiles -Joptions are not allowed in these files. You can run the Javadoc tool on entire packages, individual source files, or both. When documenting entire packages, you can either use -subpackages for: ac to .java source ( see the Java Language Specification) that are present in the .class files. > Notice this is just a normal HTML file and does not include a package declaration. The content of the package comment file is block tags must appear after the main description. If you add a @see tag. @see tag, it must have a fully-qualified name. When you run the Javadoc tool, you specify the overview comment file name with the -overview option. The file is then processed similar to that of a package comment file. <body>and </body>tags for processing. To include unprocessed files, put them in a directory called doc-files"> */ information about Identifiers in the Java Language Specification). be thread-safe, there's no reason to specify that we achieve this by synchronizing all of its exported methods. We should reserve the right to synchronize internally at the bucket level, thus offering higher concurrency. /* statement horizontal and vertical distances of point (x,y) */ public int x, y; // Avoid. { @see tags together. } This {@docRoot} tag can be used both on the command line and in a doc comment: This tag is valid in all doc comments: overview, package, class, interface, constructor, method and field, including the text portion of any tag (such as @return, @param and @deprecated).). /** * } } {@link}, except the link's label is displayed in plain text than code font. Useful when the label is plain text. Example: Refer to {@linkplain add() the overridden method}.This would display as: Refer to the overridden method. {@literaltext } A<B>CA<B>C <B>is not interpreted as bold (and it is not in code font). If you want the same functionality but with the text in code font, use {@code}. @paramparameter-name description) { }For more details, see writing @param tags. @throws tag does not exist for an exception in the throws clause, the Javadoc tool automatically adds that exception to the HTML output (with no description) as if it were documented with @throws tag. The @throws documentation A doc comment may contain multiple @version tags. If it makes sense, you can specify one version number per @version tag. . option only when passing package names into the javadoc command – it will not locate .java files passed into the javadoc command. (To locate .java files, cd to that directory or include the path ahead of each file, as shown at Documenting One or More Classes.) If -sourcepathpackage whose is omitted, the Javadoc tool uses -classpath to find the source files as well as class files (for backward compatibility). Therefore, if you want to search for source and class files in separate paths, use both -sourcepath to find user classes as it relates to extension classes and bootstrap classes, see How Classes Are Found. in conjunction with -exclude to exclude specific packages. ). -classpath(above) for more details. Separate directories in dirlist with semicolons (;). option has no effect except for English from 1.2 forward. English has its own default algorithm: <P>. -localeoption must be placed ahead (to the left) of any options provided by the standard doclet or any other doclet. Otherwise, the navigation bars will appear in English. This is the only command-line option that is order-dependent. Specifies the locale that javadoc uses when generating documentation. The argument is the name of the locale, as described in java.util.Locale documentation, such as. EUCJIS/SJIS. If this option is not specified, the platform default converter is used. Also see -docencoding and -chars.) For example, the following generates the documentation for the package com.mypackage and saves the results in the C:\user\doc\ directory: C:> javadoc -d \user\doc com.mypackage For example, let's look at what might appear on the "Use" page for String. The getName() method in the java.awt.Font class returns type String. Therefore, getName() uses String, and you will find that method on the "Use" page for String. Note that this documents only uses of the API, not the implementation. If a method uses String in its implementation but does not take a string as an argument or return a string, that is not considered a "use" of String. You can access the generated "Use" page by first going to the class or package, then clicking on the "Use" link in the navigation bar. argument is relative to the destination directory ( docs/spi). Details - The -link option option for javadoc to create links only to API within the documentation it is generating in the current run. (Without the -link option,: importstatement: by wildcard import, import explicitly by name, or automatically import for java.lang.*. For example, this would suffice: import java.io.*; java.lang.*. void foo(File f) {} implements, extendsor throwsstatement. An important corollary is that when you use the -link option, option requires that a file named package-list, which is generated by the Javadoc tool, exist at the URL you specify with -link. The package-list file is a simple text file that lists the names of packages documented at that location. In the earlier example, the Javadoc tool looks for a file named package-list at the given URL, reads in the package names and then links to those packages at that URL. For example, the package list for the Java SE API is located at and starts as follows: java.applet java.awt java.awt.color java.awt.datatransfer java.awt.dnd java.awt.event java.awt.font etc. When javadoc is run without the -link option, when it encounters a name that belongs to an external referenced class, it prints the name with no link. However, when the -link option is used, the Javadoc tool searches the package-list file at, but your shell does not have web access. You could open the package-list file in a browser at, save it to a local directory, and point to this local copy with the second argument, packagelistLoc. In this example, the package list file has been saved to the current directory " ." . The following command generates documentation for the package com.mypackage with links to the, the package-list file is generally local, and when using relative links, the file you are linking to is also generally local. So it is usually unnecessary to give a different path for the two arguments to -linkoffline. When the two arguments are identical, you can use -link. See the -link relative example. Manually Creating a package-list File - If a package-list file. Likewise, two companies can share their use for -linkoffline optionline to. This option exposes all private implementation details in the included source files, including private classes, private fields, and the bodies of private methods, regardless of the -public, -package, -protected and -private options. Unless you also use the -private option, not all private classes or interfaces will necessarily be accessible via links." If you do not supply any -group option,.()"> :Xaoptcmf:"taghead " option and @author. Omitting taghead causes tagname to appear as the heading. Placement of tags - The Xaoptcmf part: To Do: The documentation for this method needs work.Use of Colon in Tag Name - A colon can be used in a tag name if it is escaped with a backslash. For this doc comment: /** * X to works even if @todo is missing, then the position of -taglet determines its order. If they are both present, then whichever appears last on the command line determines its order. (This happens because the tags and taglets are processed in the order that they appear on the command line. For example, if -taglet and -tag both or -taglet is considered unknown, and a warning is thrown. The standard tags are initially stored internally in a list in their default order. Whenever -tag options are used, those tags get appended to this list – standard tags are moved from their default position. Therefore, if. option in place of its -tag option,: " containing: com.mypackage1 com.mypackage2 com.mypackage3You would then run javadoc with: C:> javadoc @options @packages path1or path2): C:>: C:> javadoc -bottom @bottom @packagesOr you could include the -bottomoption at the start of the argument file, and then just run it as: C:> javadoc @bottom @packages . See Doclet Overview for more information. C:\home\src\java\awt\*java. The destination directory is C:\home\html. * option. The examples below illustrate both alternatives. argument, such as java:javax:org.xml.sax.:> cd C:\home\src C:> javadoc -d \home\html java\awt\Button.java java\applet\Applet.javaThis example generates HTML-formatted documentation for the classes Buttonand Applet. same example is shown twice – first as executed on the command line, then as executed from a makefile. It uses absolute paths in the option arguments, which enables the same javadoc command to be run from any directory. C:\>. This Platform, Standard Edition 6 API Specification' DOCTITLE = 'Java™ Platform, Standard Edition 6 API Specification' HEADER = '<b>Java™ Platform, Standard Edition 6'':
http://docs.oracle.com/javase/7/docs/technotes/tools/windows/javadoc.html
CC-MAIN-2016-40
refinedweb
1,516
56.66
let computeSomething a b d= a#doTheAThing (b#doTheBThing true) (d#doTheDThing "MyString?") let compose g f = function x -> g (f x) let fold_left operator start list = let rec rfold accumulator l = match l with [] -> start | head :: tail -> rfold (operator accumulator head) tail in rfold start list let composeParamList list = fold_left (function f -> function y -> match y with (a,b,d) -> compose (computeSomething a b d) f) (function x -> x) listNow let me see an example of Ruby or whatever. You can't be taking this attitude of Static Typing Is Guilty Until Proven Innocent. A short description of what the code above does would be very useful... -- ct I don't have the slightest idea what the above code is doing (I think it's a little bit confused - why in the world do you need that rfold function?), but here's what I think is a pretty direct translation into Ruby: def computeSomething(a, b, d) a.doTheAThing(b.doTheBThing(true), d.doTheDThing("MyString?")) end def compose(g, f) lambda { |x| g[f[x]] } end def fold_left(list, start, &operator) if list.empty? start else fold_left(list.rest, operator[start, list.first], &operator) end end def composeParamList(list) fold_left(list, lambda {|x| x}) { |f, y| a, b, d = y; compose(computeSomething(a, b, d), f) } endOf course, in Ruby I wouldn't write the fold_left function that way - it seems a lot simpler to just do it the (gasp!) imperative way: def fold_left(list, n, &operator) list.each { |x| n = operator[n, x] } n endHave I misunderstood what the OCaml code is doing? -- AdamSpitz The Ocaml code assumes that computeSomething(a,b,c) is returning a function from X to X, then it defines a function over a list of triplets (a,b,c) that returns the function composition of all the functions generated by computeSomething for all the parameters in the list. The inner function rfold in fold_left implementation it's only one of my coding habits, if I can I like to cut down the length of expressions, so the recursion is defined only in terms of 2 parameters instead of three, because operator never changes. I ventured a wild guess and I think I corrected a mispelled code in composeParamList, please correct me if I'm wrong. I'm a little puzzled why you have to treat operators different than functions, but every language has its own curiosities, that's not big deal. But anyways, it looks to me that you haven't gained much with your DynamicTyping in Ruby, isn't it ? The Ocaml code is as concise and as flexible as the Ruby code. More importantly if somebody reads match y with (a,b,d) -> ... gets a clear idea that y has to be a triple, and when he sees the typing from the compiler, he gets a better picture of specifically what kind of triple there need to be in that list, somebody who reads computeSomething(*y) hardly gets a clue, unless he has access to the source code of computeSomething(a,b,d). It looks to me that important design information gets lost in a dynamically typed environment. There's also the aspect that ocaml runs orders of magnitude faster, and a very substantial part of ocaml optimizations are result of its advanced type system. Sigh... :) I don't blame you for trying to correct that code in composeParamList; it's related to one of the Ruby idiosyncrasies that I hate most. Ruby gives you a special syntax for passing a closure into a function, so that you don't have to type the word "lambda" all the time - but if you want to pass more than one closure into a function, you're stuck; you've just gotta type out the word "lambda" and pass it in as an ordinary parameter. I've fixed the *y thing, too. I thought that *y was just as clear (and more direct) than expanding it out to (a, b, d), but if we had meaningful variable names it'd definitely be better your way. :) As for the runtime speed, the Self VM is the DynamicTyping world's poster child for cool optimizations; take a look at some of the papers on the SelfLanguage website if you're interested. We can still do type-based optimizations, even without a type system. ;) -- AdamSpitz B has a method #doTheBThing bool -> B1 D has a method #doTheAThing string -> D1 A has a method #doTheAThing -> B1 x D1 -> RAgain. basically this is pretty much the same reasoning that a Smalltalk or Ruby programmer has to keep in his head. When you go into deeper abstraction the type equations may become quite complicated for a programmer, even if intuitively he manages it, the details that affect the code are very prone to errors. It is much more comfortable to get those errors reported by the compiler together with what is expected for typing to be reasonable. In the same time, there can be warning bells, not necessarily errors: a function that you write is reported to have a type different than you'd expect. It is likely that you've got a programming error. TypeInference is absolutely essential for TypefulProgramming because otherwise it'd be prohibitive for the programmer to explicitly describe the typing relations. Just the sheer annoyance of writing so many symbols will put everybody off, like the sheer annoyance of writing Java interfaces will put off people when trying to create more powerful abstractions in Java. There are also limits to TypeInference that makes it strictly less expressive than DynamicTyping. There is correct dynamically typed code that you just can't write in Haskell or Ocaml. The most common example is: let f x= if x#test then x#doSomething else x#doSomethingElseIn Ocaml x is required to have #test and both doSomething and doSomethingElse. If x's class has only the #doSomething method but always returns true to #test, you'll get a type error in Ocaml even if the code would be running perfectly well in Ruby. It is believed however that such code is rare, unnecessary, and most likely a sign of not so good design. In any case one can always write code that does the same thing as the "untypable" code, albeit in a different form. (You can't avoid such code in cross-browser JavaScript. --VladimirSlepnev?) objectStore.getClient 143 # return client 143 objectStore.getInvoice 15 # return invoice 15then my method_missing signature looks like: def method_missing (methodId, objId) # get the metaclass -- Client, Invoice, etc -- for the objectType methodName = methodId.id2name methodName =~ /^get(.*)$/ objectTypeName = $1 objectType = ClassUtil?.getObjectTypeFromString objectTypeName # now get the object get objectType, objId end(methodId is a Ruby-defined object describing the method call, and I'm free to add as many parameters after that that I like.) But if I also want to retrieve a collection of domain objects by querying for a search term, those method calls might look like: objectStore.getClients "%International%", "name" # return a collection of clients whose "name" fields contain the string # "International"then my method_missing signature needs to handle both cases, in which case the signature might look like: def method_missing (methodId, objIdOrSearchTerm, fieldName = nil) # if we just want a single object objId = objIdOrSearchTerm # use the old code to search by objId # but if we want a collection searchTerm = objIdOrSearchTerm # write your new code to search using searchTerm and fieldName endThis is messy, in its way, and if I keep cramming stuff into method_missing I'm going to have to look into different ways to deal with it. (I certainly don't want to have parameters called objIdOrSearchTermOrCacheDurationOrSummaryField). But right this minute, it's working fine, and it makes my life quite a bit easier. -- francis sub with(IDisposable $value, mu &block) PRE { &block.arity == 1 } does { try { block($value); } ensure { $value.Dispose(); } }There. with accepts an IDisposable, and a block with Arity 1 (takes one argument, as specified by the precondition). Example code: with open("File.txt") -> $file { # do something with filehandle $file. }TypesAreInterfacesNotClasses?. let f x = if x#test then x#doSomething else x#doSomethingElse;;where this works in dynamically-typed systems if x#test is always true when x#doSomethingElse is undefined... Equivalent code (which can be statically proven not to have a *type* error) be something like: let f x = if x#test then ( assume (x : {doSomething : () -> ?}) in x#doSomething() else (* Error handling or throw exception. *) ) else ( assume (x : {doSomethingElse : () -> ?}) in x#doSomethingElse() else (* ... *) )Where assume (typedeclaration) is an explicit run-time type check, and ? means "infer this for me, woudja?"
http://c2.com/cgi-bin/wiki?TypefulProgramming
CC-MAIN-2015-35
refinedweb
1,432
58.82
Does anybody know how to import the auto complete emails from PC Outlook 2007 to Mac Outlook 2011? This is not the same as auto complete for words, but what happens when we begin to type in an email in Outlook and it shows us a list of emails to choose from. I imported contacts fine but don’t know how to do this. Sorry for the long winded question but want to avoid confusion. import 'Outlook.nk2 The .nk2 file contains all the auto complete addresses for outlook in windows and is located in Windows XP:- C: drive:Documents and Settingsuser nameApplication DataMicrosoftOutlook. Windows Vista/7:- C: drive: usersuser nameAppDataRoamingMicrosoftOutlook You can copy the .nk2 file from the above locations. or use this soft the nk2 editor Install it> open the nk2.exe file> select all contacts> click on file> save selected items > in save as type choose CSV> ok Use this CSV format and then you can import it office 2011 on MAC You will observe that the CSV file if imported directly, will not be displaying the desired information in Mac Outlook, under contacts. So you need to edit the CSV file, and delete all other fields except the "Name" and "Email Address" fields. Once the CSV file is edited, import it in Outlook for Mac using the Import option. You will now find all the contacts from the NK2 file under a new contact list in Mac Outlook. or you can instal thunderbird on both computers in windows you can import contacts as LDIF, then the file you add it on usb key and then transfer to MAC and ask thunderbird to import LDIF then will go to MAC office.
http://www.makeuseof.com/answers/import-autocomplete-email-outlook-2011-mac/
CC-MAIN-2017-51
refinedweb
284
66.57
Hi, I have a script i am porting over from 2.62 to 2.63. In my script i go through an objects faces and select/deselect based on some criteria. However i have found that deselecting a face/polygon does not work as i expect. E.G myFace.select = False When I examine the face in blender I can see that the face appears to not be selected however it has a yellow outline as if the edges are selected. When I perform an operation on the mesh it treats the face as selected. If I set a face to selected = True it works as 2.62 however the de selection seems to have changed and I cant figure out how to do it. I have tried deselecting the verts and edges but it has no effect. So how can I deselect a polygon in blender 2.63? Thanks Karl How can I deselect a face/poly. The 2.62 way no longer works Scripting in Blender with Python, and working on the API Moderators: jesterKing, stiv 2 posts • Page 1 of 1 switch to object mode, set polygons[n].select = False, switch back to edit mode or: use bmesh api or: use bmesh api Code: Select all import bmesh ob = bpy.context.object me = ob.data bm = bmesh.from_edit_mesh(me) bm.faces[n].select_set(False) # you may use select = False, but make sure you flush selection (see docs) I'm sitting, waiting, wishing, building Blender in superstition... 2 posts • Page 1 of 1 Who is online Users browsing this forum: No registered users and 0 guests
http://www.blender.org/forum/viewtopic.php?t=25158&view=previous
CC-MAIN-2015-22
refinedweb
268
73.68
On Apr 13, cattelan@xxxxxxxxxxx wrote: > First it would be best not to include linux/fs.h in any file > not in the fs/xfs/linux directory. It will cause headaches. > > If fact that linux/xfs_sema.h shouldn't be there either. > > If the file is general enough say types.h it can go > in xfs_os_defs.h; I doesn't really hurt if some > *.c files include some header files they don't need. > As long as it doesn't create conflicts. > > if you really really need to include fs.h put it > in xfs_os_defs.h with an > #ifdef NEED_FS_H wrapper. > then define it in what ever file that it is needed. If I were to change this: #include <xfs_os_defs.h> #ifdef SIM #define _KERNEL 1 #define __KERNEL__ 1 #endif #include <linux/xfs_sema.h> to this: #define NEED_XFS_SEMA_H #include <xfs_os_defs.h> #ifdef SIM #define _KERNEL 1 #define __KERNEL__ 1 #endif it wouldn't do the Right Thing in the SIM case, would it? -Phil -- Phil Schwan, Captain Popetastic, Linuxcare Canada
http://oss.sgi.com/archives/xfs/2000-04/msg00189.html
CC-MAIN-2014-52
refinedweb
170
71.92
Inside the constructor, as well as in other methods belonging to the object, a special keyword called this can be used. This keyword is a reference to the current instance of the class. If, for example, the constructor’s parameters have the same names as the corresponding fields, then the fields could still be accessed by using the this keyword, even though they are shadowed by the parameters. package javaapplication19; public class JavaApplication19 { static class MyRectangle { int x, y; public MyRectangle(int x, int y){ this.x = x; this.y = y; } public int getArea() { return x * y; } } public static void main(String[] args) { MyRectangle r = new MyRectangle(10, 20); int area = r.getArea(); System.out.println(area); } }
https://codecrawl.com/tag/this/
CC-MAIN-2018-30
refinedweb
117
53.41
We will see two programs to find the average of numbers using array. First Program finds the average of specified array elements. The second programs takes the value of n (number of elements) and the numbers provided by user and finds the average of them using array. To understand these programs you should have the knowledge of following Java Programming concepts: 1) Java Arrays 2) For loop Example 1: Program to find the average of numbers using array public class JavaExample { public static void main(String[] args) { double[] arr = {19, 12.89, 16.5, 200, 13.7}; double total = 0; for(int i=0; i<arr.length; i++){ total = total + arr[i]; } /* arr.length returns the number of elements * present in the array */ double average = total / arr.length; /* This is used for displaying the formatted output * if you give %.4f then the output would have 4 digits * after decimal point. */ System.out.format("The average is: %.3f", average); } } Output: The average is: 52.418 Example 2: Calculate average of numbers entered by user In this example, we are using Scanner to get the value of n and all the numbers from user. import java.util.Scanner; public class JavaExample { public static void main(String[] args) { System.out.println("How many numbers you want to enter?"); Scanner scanner = new Scanner(System.in); int n = scanner.nextInt(); /* Declaring array of n elements, the value * of n is provided by the user */ double[] arr = new double[n]; double total = 0; for(int i=0; i<arr.length; i++){ System.out.print("Enter Element No."+(i+1)+": "); arr[i] = scanner.nextDouble(); } scanner.close(); for(int i=0; i<arr.length; i++){ total = total + arr[i]; } double average = total / arr.length; System.out.format("The average is: %.3f", average); } } Output: How many numbers you want to enter? 5 Enter Element No.1: 12.7 Enter Element No.2: 18.9 Enter Element No.3: 20 Enter Element No.4: 13.923 Enter Element No.5: 15.6 The average is: 16.225
https://beginnersbook.com/2017/09/java-program-to-calculate-average-using-array/
CC-MAIN-2020-40
refinedweb
337
53.78
range() vs enumerate() In this lesson, you’ll learn how to use range() and enumerate() to solve the classic interview question known as Fizz Buzz. range() is a built-in function used to iterate through a sequence of numbers. Some common use cases would be to iterate from the numbers 0 to 10: >>> list(range(11)) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] To learn more, check out The Python range() Function. enumerate() is a built-in function to iterate through a sequence and keep track of both the index and the number. You can pass in an optional start parameter to indicate which number the index should start at: >>> list(enumerate([1, 2, 3])) [(0, 1), (1, 2), (2, 3)] >>> list(enumerate([1, 2, 3], start=10)) [(10, 1), (11, 2), (12, 3)] To learn more, check out Use enumerate() to Keep a Running Index. Here’s the solution in the video: for i, num in enumerate(numbers): if num % 3 == 0: numbers[i] = "fizz" if num % 5 == 0: numbers[i] = "buzz" if num % 5 == 0 and num % 3 == 0: numbers[i] = "fizzbuzz" You could also solve the question using only one if condition: for i, num in enumerate(numbers): if num % 5 == 0 and num % 3 == 0: numbers[i] = "fizzbuzz" elif num % 3 == 0: numbers[i] = "fizz" elif num % 5 == 0: numbers[i] = "buzz" Both are valid. The first solution looks more similar to the problem description, while the second solution has a slight optimization where you don’t mutate the list potentially three times per iteration. In the video, you saw the interactive Python Terminal iPython. It has color coating and many useful built-in methods. You can install it. 00:00 In this video, you will learn about two different built-in functions, range() and enumerate(). Let’s get started with a real-world coding question called FizzBuzz. 00:11 So, fizz_buzz() is a function that takes in a list of integers and replaces all integers that are evenly divisible by 3 with the string "fizz", replaces all integers divisible by 5 with the string "buzz", and replaces all integers divisible by both 3 and 5 with the string "fizzbuzz". 00:30 So, here is the example. We have a list [45 , 22 , 14 , 65 , 97 , 72 ]. 00:36 We call fizz_buzz() on that list, and it mutates that list and replaces the numbers according to these rules. So, 45 is divisible by both 3 and 5, so it gets replaced with the string "fizzbuzz". 22 and 14 are not divisible by either 3 or 5. 00:55 65 is divisible by 5— "buzz". 97 is not divisible. 72 is divisible by 3 and we get the string "fizz". 01:06 So, let’s try to code out the solution using the built-in function range(). So, range() takes in a number. Given a list, we have to get the length of the list and pass that in, and it will iterate through the numbers 0 until whatever you pass in. 01:24 Let’s run this file interactively to see how range() works. So, --no-banner, just to remove some of the output at the top, and range_vs_enumerate.py. 01:35 Cool. Let’s call fizz_buzz() on just a random list, [1, 2, 3]. We get 0 1 2. That’s because the length of the list is 3, and so we iterate through 0, 1, and 2, not including 3. 01:51 So, how can we use range() to solve our problem? Well, we can get that number at that index, numbers[i], and then we do our checks here. if num % 3—so, that will check if it’s divisible, because % (modulo) gets the remainder and if there is no remainder, then it is divisible by 3. 02:11 Then, we mutate our numbers, at index i, equal to the string "fizz". Let’s copy this for 5. 02:26 Then our last case is if it’s both divisible by 3 and divisible by 5— 02:34 then, we get numbers[i] = "fizzbuzz". So, let’s run this file interactively. Copy this, 02:51 numbers. Cool! That output looks correct. Before we move on, let’s just quickly go over the doctest module in Python. So, it’s a really nice module that we will actually go over more in detail in a couple videos, but basically when you run python3 -m doctest and then the filename, it will look at all your functions, look at the docstring, look at the doctests, and then run them and compare the output here with the output of your code. So when we run it, if nothing is outputted, that means it passes all the tests. But if—let’s just say—we change this string, we run it, it says Expected—this is our expected—but we actually got this by running the code. 03:41 So that way, it’s a little easier. Instead of copying and pasting like this, we can just run this command over and over to test our code. 03:50 So now, let’s clean this code up using enumerate(). enumerate() is a function that takes in an iterable and iterates through that iterable and returns tuples where the first element of the tuple is the index and the second element is the value. 04:04 So, if we do something like [tup for tup in enumerate([1, 2, 3])]—forgot to close—we get 0—which is the index, 1—which is the value, 1—which is the index, 2—which is the value. So, how can we use that to clean this up? 04:21 I’m just going to copy and paste this code below, comment this out, just so you can see the difference, change this to enumerate(). It takes an iterable, not a number. 04:33 Then, we do num, like this. 04:39 Save it, close this, run our doctest module. And it works. 04:45 One nice parameter we can pass into enumerate() is this extra start parameter, where we can basically loop through our iterable but the index will actually start at this index. 05:02 So, the code will look like this. And our index is 10, 11, 12. So, that concludes the video regarding range() and enumerate(). In the next video, you will learn about list comprehensions and some useful list methods. Mind blown. Thank you. Hi James, I installed iPython and ran it via command line but I don’t see similar interface like your have. Can you please tell me how I can have the similar setup as you? Thanks Hi @Abu I am also using VSCode in the video with “Dainty – Monokai” theme. I have a split screen setup with iPython + Terminal on one side and code on the other side. I actually realized it after posting the comment :-) Thanks a lot though. Hey @James Thanks for the video and mentioning ipython. I didn’t knew about ipython before, so I tried it after watching the view. But when i get into ipython it was surprising the interface was very familiar. I was using it daily for the past couple o months in django shell, didn’t noticed I was using ipython. LOL. Hello James, great material! My question is does the order of the if statements matter? I had thought that the if statement for those numbers divisible by 5 and 3 should come first, because it is the strictest. I am a newb, so I apologize if this is a dumb question. Thanks, Mark @Agil Yea IPython creeps into everything! I just learned about the ipdb debugger which expands on pdb. import ipdb; ipdb.set_trace() pypi.org/project/ipdb/ @Mark No dumb questions at all! You are right there is another way to write the code: if num % 5 == 0 and num % 3 ==0: numbers[i] = "fizzbuzz" elif num % 5 == 0: numbers[i] = "buzz" elif num % 3 == 0: numbers[i] = "fizz" But be careful you need to use if-elif-elif or else the following conditions will override the first one (3 if statements). Try it out yourself! Thanks, James! Become a Member to join the conversation. James Uejio RP Team on April 27, 2020 I use the interactive Python Terminal iPython. It has color coating and many useful built in methods. You can install it here.
https://realpython.com/lessons/range-vs-enumerate/
CC-MAIN-2020-40
refinedweb
1,408
80.31
Hi brian I uploaded the latest changes that support batch mode. see more comments bellow On 9/30/05, Brian Moseley <bcm@osafoundation.org> wrote: > i installed version 1.3.1 of the javaapp plugin and ran 'maven' to > build. running the jar gives me: 'Failed to load Main-Class manifest > attribute from jcr-commands-1.0-dev.jar'. am i doing something wrong? > thanks! > the problem with your previous attempt to run the cli was that you built the runnable jar called jcr-commands-1.0-dev-app.jar but you tried to run the jar built by default by maven. You have to run the goal "javaapp" and run the xxx-app.jar . the javaapp plugin will build a single runnable jar called "jcr-commands-1.0-dev-app.jar" that includes all the dependencies (jackrabbit, jcr-rmi, etc) and will create the manifest file pointing to the runnable java class. see the README file. type "help" in the interactive mode to get the list of available commands and type "help <command name>" to get a command's detailed description. I copied some texts from the spec but I soon realized that it might be illegal, is it? could anyone tell me if I can just copy some descriptions from the spec?, so unfortunately for you ;) I wrote most of the bundle. I'll review it asap. I warn you that it's not thorougly tested, see the TODO file regarding a recent post of yours, I let you know that it includes a command for registering namespaces and I plan to implement one for registering node types soon. br, edgar
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200510.mbox/%3C8a83c96b0510041415v700c43bakeaa20e2466a30d91@mail.gmail.com%3E
CC-MAIN-2017-13
refinedweb
273
74.69
public class Solution { public List<Integer> countSmaller(int[] nums) { List<Integer> output = new ArrayList<Integer>(); TreeSet<Integer> previouses = new TreeSet<Integer>(); if(nums.length == 0) { return output; } else { output.add(0); previouses.add(nums[nums.length - 1]); } for(int i = nums.length - 2; i >= 0; i--) { previouses.add(nums[i]); output.add(previouses.headSet(nums[i]).size()); } Collections.reverse(output); return output; } } Theoretically, the run time should be O(nlogn). What makes you think it should be O(nlogn)? I'm not sure, but I think your previouses.headSet(nums[i]).size() is likely only O(n), could be optimized to achieve O(logn), but that it would take extra effort/costs that are just not worth it because it's an unusual use case. I ran a little test, added three lines: public List<Integer> countSmaller(int[] nums) { nums = new int[5000]; for (int i=0; i<nums.length; i++) nums[i] = -i; ... (rest of your code) Running that took about 170 ms. Doubling the size to 10000 numbers roughly quadrupled the time to about 650 ms. So looks like your solution is indeed only O(n^2). Interesting idea, though. I didn't know headSet before. Calling size() on a headSet/tailSet view is linear time operation in Java.
https://discuss.leetcode.com/topic/33353/why-using-java-treeset-ends-up-a-time-limit-exceeds
CC-MAIN-2017-47
refinedweb
211
52.76
A lightweight library for working with Akoma Ntoso Act documents Project description Cobalt is a lightweight Python library for working with Akoma Ntoso documents. It makes it easy to work with Akoma Ntoso documents, metadata and FRBR URIs. It is lightweight because most operations are done on the XML document directly without intermediate objects. You still need to understand how Akoma Ntoso works. Read the full documentation at cobalt.readthedocs.io. Quickstart Install using: $ pip install cobalt Use it like this: >>> from cobalt import Act >>> act = Act() >>> act.>> act.>> act.frbr_uri.year '1980' >>> act.frbr_uri.date '1980-05-03' >>> act.frbr_uri.number '10' >>> act.frbr_uri.doctype 'act' >>> print act.to_xml() [ lots of xml ] Contributing Clone the repo Install development dependencies: pip install -e .[dev] Make your changes Run tests: nosetests && flake8 cobalt Send a pull request Releasing a New Version Run the tests! Update VERSION appropriately Update the Change Log section in README.rst Commit and tag: git tag vX.X.X git push -u origin --tags Build artefacts: rm -rf build dist && python setup.py sdist bdist_wheel Upload to PyPI: twine upload dist/* License and Copyright Cobalt is licensed under the LPGL 3.0 license. Cobalt is Copyright 2015-2020 AfricanLII. Change Log 4.1.0 - Allow setting of missing component names 4.0.2 - Better error handling when parsing malformed XML. 4.0.1 (replaced by 4.0.2) 4.0.0 - Support AKN 3.0 namespaces - Produce URIs with akn prefix by default (backwards compatibility maintained) - Support all Akoma Ntoso document types - Start FRBR URI work component with ! (eg. !main) - FRBRcountry uses full country code from the FRBR URI - FRBRnumber uses number portion from FRBR URI - FRBRdate for FRBRWork contains the date portion of the FRBR URI - Include AKN 3.0 schema and support for validating against the schema - The elements returned by components() are now attachment or component elements, not the inner doc 3.1.1 - FIX issue where a four-digit number in an FRBR URI confuses the parser 3.1.0 - Replace arrow with iso8601, avoiding arrow issue 612 3.0.0 - Python 3.6 and 3.7 support - Drop support for Python 2.x 2.2.0 - FIX don’t mistake numbers in uris with subtypes and numeric numbers as actors - FIX link to GitHub - Unicode literals when parsing FRBR URIs 2.1.0 - FIX don’t strip empty whitespace during objectify.fromstring 2.0.0 - FIX don’t pretty-print XML, it introduces meaningful whitespace 1.0.1 - FIX FrbrUri clone bug when a URI had a language. 1.0.0 - Move table of contents, render and other locale (legal tradition) specific functionality out of Cobalt. - FIX bug that returned the incorrect language when extracting a document’s expression URI. 0.3.2 - Inject original img src as data-src 0.3.1 - Support for i18n in XSLT files, including all 11 South African languages from myconstitution.co.za 0.3.0 - Support for images - Change how XSLT params are passed to the renderer - Add expression_frbr_uri method to Act class 0.2.1 - When rendering HTML, ensure primary container elements and schedules have appropriate ids 0.2.0 - When rendering HTML, scope component/schedule ids to ensure they’re unique 0.1.11 - Render ref elements as HTML a elements - Optionally prepend a resolver URL before a elements 0.1.10 - Convert EOL elements to BR when changing XML to HTML 0.1.9 - Support dates before 1900. Contributed by rkunal. 0.1.8 - lifecycle and identification meta elements now have a configurable source attribute 0.1.7 - TOCElement items now include a best-effort title 0.1.6 - Use HTML5 semantic elements section and article when generating HTML for acts 0.1.5 - FIX use schedule FRBRalias as heading 0.1.4 - Transforming XML to HTML now includes all attributes as data- attributes 0.1.3 - Refactor TOC helpers into own file - Fix .format in FrbrUri 0.1.1 - first release Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/cobalt/
CC-MAIN-2021-04
refinedweb
689
60.31
On Tuesday evening I attended a Django District meetup on Grumpy, a transpiler from Python to Go. Because it was a Python meetup, the talk naturally focused on introducing Go to a Python audience, and because it was a Django meetup, we also focused on web services. The premise for Grumpy, as discussed in the announcing Google blog post, is also a web focused one — to take YouTube’s API that’s primarily written in Python and transpile it to Go to improve the overall performance and stability of YouTube’s front-end services. While still in experimental mode, they show a benchmarking graph in the blog post that shows as the number of threads increases, the number of Grumpy transpiled operations per second also increases linearly, whereas the CPython ops/sec actually decreases to a floor. This is fascinating stuff and actually kind of makes sense; potentially the opportunities for concurrency in Go defeat the GIL in Python and can give Python code deployable scalability. Still, I wanted to know, if it’s faster, how much faster is it? (skip ahead to results) In both the meetup talk and the blog post, the fibonacci benchmark is discussed. Unfortunately, neither had raw numbers and since I wanted to try it out on my own anyway, I thought I would. In this post I’ll review the steps I took to use Grumpy then the benchmarking numbers that I came up with. Getting Started Transpiling Because the package is in experimental mode, you must download or clone the Grumpy repository and do all your work in the project root directory. This is because relative paths and a couple of special environment variables are required in order to make things work. First clone the repository and change your working directory to the project root: $ git clone $ cd grumpy At this point you need to build the grumpy tools and set a couple of environment variables to make things work. $ make $ export GOPATH=$PWD/build $ export PYTHONPATH=$PWD/build/lib/python2.7/site-packages Note that the make process actually took quite a bit of time on my MacBook, so be patient! I also added the export statements to an .env file locally so that I could easily set the environment for this directory in the future. The hello world of Grumpy transpiling is quite simple. First create a python file, hello.py: #!/usr/bin/env python if __name__ == '__main__': print "hello world!" You then transpile it and build a binary executable as follows: $ build/bin/grumpc hello.py > hello.go $ go build -o hello hello.go The first step uses the grumpc transpiler to create Go code from the Python code, and outputs it to the Go source code file, hello.go. The second step uses the go build tool (which requires the $GOPATH to be set correctly) to compile the hello.go program into a binary executable. You can now execute the file directly: $ ./hello hello world! Fibonacci In order to benchmark the code for time I want to compare three executables: - A Python 2.7 implementation with recursion (fib.py) - A pure Go implementation with similar characteristics (fib.go) - The transpiled Python implementation (fibpy.go) Note: Obligatory Py2/3 comment: Grumpy is about making the YouTube API better, which is written in Python 2.7; so tough luck Python 3 folks, I guess. The hypothesis is that the Python implementation will be the slowest, the transpiled one slightly faster and the Go implementation will blaze. For reference, here are my implementations: #!/usr/bin/env python import sys def fib(i): if i < 2: return 1 return fib(i-1) + fib(i-2) if __name__ == '__main__': try: idx = sys.argv[1] print fib(int(idx)) except IndexError: print "please specify a fibonacci index" except ValueError: print "please specify an integer" The Python implementation is compact and understandable, coming in at 14 lines of code. The Go implementation is slightly longer at 24 lines of code: package main import ( "fmt" "os" "strconv" ) func fib(i uint64) uint64 { if i < 2 { return uint64(1) } return fib(i-1) + fib(i-2) } func main() { if len(os.Args) != 2 { fmt.Println("please specify a fibonacci index") os.Exit(1) } idx, err := strconv.ParseUint(os.Args[1], 10, 64) if err != nil { fmt.Println("please specify an integer") os.Exit(1) } fmt.Println(fib(idx)) } In order to transpile the code, build it as follows: $ $ build/bin/grumpc fib.py > fibpy.go $ go build -o fibpy fibpy.go And of course build the go code as well: $ go build -o fib fib.go The transpiled code comes in at a whopping 255 lines of code, so I’ll not show it here, but if you’re interested you can find it at this gist. One interesting thing about Grumpy is it uses a πsymbol for variable names that reference Python, for example, the grumpy package is imported into the namespace πg. So in terms of code, we have the following characteristics: But frankly that’s fair — Grumpy has to do a lot of work to bring over the sys package from Python, handle exceptions in the try/except, handle the builtins and deal with objects and function definitions. I actually think Grumpy is doing pretty well in the translation in terms of LOC. Benchmarking Typically I would use Go benchmarking to measure the performance of an operation — it is both formal and does a good job of doing micro-measurements in terms of number of operations per second. However, I can’t use this technique for the Python code and I want to make sure that we can capture the benchmarks for the complete executable including imports like the sys module. Therefore the benchmarks are timings of complete runs of the executables, the equivalent of: $ time ./fib 40 $ time ./fibpy 40 $ time python fib.py 40 Because the recursive fibonacci implementation does not use memoization or dynamic programming, the computational time increases exponentially as the index gets higher. Therefore the benchmarks are several runs at moderately high indices to push the performance. In order to operationalize this, I wrote a small Python script to execute the benchmarks. You can find the benchmark script on Gist (it is a bit too large to include in this post). NOTE: I hope that I have provided everything needed to repeat these benchmarks. If you find a hole in the methodology or different results, I’d certainly be interested. After the timing benchmarks I also wanted to run resource usage benchmarks. Since the fibonacci implementation currently doesn’t use multiple threads, I can’t compare run times across increasing number of processes (TODO!). Instead, using the memory profiler library I simply measured memory usage. In the results section, I run each process using mprof independently in order to precisely track what is running where. However, using the new multiprocess feature of the memory profiler library you could create a bash script as follows: #!/bin/bash ./fib $1 & ./fibpy $1 & python fib.py $1 & wait And run the memory profiler on each of the processes: $ mprof run -M ./fibmem.sh 40 $ mprof plot This will background each of the processes so that they are plotted as child processes of the main bash script. Unfortunately they are plotted by index, so it’s hard to know which child is which, but I believe that child 0 is the go implementation, child 1 is the transpiled implementation, and child 2 is the Python implementation. Ok, so after that long description of methods, let’s get into findings. Results For 20 runs of each executable for fibonacci arguments 25, 30, 35, and 40, I recorded the following average times for the various executables shown in the next figure. Note that the amount of time for the next argument increases exponentially, opening up the performance gap between executables. Unsurprisingly, the pure Go implementation was blazing fast, about 42 times faster than the Python implementation on average. The real surprise, however, is that the transpiled Go was actually 1.5 times slower than the Python implementation. I actually cannot explain why this might be — I’m hugely curious if anyone has an answer. In order to give a clearer picture, here are the log scaled results with a fifth timing for the 45th fibonacci number computation: In order to track memory usage, I used mprof to track memory for each executable ran independently in it’s own process, here are the results: And so that you can actually see the pure Go implementation as well as memory usage initialization and start up, here is a zoomed in version to the first few milliseconds of execution: The memory usage profiling reveals yet another surprise, not only does the transpiled version take longer to execute, but it also uses more memory. Meanwhile, the pure go implementation is so lightweight as to blow away with a stiff breeze. Conclusions Transpiling is hard. Grumpy is still only experimental, and there does seem to be some real promise particularly with concurrency gains. However, I’m not sold on transpiling as an approach to squeezing more performance out of a system.
https://bbengfort.github.io/2017/03/grumpy-transpiling-fib-benchmark/
CC-MAIN-2021-17
refinedweb
1,530
61.67
Some people may be using jOOQ with Groovy for easy scripting. As with the existing jOOQ / Scala integration, some Groovy language features can be leveraged. Take the following example, for instance::~/scala-test', 'sa', '', 'org.h2.Driver') a = T_AUTHOR.as("a") b =) Groovy is not such a typesafe language. When we miss the .on() clause in the above query, Groovy’s Eclipse IDE integration would indicate that the subsequent call to fetchInto() might not work at run time. But Groovy cannot be sure, just as much as the getValue() calls cannot be guaranteed to work in Groovy’s interpretation of what closures are. This is how Eclipse displays the above code: What’s also interesting to see is that Groovy cannot infer the SAM (Single Abstract Method) interface type that would best match the fetchInto()call. We have to explicitly tell Groovy to coerce the closure to a jOOQRecordHandler, and inside that RecordHandler, we cannot access the well-known type of r, which would be: Record3<String, String, String> Using jOOQ with Groovy is certainly possible, but also not as powerful as with Scala or with Java 8. Alternative ways of writing SQL with Groovy Apart from using a SQL query builder like jOOQ (obviously, as this is thejOOQ blog, or a syndication thereof), you can also use other ways of writing SQL in Groovy. The standard way is to use Groovy’s own SQL support, which is a lot more convenient string-based approach than JDBC directly. In fact, Groovy SQL is how JDBC should have been implemented in the first place: import groovy.sql.Sql sql = Sql.newInstance( 'jdbc:h2:~/scala-test', 'sa', '', 'org.h2.Driver') sql.eachRow('select * from t_author') { println "${it.first_name} ${it.last_name}" } Another, interesting approach is to leverage Groovy’s advanced internal DSL capabilities. Here’s an example by Ilya Sterin where he created a DSL for SQL creation in Groovy Select select = sql.select ("table1") { join("table2", type: "INNER") { using(table1: "col1", table2: "col1") } join("table3", type: "OUTER") { using(table1: "col2", table2: "col2") using(table1: "col3", table2: "col3") } where("table1.col1 = 'test'") groupBy(table1: "col1", table2: "col1") orderBy(table1: "col1", table2: "col1") } Read the full blog post here: {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/using-jooq-groovy
CC-MAIN-2017-51
refinedweb
377
63.09
Introduction In a JavaScript web application, a router is the part that syncs the currently displayed view with the browser address bar content. In other words, it’s the part that makes the URL change when you click something in the page, and helps to show the correct view when you hit a specific URL. Traditionally the Web is built around URLs. When you hit a certain URL, a specific page is displayed. With the introduction of applications that run inside the browser and change what the user sees, many applications broke this interaction, and you had to manually update the URL with the browser’s History API. You need a router when you need to sync URLs to views in your app. It’s a very common need, and all the major modern frameworks now allow you to manage routing. The Vue Router library is the way to go for Vue.js applications. Vue does not enforce the use of this library. You can use whatever generic routing library you want, or also create your own History API integration, but the benefit of using Vue Router is that it’s official. This means it’s maintained by the same people who maintain Vue, so you get a more consistent integration in the framework, and the guarantee that it’s always going to be compatible in the future, no matter what. Installation Vue Router is available via npm with the package named vue-router. If you use Vue via a script tag, you can include Vue Router using <script src=""></script> unpkg.com is a very handy tool that makes every npm package available in the browser with a simple link If you use the Vue CLI, install it using npm install vue-router Once you install vue-router and make it available either using a script tag or via Vue CLI, you can now import it in your app. You import it after vue, and you call Vue.use(VueRouter) to install it inside the app: import Vue from 'vue' import VueRouter from 'vue-router' Vue.use(VueRouter) After you call Vue.use() passing the router object, in any component of the app you have access to these objects: this.$routeris the router object this.$routeis the current route object The router object The router object, accessed using this.$router from any component when the Vue Router is installed in the root Vue component, offers many nice features. We can make the app navigate to a new route using this.$router.push() this.$router.replace() this.$router.go() which resemble the pushState, replaceState and go methods of the History API. push() is used to go to a new route, adding a new item to the browser history. replace() is the same, except it does not push a new state to the history. Usage samples: this.$router.push('about') //named route, see later this.$router.push({ path: 'about' }) this.$router.push({ path: 'post', query: { post_slug: 'hello-world' } }) //using query parameters (post?post_slug=hello-world) this.$router.replace({ path: 'about' }) go() goes back and forth, accepting a number that can be positive or negative to go back in the history: this.$router.go(-1) //go back 1 step this.$router.go(1) //go forward 1 step Defining the routes I’m using a Vue Single File Component in this example. In the template I use a nav tag that has 3 router-link components, which have a label (Home/Login/About) and a URL assigned through the to attribute. The router-view component is where the Vue Router will put the content that matches the current URL. <template> <div id="app"> <nav> <router-linkHome</router-link> <router-linkLogin</router-link> <router-linkAbout</router-link> </nav> <router-view></router-view> </div> </template> A router-link component renders an a tag by default (you can change that). Every time the route changes, either by clicking a link or by changing the URL, a router-link-active class is added to the element that refers to the active route, allowing you to style it. In the JavaScript part we first include and install the router, then we define 3 route components. We pass them to the initialization of the router object, and we pass this object to the Vue root instance. Here’s the code: <script> import Vue from 'vue' import VueRouter from 'vue-router' Vue.use(VueRouter) const Home = { template: '<div>Home</div>' } const Login = { template: '<div>Login</div>' } const About = { template: '<div>About</div>' } const router = new VueRouter({ routes: [ { path: '/', component: Home }, { path: '/login', component: Login }, { path: '/about', component: About } ] }) new Vue({ router }).$mount('#app') </script> new Vue({ router: router }).$mount('#app') See in the example, we pass a routes array to the VueRouter constructor. Each route in this array has a path and component params. If you pass a name param too, you have a named route. Using named routes to pass parameters to the router push and replace methods Remember how we used the Router object to push a new state before? this.$router.push({ path: 'about' }) With a named route we can pass parameters to the new route: this.$router.push({ name: 'post', params: { post_slug: 'hello-world' } }) the same goes for replace(): this.$router.replace({ name: 'post', params: { post_slug: 'hello-world' } }) What happens when a user clicks a router-link The application will render the route component that matches the URL passed to the link. The new route component that handles the URL is instantiated and its guards called, and the old route component will be destroyed. Route guards Since we mentioned guards, let’s introduce them. You can think of them of life cycle hooks or middleware, those are functions called at specific times during the execution of the application. You can jump in and alter the execution of a route, redirecting or cancelling the request. You can have global guards by adding a callback to the beforeEach() and afterEach() property of the router. beforeEach()is called before the navigation is confirmed beforeResolve()is called when beforeEach is executed and all the components beforeRouterEnterand beforeRouteUpdateguards are called, but before the navigation is confirmed. The final check, if you want afterEach()is called after the navigation is confirmed What does “the navigation is confirmed” mean? We’ll see it in a second. In the meantime think of it as “the app can go to that route”. The usage is: this.$router.beforeEach((to, from, next) => { // ... }) this.$router.afterEach((to, from) => { // ... }) to and from represent the route objects that we go to and from. beforeEach has an additional parameter next which if we call with false as the parameter, will block the navigation, and cause it to be unconfirmed. Like in Node middleware, if you’re familiar, next() should always be called otherwise execution will get stuck. Single route components also have guards: beforeRouteEnter(from, to, next)is called before the current route is confirmed beforeRouteUpdate(from, to, next)is called when the route changes but the component that manages it is still the same (with dynamic routing, see next) beforeRouteLeave(from, to, next)is called when we move away from here We mentioned navigation. To determine if the navigation to a route is confirmed, Vue Router performs some checks: - it calls beforeRouteLeaveguard in the current component(s) - it calls the router beforeEach()guard - it calls the beforeRouteUpdate()in any component that needs to be reused, if any exist - it calls the beforeEnter()guard on the route object (I didn’t mention it but you can look here) - it calls the beforeRouterEnter()in the component that we should enter into - it calls the router beforeResolve()guard - if all was fine, the navigation is confirmed! - it calls the router afterEach()guard You can use the route-specific guards ( beforeRouteEnter and beforeRouteUpdate in case of dynamic routing) as life cycle hooks, so you can start data fetching requests for example. Dynamic routing The example above shows a different view based on the URL, handling the /, /login and /about routes. A very common need is to handle dynamic routes, like having all posts under /post/, each with the slug name: /post/first /post/hello-world You can achieve this using a dynamic segment. Those were static segments: const router = new VueRouter({ routes: [ { path: '/', component: Home }, { path: '/login', component: Login }, { path: '/about', component: About } ] }) we add in a dynamic segment to handle blog posts: const router = new VueRouter({ routes: [ { path: '/', component: Home }, { path: '/post/:post_slug', component: Post }, { path: '/login', component: Login }, { path: '/about', component: About } ] }) Notice the :post_slug syntax. This means that you can use any string, and that will be mapped to the post_slug placeholder. You’re not limited to this kind of syntax. Vue relies on this library to parse dynamic routes, and you can go wild with Regular Expressions. Now inside the Post route component we can reference the route using $route, and the post slug using $route.params.post_slug: const Post = { template: '<div>Post: {{ $route.params.post_slug }}</div>' } We can use this parameter to load the contents from the backend. You can have as many dynamic segments as you want, in the same URL: Remember when before we talked about what happens when a user navigates to a new route? In the case of dynamic routes, what happens is a little different. Vue to be more efficient instead of destroying the current route component and re-instantiating it, it reuses the current instance. When this happens, Vue calls the beforeRouteUpdate life cycle event. There you can perform any operation you need: const Post = { template: '<div>Post: {{ $route.params.post_slug }}</div>' beforeRouteUpdate(to, from, next) { console.log(`Updating slug from ${from} to ${to}`) next() //make sure you always call next() } } Using props In the examples, I used $route.params.* to access the route data. A component should not be so tightly coupled with the router, and instead, we can use props: const Post = { props: ['post_slug'], template: '<div>Post: {{ post_slug }}</div>' } const router = new VueRouter({ routes: [ { path: '/post/:post_slug', component: Post, props: true } ] }) Notice the props: true passed to the route object to enable this functionality. Nested routes Before I mentioned that you can have as many dynamic segments as you want, in the same URL, like: So, say we have an Author component taking care of the first dynamic segment: <template> <div id="app"> <router-view></router-view> </div> </template> <script> import Vue from 'vue' import VueRouter from 'vue-router' Vue.use(VueRouter) const Author = { template: '<div>Author: {{ $route.params.author}}</div>' } const router = new VueRouter({ routes: [ { path: '/post/:author', component: Author } ] }) new Vue({ router }).$mount('#app') </script> We can insert a second router-view component instance inside the Author template: const Author = { template: '<div>Author: {{ $route.params.author}}<router-view></router-view></div>' } we add the Post component: const Post = { template: '<div>Post: {{ $route.params.post_slug }}</div>' } and then we’ll inject the inner dynamic route in the VueRouter configuration: const router = new VueRouter({ routes: [{ path: '/post/:author', component: Author, children: [ path: ':post_slug', component: Post ] }] }) Download my free Vue Handbook! Check out my Web Development Bootcamp. Next cohort is in April 2022, join the waiting list!
https://flaviocopes.com/vue-router/
CC-MAIN-2021-49
refinedweb
1,856
61.06
all about I/O ------------- computers aren't useful unless we can put data into them, and get results out. input -- data to computer output -- data from computer computer model: ----- -------- |CPU| <----> |memory| ----- ^ -------- | | \ / ----- |i/o| ----- examples of input devices: keyboard, mouse, network, disk, ?? examples of output devices: printer, (terminal) display, network, ?? simulator has only 2 I/O devices, keyboard (for input) display (for output) ISSUES THAT MUST BE SOLVED: programmer interface -- tools give get_ch, put_ch, put_str these are actually OS implemented procedures. (The OS is the program that interfaces between the programmer or application user and the actual hardware. For us, it is NT.) protection issues -- in a real system, there could be more than one terminal (terminal is a keyboard and display together) Should one user be able to display characters on another's display? Lock up another's keyboard? Send a file of infinite length to the printer, effectively shutting all others out? In practice, the OS's job is "resource management," allocating all portions of the processor. Examples of resources are the CPU and all I/O devices. physical issues -- A computer today (1998) can complete an instruction at the rate of about 1 each nsec. Unfortunately, typical I/O devices are much slower, often requiring 10s of milliseconds to deal with a single character. That is approx. 1 million times slower! This situation is dubbed the "access gap." disk - a real, live, phisical device ------------------------------------ Vocabulary, to form a picture of a disk (ch 13, p260) PLATTER -- sort of like a phonograph record or CD. data is stored on a SURFACE of a platter. all platters are tied together and rotate around the SPINDLE at a fixed speed. each surface has one or more READ/WRITE HEADS. Platters are broken down into TRACKS. A single track is one of many concentric circles on the platter. All the corresponding tracks on all surfaces, taken together, form a CYLINDER. Each track is broken down into SECTORS. How we read/write to a sector. Given: the sector position on the cylinder. (looked up in a table, or calculated from the disk address). -- the disk is spinning. -- the read/write head is moving to the correct cylinder (track). THIS TAKES A LONG TIME RELATIVE TO THE OTHER STUFF. It is the physical movement, acceleration, etc. comes into play. This is SEEK time. -- once the read/write head is over the correct cylinder, there is bound to be some time to wait until the correct sector is under the head. This is ROTATIONAL LATENCY. -- Even at the correct sector, it still takes some time for the data to be read/written. This is the READ or WRITE time. time to read a sector = seek time + rotate time + read time. So, the nitty gritty issue is: how does the OS accomplish I/O requests? There are 2 possibilities. 1. have special I/O instructions -- input need to know which device, how much data, where the data is to go -- output need to know which device, how much data, where the data currently is How does the processor know that the instruction has completed? (Is there any need to wait?) What happens if the device encounters an error? (Does this halt the computer?) 2. the solution of choice overload memory locations to use as communication channels. for example, address 0x0000 0000 -| . | real memory . | . | 0xffff 0000 -| 0xffff 0008 - data from keyboard (Keyboard_Data) 0xffff 0010 - data to display (Display_Data) then, by reading (loading) from location 0xffff0008, data is requested from the keyboard then, by writing (storing) to location 0xffff0010, data is sent to the display the syscall code in the OS must be (in essence) mov eax, Keyboard_Data # get_ch syscall return from syscall and mov Display_Data, eax # put_ch syscall return from syscall This method of I/O is called MEMORY-MAPPED I/O. Problems with memory-mapped I/O as currently given: -- get_ch presumably returns once a character has been typed. What happens if the user does not type a character? Types it on the wrong keyboard? Goes to get a drink of water? What happens to the data if the user types 2 characters before get_ch has been called? How does the computer know if a character has been typed? -- put_ch and put_str: how does the computer know that the device is ready to print out a second character? What if the printer jams? (printers and terminals are SLOW!) What is needed is a way to convey information about the STATUS of I/O devices. This status information is used to coordinate and SYNCHRONIZE the useage of devices. address 0x0000 0000 -| . | real memory . | . | 0xffff 0000 -| 0xffff 0008 - data from keyboard (Keyboard_Data) 0xffff 000c - STATUS from keyboard (Keyboard_Status) 0xffff 0010 - data to display (Display_Data) 0xffff 0014 - STATUS from display (Display_Status) assume that the MSB is used to tell the status of a device. MSB = 1 means device ready MSB = 0 means device is busy note that we can check for device ready/busy by looking to see if the Status word is negative (2's comp) or not. for the keyboard, a 1 means that a character has been typed a 0 means that no character is available for the display, a 1 means that a new character may be sent a 0 means that the device is still disposing of a previous character Then, the code in the OS must be more like keyboard_wait: ; for get_ch test Keyboard_Status, 80000000h jz keyboard_wait mov eax, Keyboard_Data and display_wait: ; for put_ch test Display_Status, 80000000h jz display_wait mov Display_Data, eax This scheme is known as BUSY WAITING, or SPIN WAITING. The little loop is called a SPIN WAIT LOOP. Something that is not well explained (at this level) is how these status bits get set and cleared. The spin wait loop reads the status word, but does not change it. The device (its CONTROLLER) sets and clears the bit. An implied fuction is that the device sets the bit when it becomes ready to work on another character. AND, a mov from Keyboard_Data also clears the MSB of Keyboard_Status AND, a mov to Display_Data also clears the MSB of Display_Status PROBLEMS with this programmed I/O approach: -- much time is wasted spin waiting. if it takes 100 instructions to program this, and each instruction takes 20ns to execute, then it takes 100 * 20nsec = 2000nsec = 2 usec to execute this code if a device takes 2msec (=2000usec) to deal with one character, then the percent of time spent waiting is time waiting 2000us ------------ = --------------- = .999 = 99.9% total time 2000us + 2usec We'd like a solution that spent less time "doing nothing" -- if (somehow) a second key is pressed before the program does a get_ch, the first key pressed is lost. There is only one character's worth of storage. This problem is actually a "Catch-22." The keyboard_wait code has to be run often enough that no characters are lost, but executing this code spin waits until a character is pressed. The system could do nothing but wait around for characters! Some problems are solved by the use of queues (buffers). The check for device ready is separated from the sending and receiving of characters. Code for this is in the text, pages 265 and 266. putnextchar: print a character if there is one in the queue, and the device is ready (done by OS periodically) printstring: put character(s) in queue and return (called by user program) getnextchar: get a character and place in a queue, if one is waiting at the keyboard (done by OS periodically) getstring: get character from queue (if available) and return; if queue is empty, spin wait until there is a character. (called by user program) Some difficulties are caused by this situation: -- OS must call getnextchar regularly and often so as not to lose characters. -- What happens if the queue(s) become full? Are characters lost? -- OS must call putnextchar regularly to empty out the queue.
http://pages.cs.wisc.edu/~smoler/x86text/lect.notes/io.html
crawl-003
refinedweb
1,321
64
Board index » lisp All times are UTC yes, but in a somewhat different environment. the executable for C++ code is a function that resides on disk and which is loaded into memory when you call it. it then runs to completion and you're back at the operating system prompt or whatever. in Emacs, the loading is separated from the calling. you load the file into Emacs, and then you call the function. however, since the normal action when you type into Emacs is to insert the characters into a buffer, you may have to do something special to call the function. one option is to be in the *SCRATCH* buffer, which is in LISP-INTERACTION mode. (the capitalization here is not to be taken literally, but as differentiating names from the text.) here you can type expressions, but when you type C-j (LF) after a form, it gets evaluated. (this is like pressing return (C-m, CR) in a normal command-line interaction.) another option is to call the function EVAL-EXPRESSION through the keybinding M-: and give it the form to evaluate. yet another option is to make the function "interactive", which means that it may be bound to a key or called with M-x, and reads its own arguments from the user. Emacs Lisp is a Lisp environment, like Windows is a C++ environment and Unix/Linux is a C environment. when you wish to mix environments, it is very instructive to see their similarities relative to their natural habitat, not try to force them to be similar in the natural habitat of the other. e.g., the fact that you start your computer and it loads the operating system and finds which functions are available is analogous to starting Emacs and loading the files you need to call your functions. typically, therefore, you ask Emacs to load all your files for you in .emacs, which Emacs loads automatically, and then you call them. to compile Emacs Lisp files, use BYTE-COMPILE-FILE. other Lisps have something very similar. to load the compiled files, use LOAD. (for a more Emacs-specific response and a less general Lisp response, you might try comp.emacs.) #:Erik 1. Executing Emacs Lisp code from Common Lisp? 2. Using modula-2.el under Emacs : compiling and linking : help 3. mzscheme and emacs M-x compile 4. Use gnat and emacs(Ada-mode) can not compile file :( 5. Compile with ALDEC from EMACS 6. Compile-mode for VHDL Compiler for emacs-19.19 7. Correction to APL keywords emacs lisp code 8. Emacs Lisp --> Scheme transformation 9. EMACS and Lisp 10. Lisp vs. Scheme Emacs 11. Lisp vs. Scheme Emacs 12. Lisp and scheme on Linux/BSD/Emacs
http://computer-programming-forum.com/50-lisp/03876ab93f6ac75a.htm
CC-MAIN-2019-13
refinedweb
457
66.54
Hi, The header needs to contain the following “Authorization”: “Bearer TokenGoesHere”. The problem i am facing is when i set a custom header I am not able to set the Proper multipart/form-data request, Can some help with this ? If don’t set the custom header K6 properly generates the header but header wont contain the a Authorization token so the request is getting rejected. Thank you in advance Multipart Form data with Header containing Authorization Token Hi, Can you elaborate and give an abridged example that demonstrates the problem? k6 has some issues with multipart/form-data requests (), but if you use the file workaround suggested in the issue, it should work and the Authorization: Bearer ... header shouldn’t be affected by it. Access token is got from a third party site Here is the code I used const header = { "headers": { "Authorization": "Bearer "+accessToken, } } const file = open(filePath, "b"); const tempFile = http.file(file); const uploadData = { file: tempFile }; http.post("URL", uploadData); *In this request the proper multi-part form data header is set but this will not work for use since we need the header to contain the token. Like the one set in the variable header we need some thing like http.post(“URL”, uploadData,header ); * In the request the token is set but the other value like the boundary is not set.I want a way to auto set the value or manually set the value in header like “boundary” and all other that is required multi part form header with the token or add the token to the auto generated header by K6 import http from "k6/http"; export default function () { const params = { "headers": { "Authorization": "Bearer Something", } }; let uploadData = {"one": 2, "else": "some", file: http.file("file data", "file.txt")}; let res = http.post("", uploadData, params); console.log(JSON.stringify(res, null, "\t")) } Works as you would want … I don’t know why it isn’t working for you … Can you make a script using httpbin.org that shows how when you set the Authorization header it isn’t correctly setting the form-data one ?
https://community.k6.io/t/multipart-form-data-with-header-containing-authorization-token/258
CC-MAIN-2019-51
refinedweb
351
52.29
) I found this very silly too. There is no deselect command. So I made a plugin that deselects. I set the plugin to work when there is text selected, if I press enter, it deselects. This made sense for me because when I have text selected, I never need to press enter anyway. Though you can set it to esc. Here's the plugin: import sublime, sublime_plugin class DeselectCommand(sublime_plugin.TextCommand): def run(self, edit): end = self.view.sel()[0].b pt = sublime.Region(end, end) self.view.sel().clear() self.view.sel().add(pt) Add this to your User Keybindings: { "keys": "escape"], "command": "deselect", "context": { "key": "selection_empty", "operator": "equal", "operand": false, "match_all": true } ] } Command+U (or Edit/Undo Selection) will undo the Select All and return you to the previous state. COD312: Another way to shrink a selection down to one end or the other is to press the left or right arrow key Thanks, but I find that using enter is faster because I use incremental find to navigate to editor, so when got to where I want, I don't want to move my hand to the arrow keys... This is really great! Thank you. I wonder if it's possible to combine both your plugin and the Command+U technique, so that we can rebind "escape" (or "enter") to "Undo Selection" (Command+U) in the context of Select All? In other words, maybe a plugin isn't necessary--just a keybinding. What do you think? This does the trick for me: { "keys": "escape"], "command": "soft_undo", "context": { "key": "selection_empty", "operator": "equal", "operand": false, "match_all": true } ] } Thanks again for your help (both!). C0D312,Your initial deselect plugin is perfect. It should be part of the default program configuration. Thank you very much. I was having a huge issue with not having a deselect hotkey as well, I don't mind clicking off the selection when it's small but having to go through the menu to do a soft undo after using select all is a pain. The deselect plugin is perfect using the Esc key, much appreciated! Also a great idea, but I think I prefer the plugin route. Thanks all!
https://forum.sublimetext.com/t/select-all-to-view-invisible-characters/3927/8
CC-MAIN-2016-40
refinedweb
364
65.12
Op 2005-11-03, Steven D'Aprano schreef <steve at REMOVETHIScyber.com.au>: > On Thu, 03 Nov 2005 04:30:09 -0800, Paul Rubin wrote: > >> Steve Holden <steve at holdenweb.com> writes: >>> > class A: >>> > a = 1 >>> > b = A() >>> > b.a += 2 >>> > print b.a >>> > print A.a >>> > Which results in >>> > 3 >>> > 1 >>> > >>> I don't suppose you'd care to enlighten us on what you'd regard as the >>> superior outcome? >> >> class A: >> a = [] >> b = A() >> b.append(3) >> print b.a >> print a.a >> >> Compare and contrast. > > > I take it then that you believe that ints like 1 should be mutable like > lists? Because that is what the suggested behaviour implies. No it isn't. One other way, to implement the += and likewise operators would be something like the following. Assume a getnsattr, which would work like getattr, but would also return the namespace where the name was found. The implementation of b.a += 2 could then be something like: ns, t = getnsattr(b, 'a') t = t + 2 setattr(ns, 'a') I'm not arguing that this is how it should be implemented. Just showing the implication doesn't follow. -- Antoon Pardon
https://mail.python.org/pipermail/python-list/2005-November/314835.html
CC-MAIN-2014-15
refinedweb
193
78.85
Source cpython_sandbox / Doc / library / types.rst :mod:`types` --- Dynamic type creation and names for built-in types Source code: :source:`Lib/types.py` This module defines utility function to assist in dynamic creation of new types. It also defines names for some object types that are used by the standard Python interpreter, but not exposed as builtins like :class:`int` or :class:`str` are. Dynamic Type Creation Standard Interpreter Types This module provides names for many of the types that are required to implement a Python interpreter. It deliberately avoids including some of the types that arise only incidentally during processing such as the listiterator type. Typical use of these names is for :func:`isinstance` or :func:`issubclass` checks. Standard names are defined for the following types: Read-only proxy of a mapping. It provides a dynamic view on the mapping's entries, which means that when the mapping changes, the view reflects these changes. A simple :class:`object` subclass that provides attribute access to its namespace, as well as a meaningful repr. Unlike :class:pace may be useful as a replacement for class NS: pass. However, for a structured record type use :func:`~collections.namedtuple` instead.
https://bitbucket.org/ncoghlan/cpython_sandbox/src/ae7fef62b462/Doc/library/types.rst
CC-MAIN-2015-18
refinedweb
197
56.15
It looks like you're new here. If you want to get involved, click one of these buttons! I I solved it by writing a small wrapper Java Class with Queue as a library import java.io.File; import java.util.ArrayList; import java.util.List; import org.broadinstitute.sting.gatk.walkers.bqsr.BQSRGatherer; public class BQSRGathererMain { /* * args: outputFile inputFiles */ public static void main(String[] args) { File output = new File(args[0]); List<File> inputs = new ArrayList<File>(); for(int i=1;i<args.length;i++) { inputs.add(new File(args[i])); } BQSRGatherer bQSRGatherer = new BQSRGatherer(); bQSRGatherer.gather(inputs, output); } } Answers Hi Matt, BaseRecalibrator is harder to parallelize because as you correctly read in the documentation, it works much better if it has access to the whole genome data. The recalibration model depends on having a lot of observations; if you reduce the number of observations you can dramatically reduce the effectiveness of the tool. The new parallelism document will include our best recommendations for parallelizing all of our major tools including BaseRecalibrator. As for its ETA, it is indeed due to come out soon; it got a little bit delayed because we've been working hard on preparing the workshop (which we're in the middle of right now), but it'll be out next week for sure. Geraldine Van der Auwera, PhD Many thanks Geraldine, I'll look out for the new parallelism guide. Matt Hi Geraldine, I've been looking at the new parallelism primer and accompanying guide on specific usage recommendations for various GATK tools. From these guides and another response by Eric Banks to a similar post (), it looks like it should be possible to run a scatter-gather approach and I'd be interested in any further suggestions for how to go about this. For example, can I divide the reference genome into a number of segments and run the BaseRecalibrator separately on each one? Would I then apply the calibration table using PrintReads to that segment or would I be able to somehow combine the calibration data into a single calibration table to apply to the whole genome? To avoid the problem of only having a few observations for the smaller unplaced/unlocalized contigs, should I instead divide the genome into roughly equal-size segments and would it matter if those spanned across chromosome boundaries? Or have I got it completely wrong and there is another way of applying the scatter-gather approach that doesn't involve chunking the reference genome? Thanks in advance, Matt Hi Matt, With Queue, the scattering is decided for you with the @PartitionBy(PartitionType.READ) annotation which says that each read can be processed individually. Without Queue though, since BaseRecalibrator is a read walker, it can only be scattered by contig so that reads aren't counted more than once. Geraldine Van der Auwera, PhD Many thanks Geraldine and Mauricio, this is very helpful. As you may have gathered, the reason I'm asking is because we're using another workflow management system. It looks like the source code for the relevant classes (BQSRGatherer, RecalUtils, etc.) are available from the GitHub site so I'll take a look and see if I can work out how to get this defined as a task in our workflow system. Of course, any help or suggestions on how to run this standalone (separate from Queue) would be gratefully received. Matt Matt (or anyone else), were you able to get BQSRGatherer working with your workflow management system? We're in the same boat and any tips would be appreciated. Thanks! Hi egafni, I'm afraid other priorities have got in the way of making any further progress with this but I still intend to return to it when I get the chance. Matt I solved it by writing a small wrapper Java Class with Queue as a library Thanks for sharing your solution, @mmoisse. Geraldine Van der Auwera, PhD Is the recalibrated base quality in BQSR bam files could only be read by GATK tools? Hi @srivas31, I'm not sure exactly what you are asking. Once you complete the recalibration process, the recalibrated base qualities replace the original qualities, and are readable by any program that reads valid BAM files. Geraldine Van der Auwera, PhD Hi, I'm testing the solution of mmoisse for scatter-gathering the BQSR procedure, following these steps: - Run for Gene Panel targets on each chromosome separately, for all samples on a MiSeq run together (multiple -I params) : -T baseRecalibrator -nct 8 -I Sample1.bam -I Sample2.bam -I SampleXX(1-15).bam -o recal.chrX.grp -L Targets.chrX.bed However, I get different values compared to running the baseRecalibrator on the full panel.bed file (all chromosomes). Is it normal to have different number of observations after scatter-gather compared to the full run? I have added a small subset of values below: => FULL TARGET BED FILE ReadGroup EventType EmpiricalQuality EstimatedQReported Observations Errors VSD-550-1_S9.L001.1 M 27.0000 29.0518 226637266 421742.43 VSD-550-1_S9.L001.1 I 44.0000 45.0000 226637266 10075.27 VSD-550-1_S9.L001.1 D 38.0000 45.0000 226637266 37670.18 VDP-480-1_S3.L001.1 M 27.0000 29.0699 252989865 459948.36 VDP-480-1_S3.L001.1 I 47.0000 45.0000 252989865 4789.00 VDP-480-1_S3.L001.1 D 38.0000 45.0000 252989865 41935.60 => Per Chr BQSR + JOIN GRP ReadGroup EventType EmpiricalQuality EstimatedQReported Observations Errors VSD-550-1_S9.L001.1 M 27.0000 29.1196 245566293 450173.53 VSD-550-1_S9.L001.1 I 46.0000 45.0000 245566293 6663.20 VSD-550-1_S9.L001.1 D 38.0000 45.0000 245566293 43416.70 VDP-480-1_S3.L001.1 M 27.0000 29.1318 241318168 440363.72 VDP-480-1_S3.L001.1 I 45.0000 45.0000 241318168 8093.53 VDP-480-1_S3.L001.1 D 38.0000 45.0000 241318168 42638.13 There are 15 samples, each in a readgroup in the test panel, For some RGs, i see more observations in the full vs joined run, for other readgroups, it's the other way around. I use GATK 2.4.9-g532efad Hi Geert, Unfortunately I can't explain these big differences in the number of observations. We'd expect some differences because of downsampling, but the numbers should be closer than what you're seeing. We'll do a couple of tests internally to see if we can reproduce this behavior. I'll keep you posted on what we find. Geraldine Van der Auwera, PhD Hi, Thanks for the reply. If this is indeed related to downsampling, it might be important to mention that the average base coverage is about 1200x for our dataset. I will rerun the analysis (split+ join vs full BQSR) without downsampling (dt=NONE and dfrac=1) and report the results here. best, geert OK, let me know how it goes. FYI we've run some checks on our end using Queue and we don't see any differences in the observation counts. Maybe some are getting lost when you join the grp tables? Geraldine Van der Auwera, PhD These are the results for running BQSR with "-dt NONE' option. The difference seems to be smaller, but it's still there. Should the java snippet from above run the exact same merging procedure as Queue? Or can I provide extra arguments somehow? => JOINED VSD-550-1_S9.L001.1 M 27.0000 29.0632 225951155 418512.76 VSD-550-1_S9.L001.1 I 43.0000 45.0000 225951155 12704.96 VSD-550-1_S9.L001.1 D 38.0000 45.0000 225951155 36649.58 VDP-480-1_S3.L001.1 M 27.0000 29.1282 236898453 444426.25 VDP-480-1_S3.L001.1 I 46.0000 45.0000 236898453 5473.07 VDP-480-1_S3.L001.1 D 37.0000 45.0000 236898453 44294.48 => FULL VSD-550-1_S9.L001.1 M 27.0000 29.0518 226636962 421742.43 VSD-550-1_S9.L001.1 I 44.0000 45.0000 226636962 10075.27 VSD-550-1_S9.L001.1 D 38.0000 45.0000 226636962 37670.18 VDP-480-1_S3.L001.1 M 27.0000 29.0699 252989713 459948.70 VDP-480-1_S3.L001.1 I 47.0000 45.0000 252989713 4789.00 VDP-480-1_S3.L001.1 D 38.0000 45.0000 252989713 41935.60 I'm also getting some quirky behavior using @mmoisse's script If you ask it to combine a single recal file, it produces a "combined" recal file with significant differences! Has anyone identified a solution to this issue? I updated my version of GATK, and things got better for the most part. when a diff a file that I run gather on, the output is still non-identical, at least on a tiny dataset. The difference is 1 integer in 1 row (column is empirical quality). I finally got around to using BQSRGatherer to merge recalibration tables and am happy to say that it is working for me. This is using a different workflow management system, not GATK-Queue. I'm running GATK 2.4.9 on human data sets using the version 2.5 resource bundle (hg19). following the recommendations for known sites in, and only get very minor differences (second decimal place) in values in the Errors columns. Many thanks to Mauricio Carneiro and mmoisse for pointing me in the right direction. Matt Good to hear, thanks for reporting back on your results. WIll pass on your thanks to @Carneiro, who is always happy to be helpful Geraldine Van der Auwera, PhD
http://gatkforums.broadinstitute.org/gatk/discussion/1919/parallelizing-base-quality-score-recalibration
CC-MAIN-2016-22
refinedweb
1,603
68.77
While supporting NDepend, we got some interesting user feedback about the understanding on how dependencies are inferred. As shown below, sometime, the Dependency Matrix displays some cells weighted with 0. The first reaction is, what the hell is a dependency of weight 0? On this screenshot, the Weight on Cells option is set to: Direct: # members. It means that if N members of a type/namespace/assembly are used by M methods of another type/namespace/assembly, then the corresponding blue cell will have a weight of N and its symmetric green cell a weight of M. If you look carefully at the Information Panel while pointing the 0's cell with the mouse, NDepend tells that there are no members of NamespaceUser4 using any members of NamespaceUsed, but still, there is a dependency between the 2 namespaces. This situation is unexpected! The explanation is that a type of the namespace NamespaceUser4 is using a type of the namespace NamespaceUsed without using any members. This is actually possible in many situations exposed in the code excerpt below, by passing an argument, declaring a field, implementing an interface, closing a generic type but also by tagging with an attribute, catching an exception... namespace NamespaceUsed { interface IInterface { void Method(); } } namespace NamespaceUser1 { class Foo { void Method(NamespaceUsed.IInterface arg) {} } } namespace NamespaceUser2 { class Foo { private NamespaceUsed.IInterface m_Field; } } namespace NamespaceUser3 { class Foo : NamespaceUsed.IInterface { public void Method() {} } } namespace NamespaceUser4 { class Foo<T> { } class Foo2 : Foo<NamespaceUsed.IInterface> {} } If we switch the option Weight on Cells to Direct: # types, there is no more 0's cell and we can see that some types are indeed using some others. Actually, when Weight on Cells is set to something else than Direct: # members/methods/fields it is not possible to have 0's cells because the way the Common Type System is made, it is not possible to have a dependency not involving some types. Another Issue with const values Sometime, NDepend won't report something that user considers as a dependency between 2 code elements. This case can happen when there are some enumerations values or some const fields. For example:: namespace NamespaceUsed { enum Enum { Val1 } class Constants { internal const string CONST_VALUE = "hello"; } } namespace NamespaceUser1 { class Foo { void Method() { int i = (int)NamespaceUsed.Enum.Val1; } } } namespace NamespaceUser2 { class Foo { void Method() { string s = NamespaceUsed.Constants.CONST_VALUE; } } } This screenshot shows that NDepend didn't infer a dependency between the namespaces! It is actually not a bug but a consequence of a C#/VB.NET compiler optimization. I quote the master Jeffrey Richer from his excellent book CLR via C# page 177: (...)When code refers to a constant symbol, compilers look up the symbol in the metadata of the assembly that defines the constant, extract the constant's value, and embed the value in the emitted IL code. (...) (...)When code refers to a constant symbol, compilers look up the symbol in the metadata of the assembly that defines the constant, extract the constant's value, and embed the value in the emitted IL code. (...) It means that by looking at the IL code, one cannot infer the dependency between a method and a constant used. Moreover, it is worth mentioning that enumeration's values are compiled as constant. Althought NDepend parses source code to get some metrics about comments for example, the bulk of information gathered from the code base comes from the IL. This behavior has many appealing advantages, the analysis is not impacted by the way code is formatted neither by special language syntax peculiarities, data obtained from source written in different languages can be compared, one can check that a tier library code base abides by dozens of quality tenets even without having access to source files, NDepend can collaborate with IL focused tools like Reflector but can still collaborates with source code focused tools like Visual Studio... Concerning this const compiler optimization, I would like to precise that it can be a source of major problems. Let's quote Jeffrey again: (...)These constraints also mean that constants don't have a good cross-assembly versioning story, so you should use them only when you know that the value of a symbol will never change(...) (...)These constraints also mean that constants don't have a good cross-assembly versioning story, so you should use them only when you know that the value of a symbol will never change(...) What Jeffrey means is that if assembly A is using some constants of assembly B, if constant changes in B and B is recompiled, if A is not recompiled it will still contains the old constant values. Finally, this const compiler optimization has been a source of complexity while developing the NDepend assembly comparison feature. Imagine that 1000 methods are using values of an enumeration (which is a real-world case). If for some reasons values of the enumeration are changing, do you want to be advised that 1000 methods have changed? Indeed, the IL code of the methods have been updated with the new values but still, the C# or VB.NET source code of these methods haven't changed. Hopefully we made our algorithms a bit smart and NDepend can detect this case and not report it to the user. [Advertisement] Pingback from 2008 October 09 - Links for today « My (almost) Daily
http://codebetter.com/blogs/patricksmacchia/archive/2008/10/08/some-unexpected-code-dependency-issues.aspx
crawl-002
refinedweb
887
58.82
Given the code fragment: public class MapPetShow { public static void main (String [ ] args) { Map<Integer, String> sourceMap = new HashMap<>( ); sourceMap.put(37, "Ms.Piggy "); sourceMap.put(8, "Gonzo "); sourceMap.put(4, "Rowlf "); sourceMap.put(12, "Fozzie "); sourceMap.put(82, "Kermit "); Map<Integer, String> finalMap = new TreeMap<Integer, String> ( new Comparator<Integer> ( ) { @Override public int compare(Integer obj1, Integer obj2) { return obj2.compareTo(obj1); }}); finalMap.putAll(sourceMap); for (Map.Entry<Integer, String> entry : finalMap.entrySet()) { System.out.print(entry.getValue()); } } } What is the result? A. A compilation error occurs B. Rowlf Gonzo Fozzie Ms.Piggy Kermit C. Kermit Ms.Piggy Fozzie Gonzo Rowlf D. Rowlf Ms.Piggy Kermit Fozzie Gonzo The correct answer is C. We ask JVM to create the finalMap object by invoking TreeMap‘s ctor with a custom-built Compator as its arg. Why would we do that? Obviously, to override the logic of the natural order of the elements that are being compared. And why should we care abour comparison in the first place? It’s because TreeMap was specifically created as a navigable implementation of the SortedMap interface in order to store its elements in a certain addressable order. So whenever you see a TreeMap on the exam, always start by checking if elements to be sorted implement Comparable (we’ve already met something similar in Problem 11 where those poor doggies didn’t know know how to sort themselves out). The javadoc for TreeMap says that “The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used”. In our case the keys are of type Integer, so it stands to reason that the custom-built Comparator is most likely intended to flip Integer‘s natural order meaning that the String values will be listed by their keys descending. The biggest key is 82, therefore Kermit the Frog gets the first place… Oh boy, I remember working hand in hand with Kermit the Protocol under CP/M, can you belive that? Alright, fast forward back to the present. After finalMap has been created, we populate it by invoking the putAll() method, which copies all entries from the source map and sorts them on the fly. Finally, the for-each construct iterates over the Set returned by entrySet() on finalMap. This Set contains key-value pairs of type Map.Entry, which is a static nested interface in Map. Values are extracted by invoking the getValue() method on each entry… This looks a bit messy so we could probably tidy it up through the wizardry of functional programming. Please try it. Now.Click here for a suggested solution. finalMap.entrySet() .stream() .map(Map.Entry::getValue) .forEach(System.out::print); // Kermit Ms.Piggy Fozzie Gonzo Rowlf
http://igor.host/index.php/2017/08/10/ocp-question-45-explanation/
CC-MAIN-2017-39
refinedweb
463
58.38
This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues. Product Version: NetBeans IDE Dev (Build 20080618061212) Java: 1.6.0_04; Java HotSpot(TM) Client VM 10.0-b19 System: Windows XP version 5.1 running on x86; Cp1252; Description: ============ just copy some form class and past (Refactor Copy) it into different package. Then previous package is included there as import but with unused status. The same happens with java class. Not possible to fix effectively without adding dependency for java.editor. NetBeans.org Migration: changing resolution from LATER to WONTFIX
https://bz.apache.org/netbeans/show_bug.cgi?id=137575
CC-MAIN-2020-29
refinedweb
109
61.43
The exception can be caught by the following command.The exception can be caught by the following command. import tkMessageBox import time time.sleep(6) Then Windbg displays:Then Windbg displays: C:\Work>"c:\Program Files\Debugging Tools for Windows (x64)\windbg.exe" -g c:\Python27\python.exe messagebox.py (c2a4.c374): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. *** ERROR: Symbol file could not be found. Defaulted to export symbols for c:\Python27\DLLs\tk85.dll - tk85!Tk_MainLoop+0x122: 00000000`102de8b2 ff9068030000 call qword ptr [rax+368h] ds:00000000`00000368=???????????????? call [rax]suggests to pay attention because the issue could be exploitable if raxis attacker controlled. We can see the crash happens when we read address near null so you may think it's a non-exploitable null pointer dereference. However to confirm this we need to investigate how raxis set to null. If null is read from previously freed memory we may have the chance to corrupt the memory in a way to inject arbitrary pointer therefore to make the issue exploitable. So the preceding instruction sets raxthat is: We can see the address is fixed, relative toWe can see the address is fixed, relative to 0:000> ub rip L1 tk85!Tk_MainLoop+0x11b: 00000000`102de8ab 488b0576bd0d00 mov rax,qword ptr [tk85!XDrawLine+0x59b98 (00000000`103ba628)] tk85, which was allocated when the module was being loaded in the virtual address space. Now we can say with more confidence the issue is a non-exploitable null pointer dereference. Python 3.x doesn't support tkMessageBoxso it's not affected.
https://reversingonwindows.blogspot.de/2014/01/triaging-crash-on-instruction-call-rax.html
CC-MAIN-2017-09
refinedweb
274
58.99
14 October 2010 11:06 [Source: ICIS news] MOSCOW (ICIS)--Russian protectionism which aims to shield domestic polymer producers is in fact damaging local polymer converters, said a senior SABIC executive on Thursday. Making imported polymer more expensive for Russian converters than for those in other Former Soviet Union (FSU) countries like the ?xml:namespace> This would eventually be to the detriment of domestic polymer producers, he said. “It can harm the one you want to protect,” he added. Drummen said he was hopeful that The question of the public image of plastics was also something which the Russian industry would need to address. It was not currently a hot topic in the region, but the public needed to be educated about plastics collection and recycling before supermarkets started to look at product substitution. “Plastics are too valuable to throw away … they should be used twice,” he said. “We should care about the image of our industry if we want to be there in 2030,”
http://www.icis.com/Articles/2010/10/14/9401290/Protectionism-damaging-to-Russian-polymer-industry.html
CC-MAIN-2015-18
refinedweb
165
50.87
This article coves multiple ways to perform a Linux system restart. We’ll be going over the steps to restart a Linux system from the terminal and also see a Python and C++ implementation of the same. The terminal is one of the most useful aspects of Linux as an operating system. From disconnecting networks to opening applications, Linux terminal does it all in a matter to seconds. In this article, we will be going through various ways to restarting the system, without the GUI. Perform a Linux System Restart from the Terminal There can be various commands that can achieve the task of restarting the system or scheduling a restart for the system. Let us take a look at some of the prominent ones. 1. Using the shutdown command As the name suggests, shutdown command can be used for managing the state of the system, that is, powering-off or rebooting. In order to reboot the system immediately, we run: sudo shutdown -r now The '-r' option specifies the command to reboot instead of powering it off. The now keyword notifies the command to proceed with the task immediately. The shutdown command can be used to schedule a restart in some minutes or hours. For instance, if we want to schedule a restart in 30 minutes, we run: sudo shutdown -r +30 The above command specifies the exact time of execution and implements as soon as the time is finished. After the command is scheduled and we do not wish to restart the system, we can overrule the command using the '-c' flag: sudo shutdown -c To learn more about the shutdown command, we can head over to the manual pages by typing man shutdown in the terminal. 2. Using the reboot command A system restart can be considered a soft reboot, where the Operating system turns off all the running and pending programs before shutting down the system. The reboot command does a clean shutdown in the essence that it is similar to a normal system restart. We need to simply run: sudo reboot The reboot command generally calls the systemctl, which is a controlling command for System Manager. There can be a forceful reboot implemented using the '-f' option sudo reboot -f The reboot command is a part of the system state management trio. The other two are halt and power-off that perform their task represented by their names. 3. Using the telinit command The Linux system has a concept of runlevels, which basically defines the list of scripts or daemons to be run for achieving a particular system state. There are a total of 7 runlevels. The runlevel 6 is reserved for a system reboot. The telinit command simply modifies the current runlevel to achieve a specific state for the system. In order to reboot the system, we run: sudo telinit 6 The above command does the job of running specific scripts and other background processes to restart our system. There are manual pages for runlevel as well as telinit to acquire more knowledge on this topic. Restarting a Linux System using Python For the purpose of restarting a Linux using a Python script, we use the famous os module. It can easily send commands to the Linux terminal that can be executed as a generally typed command. There can be few additions to the Python script for better user experience like a Main Menu and a Cancellation Question. 1. Main Menu The Python script asks for the user’s choice, whether a power-off or a restart is requested. import os # The Main menu print("\tMAIN MENU") print("Enter P for Power-off") print("Enter R for Restart") print() # User's choice choice = input("Enter your choice: ") 2. Constructing the required command After the user mentions his/her requirements, the appropriate command is to be constructed. That includes the amount of minutes the shutdown is to be scheduled. #") Firstly, the code does some sanity checks on the input and then it accepts an integer for the number of minutes the shutdown is to be delayed. After we have all the information, we use system() function for relaying the command to the terminal for proper execution. The key thing to note here is that, the Python script eventually performs the shutdown command discussed previously. 3. Cancellation Request At the end of the Python script, the user is asked whether the shutdown request is to be cancelled or not. # Cancellation Question choice = input("Do you want to cancel? (Y/N) : ") # Cancelling the shutdown if choice.upper() == 'Y': os.system("shutdown -c") If the user decides to cancel, the corresponding cancellation request is sent to be implemented. 4. Complete Python Code to Perform a Linux System Restart import os # The Main menu print("\tMAIN MENU") print("Enter P for Power-off") print("Enter R for Restart") print() # User's choice choice = input("Enter your choice: ") #") print() # Cancellation Question choice = input("Do you want to cancel? (Y/N) : ") # Cancelling the shutdown if choice.upper() == 'Y': os.system("shutdown -c") Restarting Linux using C++ The process of restarting Linux using C++ is almost similar to the above procedure. The added functions are explained below: - system() – The C/C++ function used to send commands from code to the Linux terminal. - c_str() – The function converts the string to char*, which is the required argument for system() function. Apart from the above two functions, the C++ code follows the procedure used in the Python script. Complete C++ Code #include <iostream> using namespace std; int main(){ // The Main menu cout<<"\tMAIN MENU"<<endl; cout<<"Enter P for Power-off"<<endl; cout<<"Enter R for Restart"<<endl; cout<<endl; cout<<"Enter your choice: "; // User's choice char choice; cin>>choice; // Some sanity checks if (choice == 'P' or choice == 'R'){ // Command to be used eventually string command = "shutdown"; // User input for scheduling shutdown cout<<"Enter number of minutes: "; string minutes; cin>>minutes; // Power-off command if(choice == 'P'){ command += " -P"; command += " +"; command += minutes; system(command.c_str()); } // Reboot command else if(choice == 'R'){ command += " -r"; command += " +"; command += minutes; system(command.c_str()); } else cout<<"Something went wrong"<<endl; cout<<endl; // Cancellation Question cout<<"Do you want to cancel? (Y/N) : "; cin>>choice; // Cancelling the shutdown if(choice == 'Y') system("shutdown -c"); } return 1; } Conclusion That brings us to the end of this article! Without a GUI, Linux allows us to perform a system restart in various ways that we’ve seen here today. Also, as a programmer working on Linux, you can use the ideas in the programming snippets here to implement in your code. And lastly, if you wish to know more about any of the commands mentioned about, just type in man <command name>. The man command is the perfect documentation for Linux nerds. We hope this article of restarting Linux via terminal was easy to follow. Thank you for reading.
https://www.linuxfordevices.com/tutorials/linux/linux-system-restart
CC-MAIN-2022-27
refinedweb
1,146
60.24
Talk:Felidae From RationalWiki Ahhh, Felidae. Those were the good old days. Wait, no they weren't. Ace McWickedModel 500 02:45, 11 September 2009 (UTC) - Delete? TheoryOfPractice 02:57, 11 September 2009 (UTC) - Hehehehe you know, I reckon keep it. As a testament to CUR's outrage. I just read the first talk page and its freaking hilarious. Ace McWickedModel 500 03:04, 11 September 2009 (UTC) - A good day to topple the towers. But not to delete the archives. ħuman 03:43, 11 September 2009 (UTC) - I trust you will never make another complaint about people disrespecting the dead with snark? ListenerXTalkerX 03:46, 11 September 2009 (UTC) - - DOITDOITDOITDOITDOITDOITDOITDOITDOITDOITDOITDOITDOITDOITDOITDOITDOITDOITDOITDOIT!!!!!1111111111!!!!!!! TheoryOfPractice 03:47, 11 September 2009 (UTC) - If we delete it what are we going to do with all those big cat pictures. I like it reminds me of when we had a pet twerp to play with. - π 04:05, 11 September 2009 (UTC) - DO IT YOU KNOW YOU WANT TO. If you lack the tastikles I'll do it. Keep the talk pages though, of course. ħuman 04:08, 11 September 2009 (UTC) [[Category:Talk pages for deleted articles]] - it's not deleted, really. Just neutered. ħuman 21:30, 23 February 2010 (UTC) w[edit] Is it ready to be released back into the wild yet? Wëäŝëïöïď Methinks it is a Weasel 19:47, 23 February 2010 (UTC) - I am fine with it being released. I wonder what other people think. I also wonder if CUR might suddenly appear. Hmmm, hmm, hmmmmm hmm. -- Mei (talk) 19:48, 23 February 2010 (UTC) - KILL IT WITH FIRE!!!! TheoryOfPractice (talk) 19:52, 23 February 2010 (UTC) - We can move it to main, but I should still have editorial control. I will also make a copy and hide it. -- Mei (talk) 19:53, 23 February 2010 (UTC) - We can make five or six copies & put one in each namespace to see how they develop. It will be an exercise in divergent evolution. Wëäŝëïöïď Methinks it is a Weasel 19:55, 23 February 2010 (UTC) - I am already breeding an army of lolcats. -- Mei (talk) 19:57, 23 February 2010 (UTC) - so YOUR the one responsible for Lolcats :( Pleeese stop :) Hamster (talk) 20:06, 23 February 2010 (UTC) - I suspect the current version is not the "cur-approved" copy... ħuman 21:29, 23 February 2010 (UTC) Return of the creature[edit] NOOOOOOOOOOO!!!!!!! Totnesmartin (talk) 21:36, 23 February 2010 (UTC) - The creature has not returned. Only the article he created. Wëäŝëïöïď Methinks it is a Weasel 23:00, 23 February 2010 (UTC) - I'd be highly amused if CUR popped his head back in to say hello. Acei9 23:29, 23 February 2010 (UTC) - Has anyone emailed him? ħuman 00:32, 24 February 2010 (UTC) - Do you still have an account at the Werelist? Wëäŝëïöïď Methinks it is a Weasel 18:37, 24 February 2010 (UTC) OK, i don't mean to be a dick but...[edit] ...this has gotta go. PACODOGwoof, bitches 03:29, 19 April 2011 (UTC) - Hahaha Pacodog vs. the cat boy. ТyTalk. 03:31, 19 April 2011 (UTC) - lol...c'mon cat boy make your stand. PACODOGwoof, bitches 03:37, 19 April 2011 (UTC) - Cat boy LANCB, but this was his magum opus, the article he edit warred over. Look at this page's archives. It is hilarious reading. ТyTalk. 03:38, 19 April 2011 (UTC) - Well, i'm nuking that shit. Cue HCM. Fuck cats. PACODOGwoof, bitches 03:52, 19 April 2011 (UTC) And why or how can I not dig through the deleted revisions? Did someone "accidentally" de-sop me? Or did the new version of MW forget to tell us how to do such? ħuman 05:42, 19 April 2011 (UTC)
https://rationalwiki.org/wiki/Talk:Felidae
CC-MAIN-2020-50
refinedweb
627
76.32
Recently, I was reading a book called The Cartoon Guide to Statistics, written by Larry Gonick and Woollcott Smith. This book explains the concepts in Statistics and Probability in a delightful manner using only cartoons. Out of the twelve chapters in this book, one chapter is dedicated to Probability, and it is from here that I learnt, rather understood, the basic concepts of probability. While there are several illustrative problems in this book, I would like to highlight one particularly interesting problem, which gave me an introduction to some of the basic concepts of probability. In this article, I'd like to give that problem statement, give an apparent solution (which is a wrong one), and then show the correct solution. Alongside, I will also provide a Python program which simulates this problem. It is hoped that this problem and solution will motivate you to go through this book, and encounter a different way of learning and understanding statistical concepts. In the Seventeenth Century, lived in France a person named Chevalier de Mere, who is said to be a gambler, and played a lot with dice. He had an interesting problem, which he posed to his friend, the mathematician and physicist Blaise Pascal. It is said that over the exchange of a few letters between Pascal and his fellow mathematician Pierre de Fermat, the Theory of Probability got evolved. The problem statement posed by Chevalier de Mere is as follows. Given these two random experiments, which one has a higher probability of occurrence? We next examine the apparent solution, and then the correct solution. The prima facie, or apparent solution is that both events above have the same probability. The "proof" of this is as follows: In what follows, we denote the probability of occurrence of an event as P(event). For the first problem posed above: P(one six in one roll of a dice) = 1/6 Therefore, P(one six in four rolls of a dice) = 4 * 1/6 = 2/3 For the second problem posed above: P(one double-six in one roll of a pair of dice) = 1/36 Therefore, P(one double-six in twenty four rolls of a pair of dice) = 24 * 1/36 = 2/3 This "proves" that both events have the same probability. This was the conclusion drawn by Chevalier de Mere, at that time. It is to be noted that the theory of probability was not yet developed, and he felt that his conclusion was correct. But, de Mere not only gambled a lot, but also carefully kept records of his wins and losses. His observation was that he was winning more in the first gamble. In the next section, we run these experiments using a Python program. A Python script is written to simulate both the above events mentioned in the Problem Statement. The main elements of this small program are: This function is very simple, and is shown below: # Function to roll a single dice once def roll_dice(): lo = 1 hi = 6 return random.randint(lo, hi) An important thing to note here is that the function random.randint() returns random integers from the uniform distribution for the limits specified, such that lo <= N <= hi. We should not use any weighted distribution here, for that would simulate a weighted dice instead. random.randint() lo <= N <= hi This is nothing but the above in a for loop. One important thing to note here is that as soon as a dice roll returns 6, this function returns True without doing any further rolls. for 6 True def four_rolls_single_dice(): for _ in range(4): if roll_dice() == 6: return True return False This is similar to the above function, but here, we check whether the sum of two dice rolls is 12, before returning True. 12 def twenty_four_rolls_two_dice(): for _ in range(24): if roll_dice() + roll_dice() == 12: return True return False There are two other functions named problem1() and problem2() where the calculation of probability is done by performing a large number (ten million) of experiments and recording the results. problem1() problem2() The output of the Python script is as follows: .......... Problem 1 - At least one six in four rolls of a single dice ----------------------------------------------------------- Computed Probability = 0.5178528 Actual Probability from formula = 0.5177469135802468 .......... Problem 2 - At least one double-six in twenty four rolls of a pair of dice -------------------------------------------------------------------------- Computed Probability = 0.4914268 Actual Probability from formula = 0.4914038761309034 It is seen from the output that the actual probability is also calculated in the Python script. It is also seen that the probabilities computed from the experiments and those computed from formula match closely, thus validating the program. It is to be noted that the results of each run will be slightly different as these are simulating random events. The formulas for the actual probability calculation are given in the next section. In Section 3 above, we saw the prima facie view which gave erroneous results. The key phrase in the problem statement is the phrase "at least". This phrase makes a key distinction in the probability computation. In this section, we compute the correct probabilities. We start with the probability of getting no six in a single roll of a dice. This probability is 5/6. In other words, P(no six in one roll of a dice) = 5/6 Therefore, P(no six in four rolls of a dice) = (5/6)4 = 0.4823 The above uses the Multiplication Formula, where P(E and F) = P(E) * P(F) when the events E and F are independent. In our case, one roll of a dice is independent of another roll of the same dice, they are not connected. Therefore, P(at least one six in four rolls of a dice) = 1 - P(no six in four rolls of a dice) = 1 - 0.4823 = 0.5177 As above, we examine the probability of getting no double-six in a roll of a pair of dice. This probability is 35/36. In other words: P(no double-six in one roll of a pair of dice) = 35/36 Therefore, P(no double-six in twenty four rolls of a dice) = (35/36)24 = 0.5086 Therefore, P(at least one double-six in twenty four rolls of a pair of dice) = 1 - P(no double-six in twenty four rolls of a pair of dice) = 1 - 0.5086 = 0.4914 From the above, it is seen that these probabilities are indeed different, and the second probability is smaller. This is the reason why Chevalier de Mere was losing more on the second gamble than on the first one. As noted above, this shows that he not only gambled a lot, but also kept meticulous records of his wins and losses. As briefly mentioned above, the key phrase is at least. For the first event, this means that the following probabilities have to be added to arrive at the correct probability - that of getting one six in four rolls, that of getting two sixes in four rolls, that of getting three sixes in four rolls and that of getting all four sixes in the four rolls. This will be 1 - the probability of getting no six at all. And for the second event, equivalent probabilities have to be added to arrive at the correct probability. Thanks for CodeProject member Marius Bancila for suggesting to incorporating this in the article. In this article, an interesting problem in probability was considered - a problem involving the rolling of dice. Two events were considered and their probabilities were compared. In the first event, a single dice was rolled four times, and probability of getting at least one six was examined. In the second event, a pair of dice were rolled twenty four times, and the probability of getting at least one double-six was examined. We first saw a prima facie solution which suggested that both these probabilities were the same. We then went through a Python program which simulates both these events, and found that the first event had a higher probability. We then showed how these two probabilities are different, and justified the conclusion drawn by the Python program. It is sincerely hoped that you found this problem interesting to analyze. It is also hoped that this article interests you to read The Cartoon Guide to Statistics mentioned at the top of this article, and discover statistics in an entirely new manner. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) I am not familiar with Python but, it seems to me, that your method for rolling two dice may be more efficient if you exited the loop if the first dice was not a 6 as, in that case, there is no need to roll the second dice and there is no need to sum the results. Even a small time saving may be worthwhile as the method is called many millions of times. Also, it's generally a good idea to keep the number of calls to the random number generator to a minimum as, eventually, the number sequence it generates will be repeated and you will simply be repeating trials that you have already done. You are probably right in this case but the piece here maintains that C# Random will repeat the sequence of numbers it generates eventually. C# Random Thanks for the interesting piece. It would be nice to explain why the prima facie case is wrong. The reason it's wrong is that it assumes that the probability of throwing the first 6 is the same for each throw. This is clearly incorrect. The probability of throwing the first 6 on throw 1 is 1/6 but to throw the first 6 on throw 2 you must not have thrown a 6 on throw 1 so that's 5/6*1/6 and so on giving the probabilities as Throw 1=1/6 Throw 2=5/6*1/6 Throw 3=5/6*5/6*1/6 Throw 4=5/6*5/6*5/6*1/6 Sum them and you arrive at the correct solution General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/1247382/An-Interesting-Problem-in-the-History-of-Probabili
CC-MAIN-2022-40
refinedweb
1,727
58.92
A group blog from members of the VB team PingBack from Is the same for C# or is exclusive for VB? Hi, Bill: Here are some LINQ-to-SQL (L2S) scenarios I'd like to see covered in more detail: - Using L2S with stored procedures when the developer or users do not have table-level access; - Refreshing the L2S Designer when the database schema changes; - Canceling pending changes to a record; - Using L2S in a disconnected scenario, such as a Web application where it is not practical to maintain state in a long-lived DataContext. Appreciate any insight you care to provide, thanks! The SQL example above is incorrect. The proper syntax for a wildcard is a percent sign, not an asterisk. Your example should be: SELECT * FROM CustomerTable WHERE Phone LIKE '206%' However, the VB example is correct, because in VB the LIKE operator does use an asterisk for the wildcard. @Speednet: I think you'll find it's all a matter of context. If your database backend is Access, the example *is* correct. If you're using something like SQL Server, Oracle, or MySQL; your syntax is correct. Given how many VB examples are shown as written against an Access databse, I assumed this was the case for the original example. However, considering how prevalent other database systems are, a comment refering to the normal syntax for those systems would have been recommended. @Anthony: The Like operator is exclusive to VB. The specific example using LINQ in C# would be written something akin to*: from Contact in CustomerTable where Contact.Phone.StartsWith("206") select Contact; If you wanted to perform more complex pattern matching, I'd recommend using the RegEx capabilities. * I've never actually written LINQ in C#, but the examples in the VS2008 documentation indicate this syntax is appropriate. Testing this out on your own system is highly recommended. YMMV. Void where prohibited. @RonO: I guess I assumed that it was SQL server when you said "In SQL, the LIKE keyword..." Do people still do Access development, now that SQL Express is so prevalent? (And free.) On the subject of the Visual Basic LIKE operator, it is indeed unique to VB (not available in C#). LIKE is many times quicker than using a regular expression, so if you have a comparison that is going to be done multiple times, or if speed is urgent, then LIKE should be used instead of RegExp. Here is an MSDN article comparing RegExp and LIKE: And here is a terrific little blog entry that deals with LIKE vs. RegExp vs. Char(): @Anthony, RonO: It actually is possible to use the LikeOperator in C# if you're determined to do so. Lutz Roeder's .NET Reflector tool is very handy for figuring out how to move between VB and C# code. It can be downloaded for free at The C# equivalent for the code above is: where LikeOperator.LikeString(Contact.Phone, "206*", CompareMethod.Binary) This would also require adding a reference to Microsoft.VisualBasic (which I realize is a weird thing to do in C#) and adding the following two using statements: using Microsoft.VisualBasic; using Microsoft.VisualBasic.CompilerServices; Of course, why do all this when you can just use VB? ;) ここでは、これまでの「 SQL から LINQ への変換 」の投稿をお読みになっていることを前提としています。 LINQ の投稿に関する具体的なリクエストはそれほど多くなかったので、ご提案があれば今からでも歓迎します。 Hi Bill, a great series post. I have a question in SQL i can UPDATE a table with a SELECT from other table(s) i.e. UPDATE tableA SET field1 = tableC.fieldX FROM tableA, tableC WHERE tableA.fieldcommon = tableC.fieldcommon Its possible en LINQ? Greetings, Wilmer Gracias Brother me sirvio¡¡ :)
http://blogs.msdn.com/b/vbteam/archive/2008/05/07/converting-sql-to-linq-part-10-like-bill-horst.aspx
CC-MAIN-2015-48
refinedweb
600
63.9
#include "univ.i" #include "data0data.h" #include "que0types.h" #include "dict0types.h" #include "trx0types.h" #include "row0types.h" #include "pars0sym.h" #include "btr0pcur.h" #include "read0read.h" #include "row0mysql.h" #include "row0sel.ic" Go to the source code of this file. Created 12/19/1997 Heikki Tuuri Open or close cursor operation type Search direction for the MySQL interface Match mode for the MySQL interface Select node states Performs a fetch for a cursor. Performs an execution step of an open or close cursor statement node. Performs an execution step of an open or close cursor statement node. Sample callback function for fetch that prints each row. Prints a row in a select result. Checks if MySQL at the moment is allowed for this table to retrieve a consistent read result, or store it to the query cache.! Read the max AUTOINC value from an index. Performs a select step. This is a high-level function used in SQL execution graphs.. Gets the plan node for the nth table in a join.
http://mingxinglai.com/innodb-annotation/row0sel_8h.html
CC-MAIN-2018-51
refinedweb
172
72.32
- What's New - About - Screenshots - Download - Documentation - Support - Forum - Developers ROOT6 and Backward Compatibility Hi everyone, dear Matt! Matt Walker has posted an extensive review of ROOT and what he would hope the future of ROOT to be. Because I think many of his comments are good ones, and because I have heard some of them from several people in the past, I decided to give the answer to an audience that, how to activate c++1y in Re: how to activate c++1y Hi xhark, Thanks for your question! Simply configure with --enable-cxx14. Cheers, Axel Templates are not the solution Of course one shouldnt use bad... Give us some credit I find your dismissal of templates and a logical class hierarchy a bit concerning: in both cases you make reference to the needs of "novice" physicists, implying that a well designed framework isn't a concern for "real physicists". I can't overemphasize how out-of-touch this claim is with the daily toil of every particle physicist I know (at least those who write code). This isn't a question of "beauty of coding" or writing "nice" code, a badly designed framework means lost productivity. As you say, many physicists prefer to focus more on physics and less on class hierarchy, but that misses the point: C++ was designed to solve problems, not to be beautiful. Templates and inheritance exist because they make coding easier and less buggy. They are absolutely central to C++, and the language continues to evolve under the assumption that they will be used. Leaving such central pieces out of C++ is like leaving verbs out of the English language: it might be fun for a while, and it may even make the language easier for novices, but in the end it's just frustrating and impractical. We shouldn't be blundering around with a hobbled data analysis framework just because software isn't our end-goal, and while it's frustrating to hear physicists dismiss good coding practices, it's bewildering and slightly disheartening to hear that dismissal from a ROOT developer. Furthermore, while I understand that incorporating contemporary software design into the framework is difficult, I think you give physicists far too little credit: we are a clever bunch, if an average software engineer can figure out how to use a template I think we can too. Give us a framework that has templates and a sensible class hierarchy and we'll use it. Unfortunately, you've given physicists a language that can't handle templates, and the framework overall looks like a straw-man argument against object oriented code*. It's no surprise that your average novice physicist doesn't care about these things, the examples they've seen are a disaster. That being said, I'm quite intrigued by the last point: it sounds like a replacement for ROOT is on the horizon. Given that the younger generation is increasingly turning to non-ROOT libraries and languages (from simple things like stl, boost, and python, to new data formats like HDF5 and Protocol Buffers, to frameworks like scipy and matplotlib), I'd tend to agree. But if this is the case, shouldn't the ROOT team be focusing on breaking up the ROOT framework so that the parts can be salvaged? An enormous amount of work went into ROOT, and it doesn't seem logical to flush the entire framework just because it's grown too big to maintain.*This is to say nothing of PyROOT, which gives an unfair introduction to Python. I'm constantly amazed by how many physicists assume that ROOT bugs and segfaults are a shortcoming of the python language. Outside PyROOT, a python module that causes a segmentation fault is universally considered defective: you can't make normal python segfault, and yet somehow PyROOT has managed to make infinite loops of segfaults common. Re: PyROOT Hi Code Monkey, "you can't make normal python segfault" ... well, not to put too fine a point on it, but yes you can. You just have to push it around a little bit rougher than most users do. For example: import sys sys.setrecursionlimit(1 << 30) f = lambda f:f(f) if __name__ == '__main__': f(f) And a problem like this exists by design: it will never be fixed and crashes python3 just a simply. Of course, most users will not run with a recursion limit that high, but it does prove that if you want to make the python interpreter crash, you can. :) On a more serious note, PyROOT is not an extension module, it is a language binding: it exposes C++ and sometimes automatic choices are no choices and bring C++ features with all crashable details into Python. The proper comparison is with ctypes, which makes it even easier (much easier) to create crashes, and that module is in the standard library! If you want users to start out with a cleaner, more pythonistic, Python introduction to ROOT, point them to rootpy.org, as the goal of that project is precisely that. Cheers, Wim Re: Credit Hi Monkey Thanks for your comment! I am not against templates (in general), I am not against a well designed class hierarchy. Both would not make any sense. I am against code breaking changes for the sake of beauty. I have one issue with template: it's currently (i.e. without concepts) impossible to document what types are appropriate as template parameters. In ROOT 5, templates actually mean an inflation of object code: all functions must be instantiated because they might be called through the interpreter. On the other hand a wider use of templates Did that convince you that I like templates? :-) But we have existing code, written by people who want to get physics results. What should we do? We always have to hide the templates if we use them in existing interfaces - which is not a satisfying solution for anyone. Regarding the serialization libraries you mention: none of them can compare with ROOT's. (And most of them are all but new!) They offer a reduced feature set at higher throughput, but you can usually do the same with ROOT (turning off features and gaining speed). I/O is a really tricky business, it's easy to make bold statements and amazingly difficult to get it right, all the way, over decades, on petabyte levels. Once ROOT 6 is out and we can use modern C++, and once the experiments have moved to C++11, I expect that we will polish ROOT's interfaces to bring them into current C++(14?) shape. But we will have to do that in a way that is backward compatible, at least to a large extend. And that's the tricky part. Cheers, Axel Template Concepts +1 Hi Axel, I totally agree with you and I think the roadmap the ROOT-Team is on is sensible. I use ROOT not in the context of physics but in the context of complex systems simulation, e.g., neural networks. Before I found ROOT I tested numerous frameworks for data storage and analysis - and quite honestly when it comes to handling gigabytes of simulation data everything else besides ROOT was simply unusable. The API of ROOT may seem a bit awkward to software engineering "purists", but from a practical point of view I find it extremly efficient to use. Keep on the good work, Jochen Re: +1 Hi Jochen, Thank you for your nice comments! In principle we know that, but it really feels good to have that confirmed from time to time :-) Feel yourself signed up for the next ROOT workshop, by the way (Winter of 2015 in Saas Fee if all goes according to plan). I am really curious to hear more about your use of ROOT! Cheers, Axel.
https://root.cern.ch/drupal/content/root6-and-backward-compatibility
CC-MAIN-2015-18
refinedweb
1,307
66.78
User account creation filtered due to spam. Created attachment 29219 [details] the bug code system version :64-bit Ubuntu 12.04 lts gcc version: 4.6.3 options:gcc source.c -o source When compile and run a program with an error which will cause buffer overflow,the compiler can not dectect it. The program below can compile and run correctly in my system(also works on my friend system which is no ubuntu but with gcc version 4.5.0) #include <stdio.h> #include <string.h> main() { const char *a="123456789abcdef"; char b[10]; int i=0; while((b[i]=a[i])!='\0') ++i; printf("%s,%d\n",b,strlen(b)); return 0; } I attempt to copy a string whose length is 15 to a array whose length is 10,and it compiles and run correctly.The output is "123456789abcdef,15". p.s. This will not succeed in 32-bit linux system. My freinds tests my code in CentOS! The stack protection code cannot guarantee to detect every error.
https://gcc.gnu.org/bugzilla/show_bug.cgi?format=multiple&id=56046
CC-MAIN-2017-39
refinedweb
170
78.35
Provided by: liblcgdm-dev_1.8.10-1build3_amd64 NAME Clogit - log server messages in local log or in system logger SYNOPSIS #include "Clog.h" int Cinitlog (char *cmd, char *logfile) int Clogit (int level, char *func, char *msg, ...) int Cvlogit (int level, char *func, char *msg, va_list ap) DESCRIPTION Cinitlog initializes the server logging routines. Clogit logs server messages in a local log or in the system logger. Cvlogit is the same as Clogit but takes a va_list instead of a variable number of arguments. cmd specifies a string to be prepended to the syslog messages. logfile specifies a path for the server log file. If set to syslog, the system logger will be used. By default, only messages with level at least as important as LOG_INFO will be logged. The level threshold can be changed with the environment variable LOG_PRIORITY. level is associated with the message. The possible values, in order of decreasing importance are: LOG_EMERG LOG_ALERT LOG_CRIT LOG_ERR LOG_WARNING LOG_NOTICE LOG_INFO LOG_DEBUG func is the name of the function that issues the message. msg is a format as in printf. RETURN VALUE These routines return 0 if the operation was successful or -1 if the operation failed. In the latter case, serrno is set appropriately. ERRORS EFAULT logfile is a NULL pointer. ENAMETOOLONG The length of logfile exceeds CA_MAXPATHLEN.
http://manpages.ubuntu.com/manpages/xenial/man3/Clogit.3.html
CC-MAIN-2019-47
refinedweb
220
59.4
A few months ago, we announced our data partner program. Since then, we’ve doubled the number of data sets available to you through this program. Today we have 42 data sets, include many free data sets through Quandl. At launch, these partner data sets were only usable in Quantopian Research. Today, that changes! You can now start using Accern and Quandl data sets in your algorithms: backtesting, paper trading and the Quantopian Open contest. For each of these vendors, the data can be accessed as part of the pipeline API. To use this data, go to quantopian.com/data, click into the page for the data set that interests you and hit the appropriate "Get" button to ensure access to the set. Once you've done this, you're ready to use that partner data set in your algo. Just as you might import pricing data for use in pipeline like: from quantopian.pipeline.data.builtin import USEquityPricing you can access the Accern data for use in pipeline with a similar import: from quantopian.pipeline.data.accern import alphaone_free as alphaone With a premium data set like Accern, the code snippet above is making use of the free sample of the data set. If you want to use the most recent 2 years of data from Accern, you can do so by purchasing a monthly subscription. Including a data set for Quandl is similar. If, for example, you want to use daily VIX prices, you could import as follows: from quantopian.pipeline.data.quandl import yahoo_index_vix In the case of Quandl, there are 17 data sets, all completely free. If you’d like to import a different data set from the VIX, you can pick another from the listing at Quantopian Data, like unemployment data: from quantopian.pipeline.data.quandl import fred_ccsa Note, these initial data sets from Quandl are typically macroeconomic (interest rates, unemployment rates, etc). They are not cross-sectional, i.e. there is not a separate measurement for each individual stock. Therefore, with non-cross-sectional data, the data in a pipeline is associated to every security. With those raw data sets loaded, you can use the data with a built-in pipeline factor. Or use it to create your own custom factor. Attached is a template example using Accern. Be on the lookout for some new shared algorithms using this data - James Christopher has already posted one using VIX data. I look forward to seeing how the community uses this data in algorithms, in the forums and in the contest! Happy coding, Josh
https://www.quantopian.com/posts/using-accern-and-quandl-data-in-your-algorithms
CC-MAIN-2018-05
refinedweb
428
65.22
Difference between revisions of "Python" Revision as of 21:14, 26 October 2018 Python is a general purpose, high level programming language that very commonly is embedded in large applications to automate some tasks by creating scripts or macros. In FreeCAD, Python code can be used to create various elements programmatically, without needing the graphical user interface. Additionally, many tools and workbenches of FreeCAD are programmed in Python. guidelines stress readability of code; in particular, parentheses should immediately follow the function name, and a space should follow a comma. import FreeCAD, Draft p1 = FreeCAD.Vector(0, 0, 0) p2 = FreeCAD.Vector(1, 1, 0) p3 = FreeCAD.Vector(2, 0, 0) Wire = Draft a value or object that can be used as the input of another drawing tool. import Draft, Arch Wire = Draft.makeWire(pointslist, closed=True, face=True) Window = Arch.makeWindow(Wire, name="Big window")
https://wiki.freecadweb.org/index.php?title=Python&diff=339682&oldid=339668
CC-MAIN-2020-50
refinedweb
145
50.94
– 1 Read all the choices carefully, as there may be more than one correct answer, choose all the correct answers for each question. - ________________________ makes Java platform-independent. a) JVM b) Java syntax c) Java API d) bytecodes e) none - Java's keywords includes Null. a) True b) False - Which occupies more number of bits of memory. a) double b) long c) both 4 b) occupies same d) depends on the value assigned e) none - The object is created with new keyword a) at compile-time b) at run-time c) depends on the code d ) none - int x = 0, y = 0 , z = 0 ; x = (+ + x + y – – ) * z + + ; What is the value of " x " after execution ? a) – 2 b) – 1 c) 0 d ) 1 e) 2 - int 4thhouse = 1234 ; System.out.println( 4thhouse ) ; a) 1234 b) displays error as the value assigned is more than the range of integer c) displays error as coding is not as per Java rules e) none - int ++a = 100 ; System.out.println( ++a ) ; What is the output of the above fraction of code ? a) 100 b) displays error as + + a is not enclosed in double quotes in println statement c) compiler displays error as + + a is not a valid identifier d) none - Integer.parseInt( ) method is used to convert an integer value to its string form. a) True b) False - Java supports unsigned data types. a) True b) False - One way of implementing data protection is declaring instance variables as private and methods as public. a) True b) False - How many primitive data types Java defines ? a) 6 b) 8 c) 10 d) more than 10 e) none - The variables declared in a class for the use of all methods of the class are called a) reference variables b) objects c) instance variables d ) none - double STATIC = 2.5 ; System.out.println( STATIC ) ; The above fraction of code a) prints 2.5 b) raises an error as STATIC is used as a variable which is a keyword c) raises an exception e) none - What is the range of data type int ? a) – ( 2^16 ) to ( 2^16 ) –1 b) – ( 2^15 ) to ( 2^15 ) –1 c) – ( 2^31 ) to ( 2^31 ) –1 d) – ( 2^32 ) to ( 2^32 ) –1 e) depends on the operating system on which Java is working f) none - int Integer = 34 ; char String = 'S' ; System.out.println( Integer ) ; System.out.println( String ) ; What would be the output of the above fraction of code ? a) does not compile as Integer and String are API class names b) throws exception c) 34 d) S e) c and d f) none - char k = 'A' ; System.out.println( k * k ) ; The above program raises a compilation error as arithmetic operations are not possible on characters. a) true b) false - System.out.println( Math.floor( Math.random( ) ) ) ; The above statement always prints 0.0 a) True b) False - int x = 99 , y = 100 ; System.out.println( x / y ) ; What is the output of the above fraction of code ? a) does not compile b) 0 c) 0.99 d) none - boolean b = true ; int i = ( int ) b ; System.out.println( i ) ; What is the output of the above fraction of code ? a) 116 ( the ASCII value of character t ) b) 98 ( the ASCII value of character b ) c) does not compile d) throws exception e) none - System.out.println(25/4.0); System.out.println(25.0/4.0); System.out.println(25.0/4); The output of the above three println statements is same result. a) True b) False - Java What is the output of the above program ? a) 0 b) 2.0 c) error as mismatch between constructors d) none - byte b = 50 ; b = b * 2 ; System.out.println( “ b = “ + b ) ; The above fraction of code prints b = 100. a) True b) False - Java a) 20 b) 21 c) 22 d) does not compile e) throws exception e) none ( ) - Constructors can be declared final, if needed perhaps which you must have not tried. a) True b) False - final methods cannot be overridden but overloaded ? a) True b) False 21. 23. ANSWERS Pass your comments and suggestions for the improvement of this "Test Your Java 1". Sir,Is Java pure object oriented as my sir told us,n if not then please give me reason for it.or any blog you have related to it. It is pure object-oriented. sir , i have some knowledgr about java Lopgger. Could you please tell me what is log4j. It is logging mechanism in a Java application supported many tools IDE like Eclipse, NetBeans etc. hello sir… will u explain test 1 question 21…? Subclass overload (or default also) constructor will super class default constructor. But super class does not have default constructor and is also not created by the system as the constructor is overloaded. See this link: que 23. how 22..? it should be 21..!! It is static variable. All object will share the same location. Read this link: sir why does System.out.println(99/100); prints 0? any specific reason? actually it is .99 It is integer division. Make it either 99.0/100 or 99/100.0, you get yours. 20 is bad, too. Output includes strings and they ARE different. 9 is bad, because char IS unsigned. 9) Java does not support “unsigned” datatypes for that matter “unsigned” is not keyword of Java. “char” is implicitly unsigned. Java does support pointers and it does not mean the designers should not use pointers in developing the language. 20) The output is same answer of division quotient. Remember these questions for very beginners. If you think of Strings and String pools, I will modify the question for you and pass your comments. Than for 9 the question should sound as “Is a word ‘unsigned’ used for some Java types definition?” As for 20, you haven’t understood. The outputs include strings “20/4.0” and “20.0/4” And they ARE different. But, as I see, you have changed them already. Thank you. 12 is wrong. Instance variables are not seen from all methods of the class. For example they are not seen from static methods. public class Demo { int x = 10; public static void main( String args[]) { Demo d1 = new Demo(); System.out.println(d1.x); } } main() method is static and still you are able to access x. The difference is you require an object to call from static methods. Because you have created the instance just here. You can’t see it from another static method. The variables seen from anywhere in the class are called Static Class attributes or Static Class Fields or Static Class Variables. Respected sir Recently i attended a interview where people asked some questions like Difference between ByteArrayOutStream and BufferedOutputStream and where do we use them ? so please post a detailed explanation to this to give us some in depth idea on this Thanks in advance Thanks and Regards Venkata Naveen See this may help you. 1. 2. Performance aspect of BufferedInputStream over FileInputStream: 1. 2. 21 answer is b not a In Test Your Java – 1, the answer for 21 question is C. interface Side { String getSide(); } class Head implements Side { public String getSide() { return “Head “; } } class Tail implements Side { public String getSide() { return “Tail “; } } class Coin { public static void overload(Head side) { System.out.print(side.getSide()); } public static void overload(Tail side) { System.out.print(side.getSide()); } public static void overload(Side side) { System.out.print(“Side “); } public static void overload(Object side) { System.out.print(“Object “); } public static void main(String []args) { Side firstAttempt = new Head(); Tail secondAttempt = new Tail(); overload(firstAttempt); overload((Object)firstAttempt); overload(secondAttempt); overload((Side)secondAttempt); } } sir can u please explain this program .In main,how can overload(firstAttempt); wil call overload(Side side) method?and even explain when we call overload((Object)firstAttempt); overload(secondAttempt); overload((Side)secondAttempt);,which methods are called? class Base { public static void foo(Base bObj) { System.out.println(“In Base.foo()”); bObj.bar(); } public void bar() { System.out.println(“In Base.bar()”); } } class Derived extends Base { public static void foo(Base bObj) { System.out.println(“In Derived.foo()”); bObj.bar(); } public void bar() { System.out.println(“In Derived.bar()”); } } class OverrideTest { public static void main(String []args) { Base bObj = new Derived(); bObj.foo(bObj); } } sir! can you please explain how can bobj object call foo() method of base class?as sub class obj is assigned to super class object,the object should call sub class overidden method as per rules! int ++a = 100 ; System.out.println( ++a ) ; What is the output of the above fraction of code ? a) 100 b) displays error as + + a is not enclosed in double quotes in println statement c) compiler displays error as + + a is not a valid identifier d) none Answer: C Sir Explain this. In int ++a = 100, ++a is an identifier. An identifier should not have any special characters, by rule. byte b = 50 ; b = b * 2 ; System.out.println( “ b = “ + b ) ; The above fraction of code prints b = 100. a) True b) False Answer is false. Sir, Please explain this. In runtime, b*2 when multiplied results in int value. An int value cannot be assigned to byte value implicitly. For this reason, it is compilation error. To work correctly, do as following explicit conversion: b = (byte)(b*2); char k = ‘A’ ; System.out.println( k * k ) ; The above program raises a compilation error as arithmetic operations are not possible on characters. a) true b) false answer 9409,,, there is no compilation error ,it prints some integer value tel me sir why it is taken integer value? At runtime, A value is converted into its ASCII integer of 65 and then multiplied. int Integer = 34 ; char String = ‘S’ ; System.out.println( Integer ) ; System.out.println( String ) ; What would be the output of the above fraction of code ? a) does not compile as Integer and String are API class names b) throws exception c) 34 d) S e) c and d f) none how it is working String is a Data Type how ur using Integer and String as a Variable i cant believe , please explain me in detail sir Java API classes are not keywords. So, by principle, it is possible. Java API classes (predefined class of Java) are not keywords. They can be used as variables; but we do not due to confusion and later problems arise. in ques 21 there is error that it is not finding the constructor if you perform this in notepad in wind 7 becouse parametrized constructor are never inherited but if you perform that in eclipse it is showing the result ….. sir plz tell me why its happening output for 21 is B and not C as mentioned in the answer Just write a program and compile, you will come to know. To have smooth compilation, add the constructor Num(){ } this in Num class. as the super class is having parameterised constructor and object of Num class is created using parameterised constructor what is the need of default constructor? JVM requires else it is compilation error. Question num 23 answer is c not b.. when we declare static variables only one copy per class is available. It is given as c only and not b in Test Your Java – I please explain 16,22,24. What is 16, 22 and 24 ? Are they from Test series? Then give the series no.
http://way2java.com/java-questions/test-your-java-1/
CC-MAIN-2017-13
refinedweb
1,914
66.74
Home Page › Forums › Network Management › ZeroShell › Zeroshell Auto Login – Script This topic contains 16 replies, has 0 voices, and was last updated by fnmunhoz 9 years, 6 months ago. - AuthorPosts Hello, I’m write a very basic script in Python ( test version) to automate the web login process of Captive Portal. I publish the version at google code So I’m looking for sugestions to improve the code and to point security and performance issues about the script. Thanks imported_fulvioParticipant Thanks for the wonderful idea. I will put a link in the documentation page. Would it be possible to translate it in Java code. I think a Java authentication program is more portable on communication devices such as a cellular phone or a PDA. Regards Fulvio Hi Fulvio, you are right a Java implementation is more portable than Python, but my clients are 99% windows users (Windows 98 included) and I have to do it more simple as possible to install for them. With Python I use py2exe [1] to package python interpreter, python library and my script, and all files places in one folder. After that I use InnoSetup [2] to make a Installer. Moreover, I think python is a little bit lighter for this task, but it is debatable. I do not know if I would have these advantages with java. But if someone is interested, I think a great idea two implementations. Thanks for the link in the documentation page, and congratulations for the excellent work. [1] [2]. Hello J2fet, thanks for the modification. I’m sure that will help. I Will upload your version in google code, ok?. fnmunhoz, great! Thanks for the upload. Changes: 1. Now the config file stores the password in encoded format and the main script can decode the encoded password directly. No more clear-text password. 2. I’ve created config maker for the non savvy. Future Changes?: 1. CN based redirection support, so that user don’t have to put server= values. 2. User keyboard interrupt to disconnect/renew login. 3. GUI based, maybe bundled together modified OpenVPN GUI for windows package and called it Zeroshell client utility or something… What says you Fulvio? Anyways, the links: Hi!! Please, somebody can explain how can i do to auto login the clients? I read your coments but I dont know how to built the .exe installer or where take the .DLL library… could you explain step by step? Thanks a lot for youre job!!! Explanation on using Zeroshell Captive Portal Auto Login Step 0.1 – Get Step 0.2 – Get Condition 1 – Windows/Linux was installed with python 1.Execute $python ~/zscp-autologin-makecfg.py to create the configuration file needed. Answer the questions. Run once or whenever you feel like doing so. 2.Execute $python ~/zscp-autologin.py 3.Put the line below in user’s ~/.bashrc file for automatic login in Linux (sort of…) python ~/zscp-autologin.py Condition 2 – Windows was not installed with python and you want to have the exe version 1.Download and install python from 2.Download and install py2exe for the same python version that you’ve installed in Step 1 from 3.You need to create a file called setup.py which looks like: from distutils.core import setup import py2exe setup (version = “0.1”, name = “zscp-autologin”, console = [“zscp-autologin.py”]) Then run: C:zscp>python setup.py py2exe The executable and DLLs will appear in the “dist” subdirectory of your current working directory. Proceed with Condition 3, Step 2-4. Condition 3 – You trust ZSCP-autologin so much, that you don’t mind getting the binaries from a stranger I’ve compiled using py2exe-2.6 and put it up at 1.Extract the zip file and cd to it 2.Execute C:zscp>zscp-autologin-makecfg.exe to create the configuration file needed. Answer the questions. Run once or whenever you feel like doing so. 3.Execute C:zscp>zscp-autologin.exe 4.Put the symlink in Windows Startup for automatic login (sort of…) Good luck for any of the 3 conditions… You may check the updates in as well. Thanks a lot!!! Thanks for a very good job!!! this link don´t let me download because i don´t have permiss jejeje I can Believe… I´m clumsy! When I´m compiling, the compiler doesn´t let me put “zscp>” only “python setup.exe py2exe” When it finish compiling i can get only one “.exe”. When I do double click it only appear a black screen from the MSDOS during 1 second and closed. please, what I did wrong? I install the python 2.6 and the py2exe is for the same version. When I´ll go to compile i put the files on “Python 26” folder Don’t double click on the exe instead, use Start->Run->cmd cd C:zscp zscp-autologin.exe That should do… Provided you put the binaries in folder C:zscp. Regarding the compilation, you have to create one exe by one exe, the final zipped binaries were combined to make things easy. I kind of skip the steps in my earlier explanations. Regarding the folder name, you have to change it accordingly. I think thak something I´m doing wrong. Thanks a lot for your patient. Please, Coul you repair the acces to the file named “zscp-autologin.zip” in this link? because I dimiss to try to compilate again the autologin for ZeroShell some errors apear when i ejecute. Thanks a lot!! Please, somebody can compile for me both programs “zscp-autologin-makecfg.py” and “zscp-autologin.py” in exe because i think that i have some trouble o i´m a inept to do it. Thank a lot!!! Permission for the binaries were fixed. Script was modified. Redownload for both scripts please. Good luck. - AuthorPosts You must be logged in to reply to this topic.
https://zeroshell.org/forums/topic/zeroshell-auto-login-script/
CC-MAIN-2019-18
refinedweb
986
69.07
CodePlexProject Hosting for Open Source Software Hi, I want find template where is insert this "Content_ControlWrapper" Below code is inside public class Shapes if ( !displaying.ShapeMetadata.DisplayType.Contains("Admin") ) displaying.ShapeMetadata.Wrappers.Add("Content_ControlWrapper"); "Content_ControlWrapper" this is cshtml file . using above code its display content intemplate. I want find that root template. How can i find it krds lakmal Content.ControlWrapper.cshtml. It's in core/views Shape tracing would have told you that by the way. I want find which template use to display "Content.ControlWrapper.cshtml" When i trace shape. something like this <li> code of Content.ControlWrapper.cshtml </li> I want apply custom css class to this <li> tag Which control or wich template belong to this <li> tag Use the dev tools of your browser to determine a good CSS selector to target that element, then add a rule to your theme's stylesheet. If you can't do that, override the template in your theme and modify the markup by adding a class to the tag. It will then be very easy to add a rule in your css. I have solve issue. I have edit this src\Orchard.Web\Core\Contents\Views @using Orchard.ContentManagement; @using Orchard.Core.Contents; @if (AuthorizedFor(Permissions.EditContent) && Model.ContentItem.Id > 0) { @Html.ItemEditLinkWithReturnUrl(T("Edit").Text, (ContentItem)Model.ContentItem) @Display(Model.Child) } else { @Display(Model.Child) } You shouldn't edit anything in core. You should be able to override that view in your theme instead. bertrandleroy wrote: You shouldn't edit anything in core. You should be able to override that view in your theme instead. And Shape Tracing can do that for you too. Look under alternates, you'll find create buttons. VERY HANDY How can i override from theme. Is there any sample code available? reverand just told you: use shape tracing. what it does is just copy the file into your theme. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://orchard.codeplex.com/discussions/277564
CC-MAIN-2018-05
refinedweb
353
70.39
852/use-of-continue-in-python The break statement is used to "break" the loop. It means that when a break statement is executed, it skips the execution below it in the loop and exits the loop. Ex: for letter in 'HelloWorld': if letter == 'W': break print 'Character is:', letter Output for this code will be: Character is: H Character is: e Character is: l Character is: o The continue statement is used to continue the loop by skipping the execution of code below it i.e., when a continue statement is executed, it skips executing the code below it but goes back to the loop and continues with the next cycle. Ex: for letter in 'Hello': if letter == 'H': continue print 'Character is: ', letter Here's an simple example : for letter in 'Django': if letter == 'D': continue print 'Current Letter:', letter Output will be : Current Letter: j Current Letter: a Current Letter: n Current Letter: g Current Letter: o import random for i in range(20): x = random.randint(-5,5) if x == 0: continue print 1/x continue is an extremely important control statement. The above code indicates a typical application, where the result of a division by zero can be avoided. I use it often when I need to store the output from programs, but dont want to store the output if the program has crashed. Note, to test the above example, replace the last statement with print 1/float(x), or you'll get zeros whenever there's a fraction, since randint returns an integer. I omitted it for clarity. Suppose you need a check before the main code: if precondition_fails(message): continue ''' main code here ''' Note you can do this after the main code was written without changing that code in anyway. If you diff the code, only the added line with "continue" will be highlighted since there are no spacing changes to the main code. def filter_out_colors(elements): colors = ['red', 'green'] result = [] for element in elements: if element in colors: continue # skip the element # You can do whatever here result.append(element) return result >>> filter_out_colors(['lemon', 'orange', 'red', 'pear']) ['lemon', 'orange', 'pear'] Polymorphism is the ability to present the ...READ MORE raw_input fuction is no longer available in ...READ MORE The %s specifier converts the object using ...READ MORE You can try the following in a ...READ MORE You can simply the built-in function in ...READ MORE suppose you have a string with a ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE You can get the changing time from ...READ MORE Try below code string = input("Enter a string: ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/852/use-of-continue-in-python
CC-MAIN-2021-43
refinedweb
476
61.67
Hi, I'm trying to create a stand alone script to automate watershed analysis. So far it works up until it gets to the spatial analyst tools but then fails. Can anyone show me where I have gone wrong please? Here is my code: Thanks very much I'm trying to create a stand alone script to automate watershed analysis. So far it works up until it gets to the spatial analyst tools but then fails. Can anyone show me where I have gone wrong please? Here is my code: import arcpy import os import time import sys from arcpy import env from arcpy.sa import * env.workspace = "H:/PhD/PythonLessons/Scripts/testIt" catchmentName = "CatchmentName" catchmentRaster = catchmentName + ".tif" catchmentRasterProj = catchmentName + "proj.tif" catchment100 = catchmentName + "100.tif" catchment100Fill = catchmentName + "100Fill.tif" catchment100Flow = catchmentName + "100Flow.tif" catchment100Acc = catchmentName + "100Acc.tif" catchment100PP = catchmentName + "100PP.tif" catchment100WS = catchmentName + "100WS.tif" folder = "H:/PhD/PythonLessons/Scripts/testIt" folderAdd = "H:/PhD/PythonLessons/Scripts/testIt/" outlet = "outlet.shp" projection = "C:/Program Files (x86)/ArcGIS/Desktop10.0/Coordinate Systems/Projected Coordinate Systems/National Grids/Europe/British National Grid.prj" # identify guage #create pourpoint shapefile # identify catchment outline # select appropriate testIt # have a folder with all of the little ascii testIt for a catchment. This folder will be called *catchment name* print "checking extensions" try: arcpy.CheckExtension("Spatial") == "Available" arcpy.CheckOutExtension("Spatial") print "done" except: print "piddle" print "converting the ascii files to raster files" asciiList = os.listdir(folder) for x in range(len(asciiList)-1, -1, -1): if not asciiList[x].endswith(".asc"): asciiList.pop(x) for file in asciiList: print folderAdd + file print folderAdd + file[:-4] try: arcpy.ASCIIToRaster_conversion(folderAdd+ file, folderAdd+ file[:-4] + ".tif", "INTEGER") except: print "Processing " + file + "FAILED" time.sleep(3) tifList = os.listdir(folder) print tifList for x in range(len(tifList) -1, -1, -1): if not tifList[x].endswith(".tif"): tifList.pop(x) print tifList tifString = "" for tif in tifList: tifString += tif + ";" tifString = tifString[:-1] print tifString print "mosaic to new raster" try: arcpy.MosaicToNewRaster_management(tifString, folderAdd, catchmentRaster, "#", "16_BIT_SIGNED", "#", "1" , "MEAN") except: print "Error with mosaic to new raster" time.sleep(3) print "Giving it the right coordinates" try: arcpy.DefineProjection_management(folderAdd + catchmentRaster, projection) arcpy.ProjectRaster_management(folderAdd + catchmentRaster, folderAdd + catchmentRasterProj, projection, "BILINEAR", "#", "#","#",projection) except: print "Error giving it the right coordinates" time.sleep(3) print "Changing resolution" try: outAggreg = Aggregate(catchmentRasterProj, 10, "MEDIAN", "TRUNCATE", "DATA") outAggreg.save(folderAdd + catchment100) except: print "Error changing resolution" time.sleep(3) print "Filling sinks" try: outFill = Fill(catchment100) outFill.save(folderAdd + catchment100Fill) except: print "error filling sinks" print "Calculating fow direction" time.sleep(3) try: outFlowDirection = FlowDirection(catchment100Fill, "NORMAL") outFlowDirection.save(folderAdd + catchment100Flow) except: print "Error calculating flow direction" print "Calculating flow accumulation" time.sleep(3) try: outFlowAccumulation = FlowAccumulation(catchment100Flow) outFlowAccumulation.save(folderAdd + catchment100Acc) except: print "Error calculating flow accumulation" print "Creating pour point snap" time.sleep(3) try: outSnapPour = SnapPourPoint(outlet, catchment100Acc, 5, "FID") outSnapPour.save(folderAdd + catchment100PP) except: print "Error snapping pour point" time.sleep(3) print "Creating watershed" try: outWatershed = Watershed(catchment100Flow, catchment100PP) outWatershed.save(folderAdd + catchment100WS) except: print "Error creating watershed" time.sleep(3) print "Creating files for SHETRAN prepare" try: arcpy.RasterToASCII_conversion(catchment100, folderAdd + catchmentName + "100DEM.txt") arcpy.RasterToASCII_conversion(catchment100WS, folderAdd + catchmentName + "100Mask.txt") except: print "error creating files for SHETRAN prepare" time.sleep(3) print "Done!" raw_input("close to exit") Thanks very much Appending _sa should not help with your way to import the functions, but try it anyway. Try also to call arcpy.Fill_sa(...) etc.. I think this would be also better readable. I've tried arcpy.Aggregate_sa(...) but it still doesn't work. I don't get an error message apart from my own: "Giving it the right coordinates" "Changing resolution" "Error changing resolution" "error filling sinks" "Calculating fow direction" "Error calculating flow direction" "Calculating flow accumulation" "Error calculating flow accumulation" "Creating pour point snap" "Error snapping pour point" "Creating watershed" "Error creating watershed" "Creating files for SHETRAN prepare" "error creating files for SHETRAN prepare" "Done!" Any other ideas? Thanks Do you have an arcinfo-license? aggregate needs it. The error message is: 'module' object has no attribute 'Aggregate_sa@ Thanks This didnt work (also the AttributeError :mad:): arcpy.Aggregate_sa('ws', 'Aggrega_ws2') This was ok: arcpy.sa.Aggregate("ws",5,"MAXIMUM","EXPAND","DATA") ERROR 999999: Error executing function. failed to open raster dataset Failed to execute <aggregate>. Also, I'm trying to do this as a stand alone script and not in the python window. Thanks! Dan, this changed at 10.x. The Spatial Analyst licensed functions need to be access the new python-esque way: If you have a 3D license, you can access the raster tools included in 3D using the _3d suffix. Should not matter. My guess is your raster is getting saved somewhere (scratch workspace) besides where you think it is. Your script seems to imply that your processing results are expected in "H:/PhD/PythonLessons/Scripts/testIt" but you haven't set the scratch workspace to that location. For best results when using Spatial Analyst tools its best when they are sent to the same folder: Setting the current and scratch to the same folder allows temporary grids to be renamed instead of copied - this can save a lot of time with big rasters. for my spatial analyst tools. It now gives the error 'failed to open raster dataset'. I'm not sure why it won't open it. I checked that it would open in ArcMap 10 and it does that just fine. I've also checked that the file name is right. I checked the license and that I'd imported the right modules. Also, when I run the, as well as an error it creates a new folder within the workspace called 'aggrega_catc1' but I thought that setting the scratch workspace to the env workspace would stop this? Thanks again To see if it's a path issue, make sure it exists. (If you don't specify a full path, it will look in env.workspace for it.) In the same manner, the output is also written to the current workspace, so instead of you can just specify This is a temporary raster; all tools write a temporary raster to the scratch workspace when raster tools run. As I said the point of setting the workspaces the same is so when the tool successfully runs this temp raster is renamed (instead of copied) to your output raster when you execute outraster.save. Thanks for explaining the temporary files. What I found was I was trying to create a grid and it didn't like it. So I created a filegeodatabase for my results to go into. This seems to have solved my issue. All of my original data is in GRID from arcgis 9.3 I plan to convert it, but I wanted to get the python scripts working first. The grid data format is very path sensitive - you need to be very careful to avoid long grid names or names that do not follow the very restrictive grid/coverage naming rules. On the plus side, the grid data format is usually faster to process (though this depends on the tool). I've finally resolved this issue. It turns out that the arc modules wouldn't run on the H drive that I was using so I saved everything to the C drive instead and it all ran fine. Here's the code for future reference: Thanks for the help! This seems like it should really be a bug report.
https://community.esri.com/thread/46743-stand-alone-script-fails-using-spatial-analyst-tools
CC-MAIN-2019-18
refinedweb
1,249
51.24
what is the best way to create exit a program? For example: This program uses a while loop and then checks the inputed value before it enters it again. Is there a shorter better way to do this? Should I use a character or a number? This comes up a lot in my programs and I want to use the fastest, best method. Code: //grades #include <iostream.h> int main() { int count = 0; int total = 0; int average; int grades = 0; cout << "Please enter a grade(enter -1 to quit): \n"; cin >> grades; while(grades != -1) {total += grades; count++; cout << "Please enter a grade(enter -1 to quit): \n"; cin >> grades;} average = total / count; cout << "Class average is: " << average << endl; return 0; }
http://cboard.cprogramming.com/cplusplus-programming/4714-its-all-good-printable-thread.html
CC-MAIN-2015-06
refinedweb
122
81.83
CodePlexProject Hosting for Open Source Software If I have a migration file with multiple methods that are to be executed then all the migrations are run, but the version number in Orchard_Framework_DataMigrationRecord is not correct. For example, if I have the following methods in my Migration file, when I update both columns will be added, but the version number in the Orchard_Framework_DataMigrationRecord table will be 5 rather than 6. public int UpdateFrom4() { SchemaBuilder.AlterTable("RecentBlogPostsPartRecord", table => table .AddColumn("Test2", DbType.DateTime) ); return 5; } public int UpdateFrom5() { SchemaBuilder.AlterTable("RecentBlogPostsPartRecord", table => table .AddColumn("Test3", DbType.DateTime) ); return 6; } I have debugged the code in the DataMigrationManager class and the dataMigrationRecord.Version property seems to be set to the correct value, but when the SQL update statement is executed it has the wrong version number. I'm finding it difficult to see what is going wrong because it just seems to be in the NHibenate code. This looks to me to be a bug. Anyone else seeing this? I suspect this is something to do with the NHibernate versioning. I added an additional property to the DataMigrationRecord class and associated table and that property seems to work fine. I think you are right. A column named "Version" has a pre-defined meaning for NHibernate wrt to versioning. I would open a bug about this (nice find!) public class DataMigrationRecord { public virtual int Id { get; set; } public virtual string DataMigrationClass { get; set; } public virtual int Version { get; set; } } Okay, thanks. Bug opened here Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://orchard.codeplex.com/discussions/271740
CC-MAIN-2017-51
refinedweb
286
58.18
New submission from andrea crotti <andrea.crotti.0 at gmail.com>: I am not really sure that it is a bug, but for me it's at least not the expected behaviour. In short supposing I have two namespace packages ab and ac (as seen in the tar file), this works perfectly: import sys from os import path sys.path.append(path.abspath('ab')) sys.path.append(path.abspath('ac')) from a.b import api as api_ab from a.c import api as api_ac But this doesn't: import sys from os import path sys.path.append(path.abspath('ab')) from a.b import api as api_ab sys.path.append(path.abspath('ac')) from a.c import api as api_ac And raises an ImportError from a.c import api as api_ac ImportError: No module named c Which means that if you actually append all the paths containing package resources before trying to import something it works, but if you interleave the path mangling, it won't.. Is this a bug or maybe I'm doing something wrong? ---------- assignee: tarek components: Distutils files: namespace.tar.gz messages: 153076 nosy: andrea.crotti, eric.araujo, tarek priority: normal severity: normal status: open title: namespace packages depending on order versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2 Added file: _______________________________________ Python tracker <report at bugs.python.org> <> _______________________________________
http://mail.python.org/pipermail/new-bugs-announce/2012-February/012819.html
CC-MAIN-2013-20
refinedweb
229
70.39
I did say in the main() function is where the problem is. I am getting a few errors. I altered the code some....... int main() { int getInput(int *input); int n = 0; Type: Posts; User: campermama I did say in the main() function is where the problem is. I am getting a few errors. I altered the code some....... int main() { int getInput(int *input); int n = 0; I think I have been working on this too long.....in the Main Function area I can't get this to work, I am sure it is probably something simple but I am not seeing it, are you??? Thanks!!!! ... Terve Nouneim, I am also new to programming, but I saw that you said you are Finnish, I work for a Finnish company and I have been to Finland many, many times. Just wanted to say hi. Hope you can... How about, search the array and if the value is found return the location of the value, that is the value in position 1 or 6 or ?. How do I do that? Thanks!! I think the problem was I was showing a result for f(0) and that was not part of this recursive, f(1)=2, f(2)=2, f(n)=2*f(n-1) + f(n-2) So I changed my code to just show f(1), f(2), and... Yes, I know....but I am not supposed to define that many variables, just n>=1. Any ideas? Ok, not sure how to do that. Would it be another if statement? I need a way for this recursive to end when the item is not in the list. Can you help? Thanks!! template <typename T> class List { private: T * array;//memory for array will be... This is what I have: #include <iostream.h> //f(1)=2, f(2)=2, f(n)=2*f(n-1) + f(n-2) int f(int n) { What can I replace these with? I would rather change the program to work with my compiler, is this possible? Of course I would need help! :D I am using Metrowerks Code Warrior, I had somebody helping me with this. It compiles on his compiler (not the same as mine) but I am getting like 99 errors! Can't open dos.h, and can't open... Consider the language in the alphabet {a,b} defined by the grammar: <S> = a<S>a | b<S>b | a | b What are the 5 character strings in this language. Not sure what this means? Is it: aa... Not sure if I am on the right track with the code below. This is what I need, I know I am missing stuff, just not sure how to put it in here! Write a recursive implementation, the operation should... Yes it compiles fine but after you enter the string of characters etc. it goes into an infinite loop, I don't know why? Can anybody see what is wrong with this? It gets stuck in a loop. I think I am too tired!! Thanks! #include <iostream.h> #include <string.h> //used for strlen bool InLanguage(char s[],... Does the following recurrence relation = 50, am I doing this right? S(1)=5; S(N)=S(N-1) + 5 for N>1 Would this be the correct order to insert a new node at P into a doubly linked list before the the node pointed to cur. Thanks! P->Next = Cur; P->Before = Cur->Before; P->Before->Next = P;... I am just supposed to write a recursive function, just not sure if this is "good" or not. If it was in a program would it work? I am still learning this. Does this look right? Thanks!! Void DoIt(int N) { if ( N > 0) { cout << N << " "; DoIt(N + 1); } } Is this what you are looking for? Obviously I had help with this!! #ifndef NEWSTRING_H #define NEWSTRING_H #include <list> #include <iostream> using namespace std; Still trying to figure out how to show the length in case 4: In case 5: returns 1 (true) if string2 is a substring of string1 and 0 (false). How do I do an if with a substring? Does anybody know... Case 4 and 5 weren' t there because I am trying to figure out how to do it. I have made an attempt at 4, but not sure how to show the length :confused: Case 5, I have no idea how to start that... In this code I got case, 1, 2, 3 in the switch, but I am having trouble with 4 and 5. #include <iostream> #include <list> #include <NewString.h> // NewString class definition using...
https://cboard.cprogramming.com/search.php?s=6132bab040d839a0224a880f30d97eed&searchid=2593548
CC-MAIN-2020-05
refinedweb
773
93.03
I have some python methods with instructions. For example, parameter combinations to be tested. In a count-part of my code, I use a parser for the doc-string of the function that I use so that a configuration file is no longer needed. The information I need are solely contained in the functions doc string. I wonder if this is a good and common practice in industry? update: I have a long list of experiments to do, represented by functions. For each function, I have some test cases (different for different functions). When I create the function, the test cases are defined. I put them in the doc string and parse them when I do the test. def my_function1(param1,param2) '''param1: 10, 20, 40, 60; param2: 5, 10, 20, 40;''' return something def my_function2(param1,param2,param3) '''param1: 10, 20, 40, 60; param2: 5, 10, 20, 40; param3: 100, 200''' return something . . . Edit: Here are the PEP suggestions: I am fairly confident that this is not common practice, and my instinct is that it is not good practice either. First, some example Docstrings from Python standard library modules. When I type help for combinations: Signature: combinations(iterable, r) --> combinations object Docstring: Return successive r-length combinations of elements in the iterable. combinations(range(4), 3) --> (0,1,2), (0,1,3), (0,2,3), (1,2,3) This is the help for sum: Signature: sum(sequence[, start]) -> value Docstring: Return the sum of a sequence of numbers (NOT strings) plus the value of parameter 'start' (which defaults to 0). When the sequence is empty, return start. These docstrings elaborate in English. The goal of a docstring according to Python core developers seems to be to explicitly show how to use the methods, and to clear up any doubt. I would suggest that your practice is wrong because you are mixing in different goals, different kinds of logic into your docstrings. If I were you, I would isolate and separate your goals instead of combining everything into your docstrings. I would keep the logic of your test cases, and of your configuration in separate places designated for that logic. In general in programming, I believe keeping your different goals isolated is part of what "best practices" encapsulates. I should admit, though, that from reading your question I'm not confident that I completely understand what you're asking, or what you're doing with your testing and configuration.
https://codedump.io/share/b9PhLeE4Jf6T/1/is-it-a-good-practice-to-parse-doc-string
CC-MAIN-2018-26
refinedweb
409
62.17
Tải bản đầy đủ Thạc sĩ - Cao học Kinh tế CHIẾN lược CHIẾM LĨNH THỊ PHẦN cà PHÊ của TRUNG NGUYÊNe STRATEGY TO CONTROL COFFEE MARKET OF TRUNG NGUYEN I. Introduction The world has proven that coffee has actually became an industry with total transaction value is $ 80 billion worldwide, ranking second after oil industry in terms of good value and go beyond the gold, silver and precious stones to become the invested good. The industry has expanded and contains elements of finance, trade, investment, tourism, culture, knowledge economy, ecotourism, coffee travel. In Vietnam, coffee is day by day becoming one of the most important economic sectors contributed $2 billion per year in GDP. From coffee, many companies have developed and success including Trung Nguyen. Trung Nguyen is a marvelous success of brand building in Vietnam in recent years. Within less than 20 years, from a small warehouse in Buon Ma Thuot to process coffee, Trung Nguyen has developed and presented its good in all regions of the country. Trung Nguyen has made a spectacular entry in the history of buiding brand in Vietnam. Trung Nguyen Coffee Corporation (called as: Trung Nguyen Coffee) was established in 1996 as a new coffee brand in Vietnam and has quickly built up reputation and becomes one of the most familiar coffee brands for consumers in the domestic and international markets. After more than ten years, Trung Nguyen Coffee has developed and emerged into a powerful group with six members working in major industries namely: coffee manufacturing and processing, trade tea; franchising and distribution services as well as retail. Trung Nguyen currently contains nearly 2000 employees, 3 factories and 5 branches from the north to south of Vietnam together with one joint venture company placed in Singapore. Furthermore, the company also indirectly creates jobs for more than 15,000 employees through 1000 system franchising coffee shops across the country. Trung Nguyen so far has developed many products namely: creative coffee, weasel and legendee coffee, Passiona low-caffeine coffee, and G7 instant coffee. Please see Trung Nguyen’s products in the Table as below: 1 | Page Table 1. Products of Trung Nguyen coffee Instant Coffee G7 Cappuccino Weasel Gourmet Blend Legendee Passiona A research showed that Trung Nguyen is the coffee brand obtaining the biggest consumer base in Vietnam market in the last 3 years from 2009-2011. Nearly 10 million out of 17 million Vietnamese households purchased Trung Nguyen’s Diagram 1. Market share of G7 coffee products for in-home consumption. The researched also indicated that G7 (3in1) instant coffee has leaded the Vietnamese instant coffee market with 38% market-share in total in 2011 (see the diagram above). The competitive advantages of Trung Nguyen coffee: the company is facing competitive advantage enormously by being actively control input source, manufacturing system with factories in the north and south of Vietnam which help to save costs in term of prices and quality. Declaration and positioning: During the earlier development process, Trung Nguyen Coffee is considered sugar water with flavored coffee. After years of development, Trung Nguyen coffee has always insisted his reason by giving the message and positioning “it is true with Trung Nguyen". This statement is confirmed quality is the 2 | Page leading factor that Trung Nguyen has always pursued by the coffee products have high quality , position in the market for coffee in Vietnam and around the world . What has been achieved always require Trung Nguyen to constantly move forward. To maintain the advantage as well as leading coffee brand in Vietnam, Trung Nguyen has always been awarded of strategic importance for the company. This will be driven growth path in the future. To get a proper comprehensive strategy, the company has a detailed analysis of these factors. Detailed content analysis will be presented in the following section. II. 1. Impacted Factors Live cycle: Vietnam coffee industry began to form and develop from the late 19th century by the French who make the foundation. However, over the centuries, coffee and coffee industry Vietnam has not really developed until after 1990 -2000 years after Vietnam participation and integration into the world economy. Therefore, the coffee industry is assessed as beginning development stages. Trung Nguyen Company provides this assessment based on evaluating specific data as follows: According to statistics, coffee exports in 2012 reached 1,732,156 tons, valued at $ 3,672,823,086, an increase of 37.8% in volume and 33.4% in value compared with the same period last year. In 2012, Vietnam ranks the top coffee exporters to the U.S. market, with 203,516 tones, achieving value at $ 459,616,328, an increase of 46.8% in volume and 34.7% in value compared with the same period on a year before, accounting for 12.5% of total export value. The other bigger import markets of Vietnam coffee include Germany and Spain respectively, worth $ 427,178,275 and $ 216,281,513, an increase of 28.2% and 59.3%. The average export price in 2012 is $ 2,120 / ton, down from an average price of exports in 2011 reached $ 2,191 / ton. Currently the supply of raw materials in Vietnam has a stable stage and continues to rise. This proves that the coffee industry is growing fast (see diagram). According to the data shown we can see, café area and yields are increasing steadily in the years from 3 | Page 2004 to 2014. If the new 2004 when the country began to increase acreage, production at 700 thousand tons, about 1200 thousand tons in 2010 to 2013, the number was close to 1,800 thousand tons (more than 2 times). Period 2006 - 2010 production increased but not significantly from 2010-2013 to increase the yield (see table below). This suggests that the café area and yields are still rising. Table 2 – Quantity of Vietnam coffee in phase 2005 – 2010, source: BCT However, the actual exports of coffee in the recent time do not increase or increase but no stability. The numbers of small coffee exporters in Vietnam are very crowded with over 100 businesses but they have strong competition resulting in reducing price. This situation form Trung Nguyen Coffee the ideas of thinking about the unification 4 | Page Vietnam exports so that the export value will be higher both in quality and quantity. From the thoughts that inspired Trung Nguyen to build a proper strategy to shape coffee exports Vietnam. Besides export markets, coffee consumption in the domestic market also showed some very encouraging data. Speed domestic coffee consumption is relatively high growth and stability, crop forecasts in 2010/2011 production of domestic coffee consumption will reach 12.1 % growth over the same period last year. Instant coffee market goes up According to the British Coffee Association, in 2000, the share of instant coffee consumption only accounts for 13 % of total coffee consumption and only 10 years later, this figure exceeded 40 %, the current instant coffee continues to accelerate strongly and threatened proportion of the product consumed traditional roasting coffee. Two nations with the highest rate of consumption of instant coffee are England with 81 % and Japan with 63 %, while in the traditional U.S. market, sales of instant coffee accounts for only 9 %. According to data from the International Coffee Organization (ICO), instant coffee consumption in Vietnam grew 31 % in 2010, reaching 1,583 million packs of instant coffee which accounts for 38.5 % of total consumption, roasted coffee grind up about 61.5 %. Growth of instant coffee production increased by only about 4.5 % but the growth of instant coffee consumption in Vietnam has reached double digits. Diagram 2 – Total instant coffee of Viet Nam (Ton) 5 | Page Source: MARD These data were analyzed in very detail and thereby shape the strategic of Trung Nguyen coffee for the domestic market as well as exports. This strategic orientation of Trung Nguyen Group will become Vietnam's leading coffee and coffee contributes to the development of Vietnam towards sustainability. 2. Competitors Up to the current period, the instant coffee market Vietnam is ongoing "nettle race" between three typical companies including VinaCafe (Bien Hoa Coffee - VinaCafe JSC); Nescafe (Nestle - Switzerland) and G7 (Trung Nguyen Company). So far, Trung Nguyen is still the market leader in Vietnam instant coffee but to maintain and strengthen the position may also exporting instant coffee to other countries that are extremely important targets. After the target formed, Trung Nguyen has formed strategies developing instant coffee containing spirit and style of Vietnamese people and will offer to the world. To implement this strategy, Trung Nguyen has focused on the study of national taste. This study divided into 03 main markets: the taste of Asia – expect to export to Asian countries; Taste and Europe; taste of America (North America) for soluble coffee exports to two continents. Trung Nguyen also actively building a network of international distributors - has branches in Singapore and manufacturing supply chain. When completed, this system will help companies to secure supplies as well as distribution system completed. For roasting coffee export industry, Trung Nguyen is currently competing in the 3 major competitors including Vietnam Coffee Corporation and over 100 small businesses as analyzed and company also has a strategy ( see part 1 ) . 3. Customers In term of export market, Vietnam's coffee exports mainly to the European region, official statistics in 2013 coffee exports to the region reached 568.0 thousand tons with a turnover of 1.2 billion (down 13.7% in volume, turnover decreased by 15.6% compared with 2012). There are 13 markets in the European coffee imported Vietnam coffee and there were 11 import market has fallen comparing to previous years, only 2 6 | Page markets have increased in 2013 including British market (increased by 13.1% in volume, 6.9% increase in turnover) and Russia (increased by 11.2% in volume, an increase of 13.0% in turnover). Asia is the second largest region importing Vietnam coffee in 2013, reaching 269.0 thousand tons with a turnover of 598.9 million, down 21.8 % in volume and 20.6 % of the metal value compared with 2012. Trung Nguyen has 11 regional markets of Asia import coffee from Vietnam, in which Japan and China are the two leading importers, specifically exports to Japan reached 78.1 thousand tons turnover of U.S. $ 167.6 million, to China reached 37.1 thousand tons with a turnover of 96.2 million. Notably, exports to India and Israel show signs of growth compared to 2012, exports to India increased by 3.8 % and 4.8 % of turnover, to Israel with 11.0 % growth increasing and 16.8 % of turnover. Besides, the company also targets the African market. This is a new market and potential. For the turnover, the market has accounted for over 10 % of export revenues of Trung Nguyen. It is founded that the maintenance and development of traditional markets and new markets is vital for development of Trung Nguyen coffee. Trung Nguyen's strategy is to focus research, search and maximize the market. 4. Products for replacement The current market appears many drinks that can replace coffee for example: • First, the bottled product made from chocolate contains a large amount of increased tone, stimulates the brain produces endorphins - an endogenous hormone ( often called the " hormone of happiness " ) makes people manic happy, fun and increase energy reserves and can affect drinking coffee • Second, red apples contain vitamins and fiber, are suitable to eat morning, on an empty stomach. In the list of “substitute coffee " also have oatmeal , cereal , fish, chicken, green tea ... So outside café , customers can choose the drinks to gain weight fair in each breakfast. 7 | Page Besides, there are also a lot of other bottle drinks can replace coffee. However, if you look at the fact that most customers use coffee as a habit to replace the crops that will not affect the consumption of coffee. However, these analyzes also put Trung Nguyen on new opportunities. Strategic diversification from coffee products was formed. Trung Nguyen is studying the market and will provide bottled coffee products are very handy and compete directly with existing drinks. 5. Supply resources: Supply is the coffee growing areas are mainly distributed in the Central Highlands and some other localities like Quang Tri, Binh Phuoc or Dien Bien province. Currently the Central Highlands is the largest coffee plantation where are the majority of the country's area. Other provinces also started planting coffee like Quang Tri as 1% of the total area. The new province in the north also started planting coffee but untapped. Therefore these areas are not located in the local map of Vietnam coffee plantation. Details of the coffee growing regions are shown in Figure below. 6. Competitors: Beside the domestic competitors, currently, there are some foreign investors who have been or prepared to invest in Vietnam. Therefore, they will become competitors of Trung Nguyen. 8 | Page II. PEST Analysis 1. Economic Environment According to economists, the economic environment is the entire economic factors such as employment, income, inflation; interest rates, productivity, and wealth affect the buying behavior of consumers and the organization. Overall, Vietnam’s economy in 2013 is growing stable. Production growth and the positive changes going on in the last year, however, economic growth is still low. Stable macroeconomic Environment stability: The inflation index is remained stable. The consumer price index (CPI) is slowed down in November, at 0.34 % compared to October, up 5.5 % compared with 12/ 2012 increased 5.78 % compared to the same period in 2012. Thus, inflation in 11/ 2013 is slow at the second lowest since 2003 back here (after 2009). Interest rates have fallen steadily, stabilizing the exchange rate, balance of payments improved. Ceiling deposit interest rate has been lowered from 14 % to 12 % interest rate credit fell from a year earlier, this trend is being pushed imaginative direction and flexibility of market fluctuations and take know of inflation. This is an opportunity for companies to access concessional loans of banks, maintain and expand production. The infrastructure of the economy as well as factors is affecting the purchasing power of the market. If poor infrastructure increases the costs of production and consumption of products, product price increases, which will reduce the products competitiveness in the market. Once the service distribution and promotion has been slow due to the impact on markets of infrastructure will make the product difficult to access or slow access to the company's products. Sector Environment Due to the slowing economy in most developing countries, so the situation is declining export coffee. Coffee exports of Vietnam in 2013 reached 1.3 million tons with an export turnover reached $ 2.7 billion, down 24.8 % in volume and 25.9 % in value. If compared with the previous year, 2013 was the year that coffee industry has been a significant decline in export volume and export value. 9 | Page Review within five years, exports of Vietnam's coffee growing growth in the first 4 years (from 2009 to 2012) with an average growth rate of 17.7 % / year. This suggests that the main market imported goods is increasingly popular coffee Vietnam and the number of export markets in coffee growing expanded (2008, coffee exports to Vietnam 74 market, by the end of 2013 had reached 86 markets). However, the world economic situation difficult , consumer demand for this item in the import market is significantly reduced due to the economic downturn , natural disasters diseases ( like rain ice , water shortage , rust ) that seriously affects the productivity and quality of coffee products ... leading to the country's coffee exports in 2013 we both volume decline , both in terms of metal value . 2. Political Environment Currently, Vietnam's political stability is considered. This is a very basic element to attract foreign investment and is highly appreciated. Survey locations with the most attractive investment investors Japan by the Trade Promotion Organization of Japan (JETRO) done this clearly acknowledged. And the factors attractiveness of this business is due to political stability. Policy innovation, opening with political stability, a safe living environment, security is the underlying cause that amount of capital foreign direct investment into Vietnam. In November 2013, this capital gain of $ 10 billion, an increase of 9.9 % compared to the same period last year. Export turnover increased by 24.5 % in 11 months, a growth rate much higher growth rate under high goals and many times the GDP growth rate. Unstable political situation in Vietnam has decisive significance in the economic development, employment income for workers, increasing consumer demand of society. This also impact on the creation and implementation of the strategies the Vietnamese enterprises. 3. Social Environment Vietnam's social strides bass levels on human index. In addition, information systems and communications to people are very good. Therefore Trung Nguyen needs more conditions to reach customers. 4. Technology Environment 10 | Page These factors create opportunities and challenges coexist forcing businesses to learn skills to make marketing strategies. As one of the inputs of the production process, so this is an important factor to decide the production of effective corporate structure or not. So that also affect the marketing activities of the Central Highlands are using technological factors to promote the sale of products on the market. Trung Nguyen has applied the new achievements of the equipment and machinery to produce the product has quality efficiency of both facilities. On the other hand the impact of science and technology to the advertisements and stitched level communication about the product. Science has developed to meet the demand for consumer products in both quality and quantity. Science and technology also creates new manufacturing resources effectively to help businesses reduce production time and improve production efficiency. So the challenge is constantly changing production technology. With regard to the conduct of scientific research and the application of scientific and technological progress , which is the basis to find useful ways of processing milk suit different consumer tastes . This is also one of the challenges learn to market faster through the channel advice on science and technology policy. III. Strategy and Implementation Plan Overall, Trung Nguyen is trying to diversify its products to meet the needs of enjoying coffee of all kinds of customers with emphasis on instant coffee and coffee export. Instant coffee: Trung Nguyen will have many products and different prices to meet the tastes of all classes. Instant coffee will be distributed at the department stores, supermarkets. Trung Nguyen will also link with one large distribution company to distribute products efficiently. Instant coffee products will be distributed in the city before they are available in countryside. Real research shows that consumers enjoyed kind of Cappuccino instant coffee of Trung Nguyen. According to the survey of over 1000 customers, it shows very positive reaction results. This kind of coffee had attractive, very gently scent, in a line with the people and is mainstream product. Hence, Trung Nguyen hope in the long and medium term, instant coffee of Trung Nguyen will account for 35% instant coffee market share. Instant coffee of Trung 11 | Page Nguyen desires to achieve the purpose that is to become the favorite products in this market. Coffee shops: This product confirms Trung Nguyen coffee brand, and thus it only focuses on customers with average income or more. Trung Nguyen coffee shops will have different prices with product lines such as: • Common products: including products as vitality coffee, aspirations coffee, conquest coffee. • Medium products: including products as Passion coffee filter coffee, roasted coffee seeds… • Advanced products: including products as Diamond Collection café, Legendee café, Classic Blend café … Trung Nguyen coffee shops have youthful, luxury, warm style to serve the young that like coffee, middle and upper class. Trung Nguyen not only shows customers how to enjoy coffee but also gives information about coffee types, effects and practice to make the most delicious coffee. Contents are as follows: • Trung Nguyen will introduce the healthy benefits of drinking coffee: Coffee seeds contain antioxidant minerals; using coffee in the morning is effective in increasing the metabolism, helps burn calories; coffee is also good for • cardiovascular system… Making the coffee at Trung Nguyen café shop raises the curiosity, inspiration and creates its own culture of coffee. Those actions will act on specific strategies of Trung Nguyen coffee is a supplier of high-quality coffees. Café will not only products alone but also has its own value. Analysis shows that the company's strategy has always been concretized through vision and mission. So, I would like to conclude by the company's strategic vision statement and mission of the company: Vision: To become a conglomerate promoted the rise of Vietnam’s economy, maintaining the autonomy of national economic and inspire, demonstrate a desire to explore the Great and conquer Vietnam. 12 | Page Mission: Build leading brands by bringing coffee to enjoy creative inspiration and pride in bold style Trung Nguyen Vietnamese culture. 13 | Page REFERENCE 1. Strategy and development program, Chapter 12, Marketing Management, Philip kotler, Custom Edition for University of Phoenix, Upper Saddle River, New Jersey 07458 2. Main impact of coffee, March 2013, 3. Export status of Vietnam coffee in 2013, 07/03/2014 4. Trung Nguyen and distribution system in Vietnam, 09 February 2006 5. Overview of Vietnam coffee, 22.3.13 14 | Page Tài liệu liên quan vinacafe GIÀNH LẠI THỊ PHẦN CÀ PHÊ TAN Một số lí luận cơ bản về chiến lược thâm nhập thị trường quốc tế của doanh nghiệp Xây dựng chiến lược kinh doanh xuất khẩu cà phê cho cty cà phê phước an đến năm 2015 Chiến lược phát triển thị trường xuất khẩu của Việt Nam sang các nước ASEAN Tài liệu bạn tìm kiếm đã sẵn sàng tải về (522.5 KB) - CHIẾN lược CHIẾM LĨNH THỊ PHẦN cà PHÊ của TRUNG NGUYÊNe Tải bản đầy đủ ngay × 9houz
https://text.123doc.org/document/5185769-chien-luoc-chiem-linh-thi-phan-ca-phe-cua-trung-nguyene.htm
CC-MAIN-2019-09
refinedweb
3,650
53
The QMouseDriverFactory class creates mouse drivers in Qtopia Core. More... #include <QMouseDriverFactory> The QMouseDriverFactory class creates mouse drivers in Qtopia Core. Note that this class is only available in Qtopia Core. QMouseDriverFactory is used to detect and instantiate the available mouse drivers, allowing Qtopia Core to load the preferred driver into the server application at runtime. The create() function returns a QWSMouseHandler object representing the mouse driver identified by a given key. The valid keys (i.e. the supported drivers) can be retrieved using the keys() function. Qtopia Core. See also keys(). Returns the list of valid keys, i.e. the available mouse drivers. See also create().
https://doc.qt.io/archives/qtopia4.3/qmousedriverfactory.html
CC-MAIN-2019-26
refinedweb
107
61.83
There should be a toolchain up with Result fairly soon; it would be great if we could get some early testing of this potential incompatibility to see how much of a problem it really is. [Accepted with Modifications] SE-0235: Add Result to the Standard Library There should be a toolchain up with I don’t think this is accurate. Result is a value that is often produced by a success or a failure. This distinction is subtle but important and is why I am disappointed by the change from Value / Error to Success / Failure. I think type names should focus on describing the value they represent, they should not focus on describing the process that produced the value. The mistake of Success and Failure is that it emphasizes the process rather than the value that is represented. There's nothing specific about the standard library here, but yes, Result.Result doesn't work as a way to reference a type named "Result" in a module named "Result", because name lookup prioritizes types over modules right now. EDIT: I don't think this is good behavior but someone would have to find the time to sit down and fix it. Another way to resolve the ambiguity is to use scoped imports, which are preferred over regular imports: import enum Result.Result. This is good to know, thanks for sharing! Currently, lookup prefers declarations from the current module over imported names. The generalization of that rule would be that declarations from module M are preferred over declarations from modules that M depends on. Since Swift is the unique root in the dependency tree, that would innoculate us against the specific problems of introducing a new top-level type name. Unfortunately, every module implicitly imports Swift, so that rule doesn't work here! Why not? It would mean that a Result in a module you import would always hide the Result from the standard library, because you’re importing two different modules but they have a dependency so they have a shadowing relationship, which is what we want (from a source-compatibility perspective). It’s good that everything depends on Swift in this case. Hm. I was thinking that the correct rule is "look in every import to see what it thinks 'Result' means, recursively" (and that's roughly what's implemented once you've started looking in modules), but if the correct rule is "if this name is shadowed on any path, ignore it in favor of the shadowing decl", then we do get the desirable source compat behavior. It does make it harder to refer to Swift.Result, though, and I don't like that adding an import can change the meaning of existing type names that way (rather than just adding overloads or making things ambiguous). We should probably split off discussing changes to module name lookup into a separate thread. It does make it harder to refer to Swift.Result, though, and I don't like that adding an import can change the meaning of existing type names that way (rather than just adding overloads or making things ambiguous). I think these are just costs we have to accept. We should probably split off discussing changes to module name lookup into a separate thread. That's a reasonable request. But we might have to consider changes here soon if this is a seriously compatibility problem, which it's shaping up to be. That is true in the same sense that Int is a value that is often representing a number (e.g. but might have come from C and therefore represent a boolean value). If we had wanted a semantics-free type like Optional, we'd have gone for a type like Either<Left, Right>. I think type names should focus on describing the value they represent, they should not focus on describing the process that produced the value. The mistake of Successand Failureis that it emphasizes the process rather than the value that is represented. I argued for Success and Failure because the names Value and Error describe only the type constraints on the corresponding parameter, which is redundant with what's written in the declaration of Result and a waste of potential expressive power. Type parameter and associated type and variable names should describe the role that the named entity plays in context, so that they illuminate code at their use site. Success and Failure do that much better than Value and Error, IMO. Was result() considered as a name for get()? I think that might be a more descriptive name at the point of use. I would like to see some sort of a guide on how to migrate codebase easily from use of Result<Value> with just one parameter. Maybe it is possible to use typealias or something. If this was discussed earlier it's nice to have a link. The former is better code! I'd rather see something more descriptive than value here, like barrelCount or whatever the value represents, but it's better than .success(success). Muffin-man patterns always make it harder for me to think about what code actually means, and again, represent a lost opportunity for expressiveness. Yes, you're mapping the failure value, which is an Error of some kind. In any case, the design is locked. And the capture value naming is probably a mistake on my part. But I don't think anyone should be looking to the standard library for style advice anyway. Now that this was mentioned, I personally never thought about it myself. However this feels like a missalignment if we renamed Error to Failure but kept mapError and not made it mapFailure. Can the core team clarify if it‘s intended or maybe an oversight? Now we still have a chance to fix this but this has to happen ASAP. mapError and flatMapError were methods from the Value / Error realm but since we decided differently we should align the decision everywhere and make it mapFailure / flatMapFailure. Good catch by @usr213123! Well I would disagree with that. Sure the stdlib is not the holy grail but there is a ton of good code and techniques in there to learn from. Here is just one good example where @Ben_Cohen shows how we can in the future apply techniques already used in stdlib to our own code to improve performance:
https://forums.swift.org/t/accepted-with-modifications-se-0235-add-result-to-the-standard-library/18603?page=3
CC-MAIN-2019-09
refinedweb
1,066
61.26
Interrupting urlopen in Python Overview While the Python ecosystem is replete with libraries for fetching data over the web, none of them give you an easy way to interrupt requests before the queried server has returned the response headers. As more often than not servers will only output response headers once they have the full response at hand, this does not make it possible to release resources early. Let's say I want to get a resource from a web server. This resource is not static, and takes a long time to generate - several seconds. It is possible that within those few seconds I decide I do not want the resource any more. To ease the load on the web server, I want to close the connection - I know the server will detect this and stop it's work. As is often the case, this web server doesn't output anything (including response headers) until it has the full response at hand. Unfortunately, while most Python libraries for fetching web content allow us to split the work in multiple calls (send request, read data, close request) the initial send request typically does not return until the response headers are available. This is the case for urlopen, httplib and, as far as I call tell, request Sending a request In fact the highest level library that will allow us to do this is socket. This means we are going to build the HTTP request ourselves - this is straightforward, though for more complicated cases httplib might be useful. The following method will build an HTTP GET request: from urlparse import urlparse def http_request(url): url_p = urlparse(url) if url_p.path: path = url_p.path else: path = '/' if url_p.query: path = path + '?' + url_p.query request = [ "GET {} HTTP/1.1".format(path), "Host: {}".format(url_p.netloc), "Connection: close", "User-Agent: here-be-dragrons/0.1" ] return "\r\n".join(request) + "\r\n\r\n" Note that the request must end with an empty line, and that the line separator should be "\r\n". You can add more headers there as needed, though those are sufficient. Now that we have a request string, we need to send it. Using the socket library this is straightforward: import socket def send_request(url): url_p = urlparse(url) if ':' in url_p.netloc: (host, port) = url_p.netloc.split(':') else: host = url_p.netloc port = 80 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: sock.connect((host, int(port)) sock.sendall(http_request(url)) except Exception as e: sock.close() raise e And that's it, the request has been send. To check whether the reply has arrived we can use the select library: import socket def wait_for_request(sock, test_interrupt, sleep_time=0.1) rl = [] while (len(rl) == 0): test_interrupt() rl, wl, xl = select.select([self._sock], [], [], sleep_time) test_interrupt is a function which will raise an exception should the query be interrupted. The caller should make sure to close the socket properly when this happens. Reading the result Once the data is ready, we can read it with socket.recv. This however means we would have to parse the HTTP request ourselves. Depending on which server we are contacting, this is a bit more difficult than generating the HTTP query was. Instead of parsing it ourselves, we can us httplib.HTTPResponse. While the documentation suggests this class is not instantiated directly by the user, it doesn't say we shouldn't - the class' definition is documented and, lo and behold, the first argument to that is a socket. Using this reading the server response is easy: from httplib import HTTPResponse def read_response(sock): r = HTTPResponse(sock) try: r.begin() status = r.status body = r.read() finally: r.close() # Note that this does not close the socket. return (status, body) That's all - we can now get the response status code and the response body. A complete example using the various functions we've implemented: class GiveUp(Exception): pass def test_interrupt(): if event_has_happened(): raise GiveUp() sock = send_request('') try: wait_for_request(sock, test_interrupt) (status, body) = read_response(sock) print "We got response code " + str(status) + " and body " + str(body) finally: sock.close() Conclusion Python libraries are not always the best when it come to this kind of low-level optimization but with a bit of effort it is usually possible to achieve what we want.
http://aliceh75.github.io/interrupting-urlopen-in-python
CC-MAIN-2017-30
refinedweb
715
65.42
I have a program which runs slightly slower than I would like it to run. I profiled it, and it spends almost 100% of the time in 1 subroutine which is not surprising, as this is the only subroutine which is reading text input. Below is the subroutine, and a demonstration of the size of the input. # returns the $limit largest files from the flist file, # full path to file in $name sub process_flist { my ($name, $limit) = @_; my ($nlines, $total, @lines, @size); open(my $fh, '<', $name) or die("Error opening file `$name': $!\n"); while (<$fh>) { my @f = split / /; # skip files that have a space or other whitespace # characters in their name next if @f > 10; # store file size in bytes and the full path to the file push @lines, $f[4] . '/' . $f[1]; } $nlines = scalar @lines; { # disable warnings because the array to be sorted has # the following format "12345/path/to/file" # Perl would complain this is not a number # but the <=> comparison operator will handle such # input properly # this is needed so the files can be sorted # with a single pass through # the flist file no warnings 'numeric'; $total = sum(@lines); $limit = min($limit, $nlines); @lines = (sort {$b <=> $a} @lines)[0 .. ($limit - 1)]; } # returns the number of files, their cumulative size, # and the $limit largest files return ($nlines, $total, @lines); } [download] find /tgt -type f -name input | xargs wc -l 197898 .../input 213267 .../input 240331 .../input 194063 .../input 191862 .../input 179495 .../input 218041 .../input 1434957 total [download] 51 opt/src/t.tar 100444 1247464676 290119680 283320 NA 1 0xbe2d 0x4000 +0006 [download] The program runs for around 40 seconds with this input - 7 input files, but with 100 input files of similar size it runs for around 25 minutes. I have 2 pictures of the profiler output - Can the runtime be reduced ? I am not proficient in interpreting the output of the profiler, so I can't really figure out whether it can be improved, or it is simply so I/O intensive that not much can be done. The input files represent a list of files backed up from a particular client, and they are not ordered in any meaningful way. Below code is apparently faster (at the cost of some readability) than split/if ... while (<$fh>) { next unless m/^[0-9]+ ([^ ]+) [0-9]+ [0-9]+ ([0-9]+)/; push @lines, $2 . '/' . $1; } ... [download] What happens if you return an array reference instead of an array at the end of this function? (Depending on how large $limit is, you might cause Perl to copy a lot of data.) What happens if, instead of reading the whole file into an array and then sorting it, you keep track of the largest n lines you've seen and read the file line by line? Improve your skills with Modern Perl: the free book. The latter suggestion is more promising. Looking at the timings, the bulk of the time is in the split (20.4 s) and the push @lines (16.2s) Ah, I didn't see there was a second image. Good catch.] I'm clutching at straws here, but if a lot of the filenames contain multiple whitespace characters, then you might get a very slight improvement doing this: my @f = split / /, $_, 11; # skip files that have a space or other whitespace # characters in their name next if @f > 10; [download] The third parameter to split tells it not to keep looking for split points once it's found.
http://www.perlmonks.org/?node_id=1001087
CC-MAIN-2016-30
refinedweb
581
72.39
Ajax and the Spring Framework with TIBCO General Interface By Brian Walsh 01 Jul 2006 | TheServerSide.com Introduction Ajax. There are at least three separate tracks to consider: Communications and messaging, user interface components, and client side scripting. Since in the Ajax world the server no longer sends down html to the browser, your developers need to agree on a message format. The user's expectations of a dynamic UI are high. They want a desktop experience and Web simplicity. You will need to develop or obtain components to meet many requirements: Legacy integration, micro-content, predictive fetch, drill down, visual effects, specialized and generic UI widgets. Finally, your developers need to integrate all of the above and inject your organization's value add and business rules. You can start by downloading random chunks of JavaScript and integrating them with the browser's XMLHttpRequest object using Notepad or vi as the main productivity tools. Certainly five years ago this was the case. Some organizations produced great work; others produced un-maintainable hodgepodges unreadable to all but the original authors. Where GI fits in Alternatively, you can take advantage of mainstream adoption of Ajax by leveraging others' work. TIBCO Software's Ajax tools, known as TIBCO General Interface and "GI" for short, provide a solidly engineered set of JavaScript libraries to power Ajax solutions. In addition the GI libraries also power the visual tools TIBCO provides for rapidly authoring these solutions. GI's individual JavaScript class objects are integrated into a unified framework that executes at the client, not the server, using MVC principles to generate and update the GUI. Accordingly, DHTML and JavaScript generation occurs using the client CPU, not the server CPU. The Ajax framework is comprised of a well thought-out class structure that provides an intelligent abstraction of the browser event model, a wide variety of UI components, a client-side data cache and a service architecture for messaging. Solution Objectives Below we will examine in detail how GI MVC running in the client works with Spring MVC, a leading Java server framework. You will see how GI can extend and coexist with your Spring JSP investment. Before we jump into it lets review the technical requirements of this use case.. Spring Spring's MVC implementation The Spring framework () includes an elegant MVC (model-view-controller) module for building Web applications. Spring is fairly unique in its ability to support plugins at all levels in its stack, from database access to transaction control to MVC layer. In addition to standard JSP and JSTL, many different view technologies have been used with Spring (Struts, Velocity, Freemarker, XSLT, Web Flow,etc.). You can choose which MVC layer to use. Furthermore you can choose which rendering engine to use within the MVC layer. The Spring framework supports all these views equally, without preference, so customizing the framework to support a new one is a common use-case. Simplified sequence diagram of Spring MVC Lets get started Below we will show you how to get integrate GI with Spring's MVC. Before we integrate with GI, we need to verify Spring in a 'stock' MVC/JSP environment. See the spring tutorial Spring MVC step by step. It's important that you understand this example because we will use this specific application as a basis for GI integration. Our starting points are the latest and greatest versions of Spring, Xstream and GI. Our use case requires that we display a price list of products and modify the prices by entering a validated percentage the system will apply to all prices. Then, the price system will validate the percentage input and display error messages if the percentage is invalid. Spring setup Customizing build for your environment Review the resources section below to see what components you will need to install on your system. At a minimum you will need Java (1.4.2 or higher), Ant (1.6.2), Spring (1.2.8) and Tomcat (5.5.x) and your favorite IDE. You'll also need to customize the build.properties and log4.properties files for your environment. See the Spring MVC step by step tutorial and ensure you can view the JSP pages. Spring Architecture Spring's View layer Spring configuration in detail Take a look at the application context defined at src/main/Webapp/Web-INF/gi2spring-servlet.xml. Here we can see the two controllers established for the two actions (list products and increase products). Both have the product service (productManager) passed to them. The price increase controller also has a form and a validator associated with it. success view and form view values configurable as opposed to hardcoded. This is a best practice for Spring controllers in any event. It will become important shortly as we configure Spring to interoperate with GI . Click for full-size image One last configuration to examine is the view resolver. Click for full-size image The view resolver is a plugin that tells Spring how to render the output from a controller. Here the view properties returned from the controller ("hello" and "priceincrease" ) get mapped to actual JSPs ("/Web-INF/jsp/hello.jsp" and "/Web-INF/jsp/priceincrease.jsp") . The view class is the component that in turn actually takes the output from the controller and renders it. Configuring support for XML output Since GI is a view technology, it makes sense to start our GI integration here at the view resolver. Any Ajax implementation makes certain demands of the server. A key pre-requisite is a structured message. GI, like most Ajax libraries, is highly optimized around this concept (though GI also provides a less optimal means to consume and display server generated chunks of HTML as well). Therefore we need Spring to produce a simple, consistent XML payload. Spring makes our life easy in this regard. Spring can chain view resolvers, passing the requested view to a series of resolvers until it one responds with a view that can render the model. Our first change will be to add a XMLViewResolver. The XMLViewResolver is a Spring provided class that reads a configuration file of view names that we want rendered into XML. Click for full-size image The views.xml configures our view implementation, a single class that renders xml. Our system now looks like this ... Create XML from model using XStream It doesn't take much code to create an XML document. Click for full-size image This xml structure maintains the basic contract between the controller and view, a simple map of arbitrary objects. The client application would use the XPath "/map/entry/*[position()=1]" to access the map's set of keys. Similarly the XPath /map/entry/*[position()=2] provides the value set. The same contract applies to exceptions. Exceptions are simply entries in the map using exception class name as the key. Click for full-size image Now we have extended the system to incorporate our new view. Controller layer Now that we have two different rendering technologies in the view layer we need to indicate which one to use. One way to do this would be to define a request parameter or session attribute that would be maintained at each layer. This would require changes to our existing controllers and creates a contract with the client. A requirement like this scattered layers and platforms can introduce errors and overhead. A better alternative is to simply reconfigure the controllers creating new instances of each desired controller class, each configured with the new GI view name. Our final configuration change completes the GI enabled Spring application. Our controller layer now supports two different paths, one mapping for GI in addition to the original JSP. Click for full-size image GI Setup You will need to download the community edition of the GI toolkit () as a pre-requisite. This will provide both the run time support for our application's client pages and a browser based development IDE. JSX directories "JSX" is GI's arbitrary namespace convention. It stands for JavaScript and XML, the key technologies GI uses. You'll see it used throughout GI solutions. To get GI optimally set up, put GI's JSX and JSXAPPS directories at the root of the Webapp directory. The JSX directory is a simple copy and paste of the /TIBCO/gi/3.1/JSX directory. The JSXAPPS directory contains a single subdirectory with your application in it. This directory convention is a requirement of GI. For additional productivity during development, you may wish to copy the GI Builder directory into the Webapp directory. Simply copy the doc, GI_Builder directories and GI_Builder.html from /TIBCO/gi/3.1 to the root of your Webapp directory. You will not want to deploy the GI Builder assets to your production Webserver. However, having them in this location allows you to edit GI artifacts in place without a constant copy and merge cycle from the TIBCO installation directories to your workspace. Your directory structure should look like this: GI Development Ajax UI Design Requirements Our first step in design is to sketch out our UI design. We start from the existing legacy JSP pages as a starting point. Since a major goal of Ajax is to provide multiple views without refreshing the entire page, we need to select a motif or theme to convey this to the user. A fairly conservative choice is a tabbed pane displayed in a standard header/body/footer layout. The table of prices we would like displayed in a sortable list. One additional requirement is to display a manageable, friendly error details tab. GI user application architecture Below is simplified hybrid diagram of our client application (showing only the classes, methods and properties we interact with). It is important to note that most of the files below are generated from GI Builder. The output of GI Builder is mostly XML models of GUI definitions and client-server communication processes. These models, served as .xml files from any HTTP server, are interpreted at runtime by the GI Framework so as to generate instances of GI JavaScript class objects on the client. Thus the developer's job is mainly concentrated on visually assembling GUI components then linking and orchestrating the services. One does get into JavaScript programming using the JavaScript APIs to the GI class objects for purposes of implementing client-side behaviors and interaction logic. Value adds on the client include parsing the Spring exception message and marshalling form attributes. Click for full-size image Message Properties / JSS The first part of building any UI layer is to define a configurable message store to enable configurable display strings external from the codebase. GI is no exception in this regard. JSXAPPS/gi2spring/jss/messages.jss defines a level of indirection for messages. JSS is short for JavaScript Style Sheets. This is a convention unique to GI that encapsulates concepts of CSS for dynamic styling, but extend the concept to any property of any object - not just the visual ones. In the same way a jsp author would code " Automated creation of JSS For convenience we developed a command line ant task tool to perform this conversion look for Properties2Jss.java under srctoolsjava. After applying Properties2Jss this becomes: Mapping Rule Service Next step in construction is to handle the messages created by the XMLView to and from the server. GI provides a high level service (jsx3.net.Service) for you to extend. The GI Service has several responsibilities: - Marshalling UI screen component content to a request message - Sending that request to the server - Un-marshalling the response message to UI components - Calling user provided event handlers (success, error, timeout) The Service interprets a set of message transformation and object binding rules stored in a rules/*.rule file. These rule files are created with the XML Mapping Utility, a rule editor within GI Builder. Rules files enable message formats, urls, data, and UI mappings to change independently of the JavaScript controller that orchestrates them. In order to use the rule editor we save a copy of the server output in JSXAPPS/gi2spring/xml as displayprices.xml. Next, we'll need to create a XML mapping rule for each of the three flows coming from the server (price list, price increase and error). Creating the Mapping Rule Our application is not a SOAP Web service so we choose XML/XHTML/Schema rule type. Our Spring process will be invoked using a GET action with a standard URL. Therefore there is no outbound document we'll pass to Spring. Our inbound document is pointed at our sample document. Placing the Mapper Log into trace level gives us additional insight into the process. Click for full-size image Parsing the document produces: Click for full-size image Mapping the response to the List component Of particular importance in this document is the list of products. We will define a rule to transform this document from the response format to a CDF format suitable for the jsx3.gui.List. CDF stands for Common Data Format. CDF is a GI convention for normalized client-side data so that multiple GUI components can provide views into single sets of data (e.g. the same data can be viewed as a List, a Chart, and a Menu, etc... all at the same time just by binding those GUI components to the CDF object). The List infers a flat, non-hierarchical set of nodes. We do that by selecting one of the Product Nodes and defining a CDF rule. Let's create one in the XML Mapping Utility rule editor. For the list node we specify that the rule should create a new CDF document and place it in GI's data cache with a name of "priceList". This key is important as we have specified the list gui component with the same key. Click for full-size image Click for full-size image Next we define the rows in the document. Finally we define the attributes on the record. Note the attribute names are the same as the path property in the list column. Click for full-size image Specifying the server URL Finally we add the endpoint and method to the rule set. The endpoint "hello_gi.htm" was configured in the Spring "urlMapping" bean above. JavaScript integration Now we are ready to orchestrate the mapping rule with our application. First save the mapping rule file in /rules then with the Operation (Transaction) node selected, press the Generate button in the XML Mapping Utility. This produces several JavaScript functions for you. Paste the resulting JavaScript into your js/logic.js file. Modify the service.onSuccess code block as follows: service.onSuccess = function(objEvent) { //update status setStatus("The service call was successful."); // repaint fields whose data was updated by the rule gi2spring.getJSXByName('productList').repaint() ; }; Here we specify what we want the application to do when the response is returned. In this case we want to update our status (footer area) and repaint the product list. Validating and marshalling server request We take a hybrid approach to sending our price increase request. At this time the jsx3.net.Service class does not support mapping to legacy forms. We will need to validate the user's input and create our form. The intersection between client side and server validation is fairly arbitrary. At the end of the day you will need to support both. A common sense approach is to implement type checking on the client and leave business rules on the server. This is the approach we have taken here. However, your situation may differ and you may elect to implement rules in the client as well. GI supports both. // validate current input ... var jsxPercentage = gi2spring.getJSXByName('percentage') if (! jsxPercentage.doValidate() ) { // find the errors in our .jss dynamic properties var message = jsxPercentage.getServer() .getDynamicProperty("typeMismatch.percentage" , jsxPercentage.getValue() ) ; // show the error displayError( 'percentage', message) ; return ; // early return } Within the priceIncrease service we leverage properties defined in the service when we create our form. // create a form var form = new jsx3.net.Form(method,endpointURL) ; // form.server = objService.getServer() ; form.setField("percentage",jsxPercentage.getValue()) ; Handling server validation errors For each response we get from the server we interrogate it for errors. Since we have modeled our UI jsxnames and jss properties after the server, it is a simple matter of unmarshalling the server's response and returning a collection of error messages complete with resolved arguments. service.onSuccess = function(objEvent) { //extract response from event var objHTTP = objEvent.target; var doc = objHTTP.getResponseXML() ; var errorMessages = extractErrorMessages(doc,objHTTP.server); if ( errorMessages.length == 0 ) { gi2spring.getJSXByName('tabHome').doShow() ; setStatus("The priceincreaseService call was successful."); gi2spring .getJSXByName('tabErrors') .setVisibility(jsx3.gui.Block.VISIBILITYHIDDEN,true) ; // call the same service as the home tab eg.displayPricesService.processInbound(doc) ; } else { // update [field]Error for (var field in errorMessages) { // show the error displayError( field, errorMessages[field]) ; } . . . Testing A key point to this architecture is that all of the model, business logic and controller code in the server remains unaltered. Existing JUnit suites remain in place a can be developed independently of GI. Functional testing of the look and feel of the GI screens remain a province of WinRunner and other client side tools. However, TIBCO has already performed the testing on GI's class libraries for multiple browser versions, security settings, plugins etc. This greatly reduces the number and nature of your testing. Conclusion It took much more time to document this application than it did to write it. The following table illustrates the alignment of key client and server components. We were able to develop this application by writing exactly one server side class. The rest was accomplished via configuration changes to the Spring application context. On the client side we had slightly more work. We generated three mapping rules, developed one method to resolve Spring exceptions to client side messages and wrote some orchestrating glue code. In the future even more of the process can be automated as GI adds additional features including Form support into the mapping tool. Resources Minimum requirements - Java homepage: - Ant homepage: - Spring homepage: - Tomcat homepage: - Xstream homepage: Java Development environment - Eclipse homepage: Other Resources - Spring MVC Tutorial: - TIBCO General Interface Developer Community: - XStream Tutorial: Source - Project zip: AjaxandSpring.zip About the Author Brian Walsh is the founder of bwalsh.com, a Portland, Oregon consulting firm specializing in Internet and network-enabled product strategies and development. His areas of expertise include enterprise architecture, technical evaluations, infrastructure, software engineering and database design. Walsh's recent clients belong to a wide variety of industry segments; retail banking, insurance to telecos and network management firms. Always enjoying the hands-on approach, he divides his time between policy issues and technical challenges.
http://www.theserverside.com/news/1364355/Ajax-and-the-Spring-Framework-with-TIBCO-General-Interface
CC-MAIN-2015-32
refinedweb
3,124
56.55
am. 2016年12月02日50分35秒 I would be a bit reluctant to use nested classes here. What if you created an abstract base class for a "multimedia driver" to handle the back-end stuff (workhorse), and a separate class for the front-end work? The front-end class could take a pointer/reference to an implemented driver class (for the appropriate media type and situation) and perform the abstract operations on the workhorse structure. My philosophy would be to go ahead and make both structures accessible to the client in a polished way, just under the assumption they would be used in tandem. I would reference something like a QTextDocument in Qt. You provide a direct interface to the bare metal data handling, but pass the authority along to an object like a QTextEdit to do the manipulation. 2016年12月02日50分35秒 You would use a nested class to create a (small) helper class that's required to implement the main class. Or for example, to define an interface (a class with abstract methods). In this case, the main disadvantage of nested classes is that this makes it harder to re-use them. Perhaps you'd like to use your VideoDecoder class in another project. If you make it a nested class of VideoPlayer, you can't do this in an elegant way. Instead, put the other classes in separate .h/.cpp files, which you can then use in your VideoPlayer class. The client of VideoPlayer now only needs to include the file that declares VideoPlayer, and still doesn't need to know about how you implemented it. 2016年12月02日50分35秒 One way of deciding whether or not to use nested classes is to think whether or not this class plays a supporting role or it's own part. If it exists solely for the purpose of helping another class then I generally make it a nested class. There are a whole load of caveats to that, some of which seem contradictory but it all comes down to experience and gut-feeling. 2016年12月02日50分35秒 Well, if you use pointers to your workhorse classes in your Interface class and don't expose them as parameters or return types in your interface methods, you will not need to include the definitions for those work horses in your interface header file (you just forward declare them instead). That way, users of your interface will not need to know about the classes in the background. You definitely don't need to nest classes for this. In fact, separate class files will actually make your code a lot more readable and easier to manage as your project grows. it will also help you later on if you need to subclass (say for different content/codec types). Here's more information on the PIMPL pattern (section 3.1.1). 2016年12月02日50分35秒 sounds like a case where you could use the strategy pattern 2016年12月02日50分35秒 Sometimes it's appropriate to hide the implementation classes from the user -- in these cases it's better to put them in an foo_internal.h than inside the public class definition. That way, readers of your foo.h will not see what you'd prefer they not be troubled with, but you can still write tests against each of the concrete implementations of your interface. 2016年12月02日50分35秒 We hit an issue with a semi-old Sun C++ compiler and visibility of nested classes which behavior changed in the standard. This is not a reason to not do your nested class, of course, just something to be aware of if you plan on compiling your software on lots of platforms including old compilers. 2016年12月02日50分35秒 You should use an inner class only when you cannot implement it as a separate class using the would-be outer class' public interface. Inner classes increase the size, complexity, and responsibility of a class so they should be used sparingly. Your encoder/decoder class sounds like it better fits the Strategy Pattern 2016年12月02日50分35秒 One reason to avoid nested classes is if you ever intend to wrap the code with swig () for use with other languages. Swig currently has problems with nested classes, so interfacing with libraries that expose any nested classes becomes a real pain. 2016年12月02日50分35秒 Another thing to keep in mind is whether you ever envision different implementations of your work functions (such as decoding and encoding). In that case, you would definitely want an abstract base class with different concrete classes which implement the functions. It would not really be appropriate to nest a separate subclass for each type of implementation. 2016年12月02日50分35秒
http://www.91r.net/ask/332.html
CC-MAIN-2016-50
refinedweb
754
59.43
19 April 2007 12:26 [Source: ICIS news] MUMBAI (ICIS news)--Bank of America (BoA) on Thursday said project timelines for all ethylene crackers point to a cyclical decline in the market between 2009 and 2011, based on aggregate growth in nameplate capacity of 24%. The brokerage said that a supply-driven downturn is likely to occur despite ongoing project delays in ?xml:namespace> BoA expects global ethylene demand growth of 4.2% to outpace capacity growth of 3.4% in 2007, adding it expects an increase of $0.02/lb in the April US ethylene contract, following a $0.015 hike in March that was the first bump upwards in seven months. The brokerage said it remains cautious on ethylene names like Nova Chemicals, Lyond
http://www.icis.com/Articles/2007/04/19/9021851/boa-expects-200911-ethylene-downturn.html
CC-MAIN-2014-35
refinedweb
126
65.12
? Check the documentation for nn.LSTM and pack_padded_sequence() / pad_packed_sequence() You don't need that for loop for Thank you for your reply. I got to work PackedSequences: def forward(self, input, hidden, lengths): embeddings = self.encoder(input) packed = pack_padded_sequence(embeddings, lengths, batch_first=True) output, hidden = self.rnn(packed, hidden) output, _ = pad_packed_sequence(output, batch_first=True) Now, I am confused how it is possible to apply linear decoder to only non-padded elements and the feed them to the loss function? Is there a "pytorch" proper way of doing it or the masking/padding is mandatory? you can derive a mask from the result and use it to mask both the result and the loss (if you use the option for not averaging it) Thx a lot for your help! I have another question - how it is possible to make a custom RNN compatible with packedsequence? Custom RNNs cant be made compatible with packedsequence without a significant amount of code. See the inbuilt RNN implementation for example: The easiest way to make a custom RNN compatible with variable-length sequences is to do what this repo does -- but that won't be compatible with packedsequence so it won't be a drop-in replacement for nn.LSTM. The packedsequence approach is fairly specific to the implementation in CUDNN. To be clear, when you say "easiest way to make a custom RNN compatible with variable-length sequences is to do what this repo does" do you mean this part of the code, where he multiplies any output outside of the time-limit (time < length) by zero? I've copied the relevant bit of code below: time < length mask = (time < length).float().unsqueeze(1).expand_as(h_next) h_next = h_next*mask + hx[0]*(1 - mask) c_next = c_next*mask + hx[1]*(1 - mask) To make sure I'm understanding the RNN PackedSequence code correctly, is this the code you're referring to? From what I understand, this code is doing the dynamic batching algorithm, proposed in this post? PackedSequence
https://discuss.pytorch.org/t/batch-processing-with-variable-length-sequences/3150
CC-MAIN-2017-47
refinedweb
333
51.28
I had a request to retrieve RSS data using SAP Data Hub and store this in SAP Vora. There are many ways to do achieve this, here’s how I did it. Data Hub Pipeline - Docker with Beautiful Soup 4 & Pandas - Python Operator using Beautiful Soup 4 - Vora Avo Ingestor - Vora Disk Table Python is great for scraping RSS feeds, we can wrap our code in a custom operator and then associate that with a suitable docker image that contains the required libraries. Create a Docker Image First we need to create a docker that contains the required python libraries, and associate this with some appropriate tags that we will link to our operator # Use an official Python 3.6 image as a parent image FROM python:3.6.4-slim-stretch # Install python libraries RUN pip install requests RUN pip install pandas RUN pip install beautifulsoup4 RUN pip install lxml Custom SAP Data Hub Python Operator I have tested the operator with various RSS feeds and it appears to be reliable. import requests import pandas as pd from bs4 import * url = "" resp = requests.get(url) soup = BeautifulSoup(resp.content, features="xml") items = soup.findAll('item') news_items = [] for each_item in items: news_item = {} news_item['RSS_TITLE'] = each_item.title.text news_item['RSS_DESC'] = each_item.description.text news_item['RSS_LINK'] = each_item.link.text news_item['RSS_DATE'] = each_item.pubDate.text news_items.append(news_item) # Use a Pandas Dataframe to pass as CSV df = pd.DataFrame(news_items) df = df.to_csv(index=False, header=True, sep=";") # Create Data Hub Message attr = dict() attr["message.commit.token"] = "stop-token" messageout = api.Message(body=df, attributes=attr) api.send("outmsg", messageout) If we connect this to the WireTap component we can quickly see that data is being retrieved and structured as required. Vora Avro Ingestor Using the Vora Avro Ingestor is a great way to receive structured information into Vora. I needed to use fixed length fields below, this has the advantage of working with HANA Smart Data Access (SDA). { "name": "RSS_FEED", "type": "record", "fields": [ { "name": "RSS_TITLE", "type": "fixed", "size": 128 }, { "name": "RSS_DESC", "type": "fixed", "size": 2500 }, { "name": "RSS_LINK", "type": "fixed", "size": 128 }, { "name": "RSS_DATE", "type": "fixed", "size": 16 } ] } For completeness I have captured the properties of the Vora Avro Ingestor, and highlighted the fields that I changed. Executing this pipeline will now retrieve the RSS data amd automatically create the table within SAP Vora, we can easily verify the table has been created with the SAP Vora Tools or the new Metadata Explorer. The Data Preview shows us what is now stored in the SAP Vora disk engine.
https://blogs.sap.com/2018/11/09/scraping-rss-feeds-with-sap-data-hub/
CC-MAIN-2018-47
refinedweb
426
55.34
Introduction Baetle stands for Bug And Enhancement Tracking LanguagE. It is an ontology that describes the information kept in BugDatabases such as Bugzilla, Jira and others do. The aim of this is to enable a number of things: - SPARQL end points on a bug database that can be queried (see SPARQLexamples ) - Bugs in one open source project that refer to bugs in other open source projects, and that specify their dependency on these - A format for bug databases to exchange their bug data ( see N3examples ) - Relating bugs to software artifacts so that these issues can be tracked - others? See DesignQuestions and UseCases Licence An ontology such as this is of no use if it is not done communally, fully openly and without any desire to collect royalties of any sort from anyone. This is a language to help diverse groups communicate. As such it should belong to everyone. Currently the license is new BSD. A CC license seems to be very common with Ontology developers. Probably it should be both. A CC license for the ontology and a BSD license for the testing code. Participating I have set up a mailing list for discussion of the ontology. Writing a good ontology can, surprisingly enough, be a lot of work, at least more work than is feasible for one person not working on it full time. A lot of the work comes from having to understand the different needs of different people in the community, writing out test cases or mappers to different databases, and working well with OtherOntologies. So volunteers helpers are welcome. I have given everyone on the mailing list with a gmail account access to the wiki. Ask me if you would like a gmail account. History This idea has been developing for some time. Henry Story (bblfish) first outlined the advantages such an ontology can have in Google Video introduces the semantic web. The first details of this ontology were written out in the blog entry Baetle: Bug And Enhancement Tracking LanguagE Details Current thinking (version 0.001) is that the ontology should look something like this The namespace used in the above diagram correspond to the followng ontologies: - wf: WorkFlowOntology - awol: Atom Owl Ontology - foaf: Friend Of a Friend - skos: Simple Knowledge Organisation Systems Ontology - doap: Description of a Project - sioc: Semantically Interlinked Online Communities - dct: known as dcterms, or Dublin Core Terms
http://code.google.com/p/baetle/
crawl-002
refinedweb
397
51.38
Junit.) Consider the following little Java test class import junit.framework.*; import java.util.*; public class Tester extends TestCase { public Tester(String name) {super(name);} private List list = new ArrayList(); public void testFirst() { list.add("one"); assertEquals(1, list.size()); } public void testSecond() { assertEquals(0, list.size()); } } Some people may not realize this, but both tests pass - and will pass in whichever order they are run. This is the case because to run this JUnit creates two instances of Tester, one for each testXXX method. The list field is thus freshly initialized for test method run. Now some people think this is a bug in JUnit, but it isn't - it's a conscious design decision. (For more on this kind of thing watch out for Kent's new book.) The basic design of JUnit has its origins in a testing framework that Kent Beck built in Smalltalk. (Actually to call it a framework was a bit of a misnomer - Kent never shipped it out as a framework. He preferred people to build it themselves since it would only take an hour to two - that way they wouldn't be afraid to change it when they wanted something different.) One of the key principles in JUnit is that of isolation - that is no test should ever do anything that would cause other tests to fail. Isolation provides several advantages. - Any combination of tests can be run in any order with the same results. - You never have a situation where you're trying to figure out why one test failed and the cause is due to the way another test is written. - If one test fails, you don't have to worry about it leaving debris that will cause other tests to fail. That helps prevent cascading errors that hides the real bug. Now JUnit provides other mechanisms that support isolation - in particular the setUp and tearDown methods that are run at the beginning and end of each test method. To use this for my simple example you do this. public void setUp() { list = new ArrayList(); } Most of the time you don't need to use tearDown since the setUp can do any reinitialization that you need. You could isolate your test methods by having all the state be in local variables and not use fields at all. However this would mean duplicating your setUp code in every test - and you know how much I despise duplication. Critics of the JUnit approach have argued that since you have setUp and tearDown you don't need a fresh object each time. You can just make sure you reinitialize all your fields in those methods. Fans of the JUnit approach argue that this may be true, but many people initialize in fields, and you might as well provide this greater degree of isolation. After all an important part of framework design is to make it easy to the right thing (isolation) and hard (but not impossible) to do things that cause problems. After all what's the cost of doing it? The main argument about the cost of the JUnit approach is based on the extra objects that are created, both the JUnit test cases and all the other objects created in the setup and field initializers. Most of the time I think this argument is bogus. There's a lot of fear about creating lots of objects, but most of the time it isn't justified - it's based on an outdated mental model of how object allocation and collection work. Certainly there are environments where object creation could be an issue and Java was one them in its early days. However modern Java can create objects with virtually no overhead, it's no longer an issue. (It wasn't in Smalltalk for longer, which is why Kent and Erich didn't worry about it.) So most of the time just don't worry about creating objects. That said, mostly doesn't mean alwaysly. One good example of an object you don't want to create frequently is a database connection. This does make sense to share, but sharing across all the test methods in one test case class isn't enough - you'll want to share it across much more than that. A cheap and nasty way to do this is with static variables. Generally it's wise to shy away from statics, but often they're fine in the context of a test run - although I still prefer to shun them. JUnit actually provides a very flexible mechanism for sharing test fixture objects - the TestSetup decorator. This allows you to setup some common state for any test suite, which gives you a lot of more flexibility about sharing state across groups of tests - much more so than just sharing across the methods in a single test case class. Perhaps the biggest problem with TestSetup is that finding information on it is so hard that that I almost expected to see "beware of the leopard" in the documentation. And there is a leopard around - if you use TestSetup you are breaking isolation and broken isolation tends to lead to awkward to find bugs. Don't use it unless you really, really need it. (But if you do this forum thread gives you some hints on using it, as does J.B. Rainsberger's new book ) .) A second objection to the JUnit approach is that it isn't intuitive - in that the mechanism it uses to pull this off is tricky to understand. I sympathize with this, the Pluggable Selector pattern isn't well known, and a design style that uses unfamiliar patterns is often uncomfortable. On the whole I like the JUnit approach because I think the isolation and ease of test writing outweighs the esoteric implementation. But good people disagree with me. Cedric Beust's TestNG doesn't do this, perhaps more surprisingly the popular NUnit implementation doesn't do it (although Jim now regrets that decision). The following NUnit test causes a failure. [TestFixture] public class ServerTester { private IList list = new ArrayList(); [Test] public void first() { list.Add(1); Assert.AreEqual(1, list.Count); } [Test] public void second() { Assert.AreEqual(0, list.Count); } } If you're using a framework that works in this style, I strongly recommend you initialize all your instance variables in a setup method. That way you'll keep your tests isolated, and avoid some debug induced hair removal. I don't happen to agree with reusing the test case instance - but I don't think those that made that decision have a single-digit IQ, have some complex financial killing in mind, or are embarking on some strange behavior with their lower torsos. They called the design trade-off differently - and I think life is better when we can respectfully disagree over the fluid nature of software design.
http://martinfowler.com/bliki/JunitNewInstance.html
CC-MAIN-2013-48
refinedweb
1,143
69.62
Important: Please read the Qt Code of Conduct - Qt creator does not get a good build config What's wrong with QtCreator?? Can't it just get a path and work on it? I have just reinstalled my Windows 7 64-bit OS a week ago, so I had to reinstall QtCreator. I used the online installer and during the install it all goes fine, but then comes troubles. When I tried to configure a Qt version and compiler it would never get it the right way. I'm trying to use a mingw32 and mingw64 Qt build. The first time I set up the compilers, it got them correctly, but when I set the Qt version it would say "no qmlscene installed" and "no qmlview installed". And I was like "what?, they're right there in their folders!!!!!!!!!!!". OK then, the installation might had some issues that I wasn't aware of, so going to reinstall QtCreator. Uninstalling all Qt things, deleting remaining files and reinstalling (again 1) QtCreator. Now the good news is: Qt versions are correctly recognized but when I open the "Tools > Options" window QtCreator complains about a missing DLL, libwinpthread-1.dll. I thought it was quite not a big deal since the rest of QtCreator was working, but when I tried to compile a well known to work project that I had Qt complained with "Unknown module(s) in QT: gui core". Again, "what??? they're right there in their folders, just where they're supposed to be!!!!!!!!!!!". Dammit, another reinstall is going to happen. So, uninstalling again and deleting configuration files and anything remaining of Qt, and reinstalling QtCreator (again 2). Another installation gone without any issues during the install process, but troubles still won't go away. QtCreator no longer complies about the missing libwinpthread-1.dll, but now it wouldn't recognize not any Qt version. I have unpacked the original mingw32 and mingw64 builds and I run the qtbinpatcher.exe included, it goes fine, but when I add the Qt version in QtCreator it would say "qmake does not exist or it's not an executable", for both mingw32 and 64 builds. This time the issue is even more stranger since I manually ran a cmd prompt session and run "C:\Qt\5.1.1\mingw32\bin\qmake.exe -v" and I got a proper output: @C:\Users\T3STY>C:\Qt\5.1.1\mingw32\bin\qmake.exe -v QMake version 3.0 Using Qt version 5.1.1 in C:\Qt\5.1.1\mingw32\lib@ Again.. uninstalling..... reinstalling (again 3) and I am right now sitting in front of a clean-clean QtCreator installation. Please guys, QtCreator is driving me crazy, and I don't know what to do. Can anyone tell me what do I do wrong? It looks to me like I'm doing the very same steps over and over, but QtCreator will always complain about a different thing, not even once would give me the same error. Also, if any Qt developers are around reading this: please, make QtCreator easier to configure!!! It's unbelievable that I have to use a qtbinpatcher thing to get a "working" qt version while all I should do is copy-paste a damm folder inside any Qt installation subfolder (like "C:\Qt\Versions\I_put_my_folder_here" ). What are you installing? Qt creator only? Or the whole sdk? When installing the whole SDK from "here": you might have to check of the MinGW Qt lib version to install and also you to select the MinGW compiler suite. QtCreator only. I'm not interested in the MinGW build provided during the install process, I have downloaded my own builds from here: (I have the first two packages, qt 5.1.1 x32 and x64 with opengl). Right now though I'm reinstalling QtCreator with the mingw build provided with the installer. I hope at least this one works... but it's taking ages to download at 130KB/s... Well, it is certainly fine to install some other unofficial builds, but how should someone know this. Those builds come with Qt creator versions. I would expect that they are at least consistent as long as you do not want to combine with something else. My issue was that I needed a free cross platform 64-bit compiler to use with QtCreator and since Qt installer does not provide one I have downloaded that version. But anyway, that's not the point. The point is that QtCreator is messing up things with no reason. In any application, if you repeat the very same steps over and over, they should give you the same results. In QtCreator I'm always getting a different error that I can't understand how to fix. Here, I have just installed the mingw32 build provided in the Qt online installer. I have just took the very same project that I know it was working fine last time, I have deleted the .pro.user configuration (to start as it was brand-new project) and tried to compile with such mingw32 buiild. The result: @C:\Users\T3STY\Documents\Qt\Projects\qcp_test1\qcustomplot.h:36: error: QWidget: No such file or directory #include <QWidget> ^@ It cannot find the QWidget library... wow... that's great. How come could this happen?? I mean, aren't at least the official builds supposed to be working on first install?? EDIT I'd like to point out that when uninstalling I also deleted the configuration files left in C:\Users\T3STY\AppData\Roaming\QtProject but this doesn't change a thing. BTW, creating a new empty project and coding a simple QApplication with a simple QWidget window gives the same result. I have also just noted that Qt won't save the compiler ABI configuration. I explicitly set the x64 compiler to be for 64-bit, but on restart it gets back to default (32-bit). Guys, no one else has any ideas? I reinstalled QtCreator about 10 times now and I'm still stuck with QtCreator not working properly... At last, I managed to get it to work... I didn't use any particular steps, I just kept on reinstalling and configuring compilers and Qt versions.
https://forum.qt.io/topic/33849/qt-creator-does-not-get-a-good-build-config/5
CC-MAIN-2021-49
refinedweb
1,040
73.27
Python 3. Throughout this article we'll talk about the various os and subprocess methods, how to use them, how they're different from each other, on what version of Python they should be used, and even how to convert the older commands to the newer ones. Hopefully by the end of this article you'll have a better understanding of how to call external commands from Python code and which method you should use to do it. First up is the older os.popen* methods. The os.popen* Methods The os module offers four different methods that allows us to interact with the operating system (just like you would with the command line) and create a pipe to other commands. These methods I'm referring to are: popen, popen2, popen3, and popen4, all of which are described in the following sections. The goal of each of these methods is to be able to call other programs from your Python code. This could be calling another executable, like your own compiled C++ program, or a shell command like ls or mkdir. os.popen The os.popen method opens a pipe from a command. This pipe allows the command to send its output to another command. The output is an open file that can be accessed by other programs. The syntax is as follows: os.popen(command[, mode[, bufsize]]) Here the command parameter is what you'll be executing, and its output will be available via an open file. The argument mode defines whether or not this output file is readable ('r') or writable ('w'). Appending a 'b' to the mode will open the file in binary mode. Thus, for example "rb" will produce a readable binary file object. In order to retrieve the exit code of the command executed, you must use the close() method of the file object. The bufsize parameter tells popen how much data to buffer, and can assume one of the following values: - 0 = unbuffered (default value) - 1 = line buffered - N = approximate buffer size, when N > 0; and default value, when N < 0 This method is available for Unix and Windows platforms, and has been deprecated since Python version 2.6. If you're currently using this method and want to switch to the Python 3 version, here is the equivalent subprocess version for Python 3: The code below shows an example of how to use the os.popen method: import os p = os.popen('ls -la') print(p.read()) The code above will ask the operating system to list all files in the current directory. The output of our method, which is stored in p, is an open file, which is read and printed in the last line of the code. The of this code (in the context of my current directory) result is as follows: $ python popen_test.py total 32 0 Nov 9 09:13 subprocess_popen_test.py os.popen2 This method is very similar to the previous one. The main difference is what the method outputs. In this case it returns two file objects, one for the stdin and another file for the stdout. The syntax is as follows: popen2(cmd[, mode[, bufsize]]) These arguments have the same meaning as in the previous method, os.popen. The popen2 method is available for both the Unix and Windows platforms. However, it is found only in Python 2. Again, if you want to use the subprocess version instead (shown in more detail below), use the following instead: The code below shows an example on how to use this method: import os in, out = os.popen2('ls -la') print(out.read()) This code will produce the same results as shown in the first code output above. The difference here is that the output of the popen2 method consists of two files. Thus, the 2nd line of code defines two variables: in and out. In the last line, we read the output file out and print it to the console. os.popen3 This method is very similar to the previous ones. However, the difference is that the output of the command is a set of three files: stdin, stdout, and stderr. The syntax is: os.popen3(cmd[, mode[, bufsize]]) where the arguments cmd, mode, and bufsize have the same specifications as in the previous methods. The method is available for Unix and Windows platforms. Note that this method has been deprecated and the Python documentation advises us to replace the popen3 method as follows: As in the previous examples, the code below will produce the same result as seen in our first example. import os in, out, err = os.popen3('ls -la') print(out.read()) However, in this case, we have to define three files: stdin, stdout, and stderr. The list of files from our ls -la command is saved in the out file. os.popen4 As you probably guessed, the os.popen4 method is similar to the previous methods. However, in this case, it returns only two files, one for the stdin, and another one for the stdout and the stderr. This method is available for the Unix and Windows platforms and (surprise!) has also been deprecated since version 2.6. To replace it with the corresponding subprocess Popen call, do the following: The following code will produce the same result as in the previous examples, which is shown in the first code output above. import os in, out = os.popen4('ls -la') print(we.read()) As we can see from the code above, the method looks very similar to popen2. However, the out file in the program will show the combined results of both the stdout and the stderr streams. Summary of differences The differences between the different popen* commands all have to do with their output, which is summarized in the table below: In addition the popen2, popen3, and popen4 are only available in Python 2 but not in Python 3. Python 3 has available the popen method, but it is recommended to use the subprocess module instead, which we'll describe in more detail in the following section. The susbprocess.Popen Method The subprocess module was created with the intention of replacing several methods available in the os module, which were not considered to be very efficient. Within this module, we find the new Popen class. The Python documentation recommends the use of Popen in advanced cases, when other methods such like subprocess.call cannot fulfill our needs. This method allows for the execution of a program as a child process. Because this is executed by the operating system as a separate process, the results are platform dependent. The available parameters are as follows: subprocess.Popen(args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0) One main difference of Popen is that it is a class and not just a method. Thus, when we call subprocess.Popen, we're actually calling the constructor of the class Popen. There are quite a few arguments in the constructor. The most important to understand is args, which contains the command for the process we want to run. It can be specified as a sequence of parameters (via an array) or as a single command string. The second argument that is important to understand is shell, which is defaults to False. On Unix, when we need to run a command that belongs to the shell, like ls -la, we need to set shell=True. For example, the following code will call the Unix command ls -la via a shell. import subprocess subprocess.Popen('ls -la', shell=True) The results can be seen in the output below: $ python subprocess_popen_test.py total 40 56 Nov 9 09:16 subprocess_popen_test.py Using the following example from a Windows machine, we can see the differences of using the shell parameter more easily. Here we're opening Microsoft Excel from the shell, or as an executable program. From the shell, it is just like if we were opening Excel from a command window. The following code will open Excel from the shell (note that we have to specify shell=True): import subprocess subprocess.Popen("start excel", shell=True) However, we can get the same results by calling the Excel executable. In this case we are not using the shell, so we leave it with its default value ( False); but we have to specify the full path to the executable. import subprocess subprocess.Popen("C:\Program Files (x86)\Microsoft Office\Office15\excel.exe") In addition, when we instantiate the Popen class, we have access to several useful methods: The full list can be found at the subprocess documentation. The most commonly used method here is communicate. The communicate method allows us to read data from the standard input, and it also allows us to send data to the standard output. It returns a tuple defined as (stdoutdata, stderrdata). For example, the following code will combine the Windows dir and sort commands. import subprocess p1 = subprocess.Popen('dir', shell=True, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE) p2 = subprocess.Popen('sort /R', shell=True, stdin=p1.stdout) p1.stdout.close() out, err = p2.communicate() In order to combine both commands, we create two subprocesses, one for the dir command and another for the sort command. Since we want to sort in reverse order, we add /R option to the sort call. We define the stdout of process 1 as PIPE, which allows us to use the output of process 1 as the input for process 2. Then we need to close the stdout of process 1, so it can be used as input by process 2. The communication between process is achieved via the communicate method. Running this from a Windows command shell produces the following: > python subprocess_pipe_test.py 11/09/2017 08:52 PM 234 subprocess_pipe_test.py 11/09/2017 07:13 PM 99 subprocess_pipe_test2.py 11/09/2017 07:08 PM 66 subprocess_pipe_test3.py 11/09/2017 07:01 PM 56 subprocess_pipe_test4.py 11/09/2017 06:48 PM <DIR> .. 11/09/2017 06:48 PM <DIR> . Volume Serial Number is 2E4E-56A3 Volume in drive D is ECA Directory of D:\MyPopen 4 File(s) 455 bytes 2 Dir(s) 18,634,326,016 bytes free Wrapping up The os methods presented a good option in the past, however, at present the subprocess module has several methods which are more powerful and efficient to use. Among the tools available is the Popen class, which can be used in more complex cases. This class also contains the communicate method, which helps us pipe together different commands for more complex functionality. What do you use the popen* methods for, and which do you prefer? Let us know in the comments!
https://stackabuse.com/pythons-os-and-subprocess-popen-commands/
CC-MAIN-2019-43
refinedweb
1,805
73.37
Latest build : v0.6 CraftBukkit: 1.5.2 Easily create blocks with custom effects ! Persistence beetween restart ! Do not need Spout or SpoutCraft ! This plugin provides an API for plugin developers to create blocks with custom effects. It's like creating new blocks but without new textures. You've got the total control of what you can do. There are some new events like when the player walks on your block, when he right/left click on, and soon much more ! Commands & Permissions DEVELOPER PART How to create my custom block ? Plugin Solution - Download the BlockAPI.jar and add it to your build path as a library - Add this line depend : [BlockAPI] to your plugin.yml - Create a new class that extends CustomBlock - ( You should do this part, but you can jump it ) Change its identifier, its name, its blockID ( the block id in what it will render to and by default its properties such as item to destroy... ), add its effect ( to get the methods see here) - Add your custom block BlockAPI blockAPI =(BlockAPI) plugin.getServer().getPluginManager().getPlugin("BlockAPI"); if(blockAPI==null) //Here handle that BlockAPI isn't installed on this server else blockAPI.addMyCustomBlock(myCustomBlock); //OR blockAPI.addMyCustomBlocks(myCustomBlockList): - Say to your users that if they want to use your plugin they must download this one - Now that's OK ! No plugin just the class - Create a new class that extends CustomBlock with the constructor with no args - You can make calls to BlockAPi methods or Bukkit API methods - ( You should do this part, but you can jump it ) Change its identifier, its name, its blockID ( the block id in what it will render to and by default its properties such as item to destroy... ), add its effect ( to get the methods see here) - Then compile it as a .class file - Add your class file in plugins/BlockAPI/Blocks/ and your custom block will be automatically added - You're done then if ou want to share it with the bukkit community post it on the forum Methods To see all the methods of CustomBlock and all the BlockAPI methods, see here Sample Otherwise this is a sample TrampolineBlock : public class TrampolineBlock extends CustomBlock{ public TrampolineBlock() { super("trampo"); setName("Trampoline"); setBlockID(1); setMaxStackSize(128); ArrayList<String> desc = new ArrayList<String>(); desc.add("Jump ! Jump !"); setDescription(desc); setDrops(BlockAPI.getItem(this)); } @Override public void walk(PlayerMoveEvent event){ event.getPlayer().setFallDistance(0); event.getPlayer().setVelocity(event.getPlayer().getVelocity().setY(1)); } } and here then the code added in onEnable() in the plugin class if you use the plugin solution, otherwise that's it. BlockAPI blockAPI =(BlockAPI) plugin.getServer().getPluginManager().getPlugin("BlockAPI"); blockAPI.addMyCustomBlock(new TrampolineBlock()); Render Block in inventories will now render as normal Blocks but their name will change to your custom block name and the quantity is handled in the title to avoid to limit maxStackSize To do - Create some way for users ( not developers ) to create a custom block ( see Skript for implementation ) - Add new textures. - Add sound support ( see Pl3xMidiPlayer for implementation ) Facts - Date created - Jun 01, 2013 - Category - - Last update - Jun 07, 2013 - Development stage - Release - License - Mozilla Public License 1.0 (MPL) - Curse link - BlockAPI - Downloads - 2,415 - Recent files - R: BlockAPI v0.6 for 1.5.2 Jun 07, 2013 - R: BlockAPI v0.5 for 1.5.2 Jun 05, 2013 - R: BlockAPI v0.3 for 1.5.2 Jun 02, 2013 - R: BlockAPI v0.2 for 1.5.2 Jun 01, 2013 Is there any updated version of this? I'm trying to make a custom block that acts as a hydroelectric generator and would love to use this but I need some documentation or an updated version for 1.8.7. If anyone has any information please let me know! ZBDH7a9bee69dae5b85ac9ad277f4dd0f621 @Hartorn: Go can u give me updated file? Hello man ! I found this plugin some time ago, and really think it is a useful and good one. I modified it a bit, and updated it to the latest version. Is it alright if I take it over and continue maintaining it ? Do you allow me to put it online ? Have never seen a dwarf ? Just look down. Pegasus Plugin ZBDH21bfd314dec1599cd2fc7eb30d12c268 is vid now coming ? @sonicwolfsspeed: Go Sorry, i'm sick currently then i did half the video part but can't do the rest..... hmm just gone wate for the vid :/ @sonicwolfsspeed: Go EDIT : There is no persistnce if you use these characters currently Ya it is possible ( i used \u00XX character ) : @_Justyce_: Go just plz try it D: @sonicwolfsspeed: Go It's may possible but that's not the aim of this plugin. The aim is to provide an easy way to re-use current blocks to recreate new one with different effects i just want to collor name blocks :D like (nametags)
http://dev.bukkit.org/bukkit-plugins/blockapi/
CC-MAIN-2015-40
refinedweb
802
64.2
CodePlexProject Hosting for Open Source Software I'm having difficulties converting the new BlogEngine v2.0 into a web app I seem to be getting methods with the same names and namespaces in different files like: /admin/Widgets/Menu.ascx.cs and /admin/Settings/Menu.ascx.cs where both is under the "admin.Settings" namespace and both has methods like "protected string Current(string pg)" which are duplicating on compile. An online guide or solution would be much appreciated Many thanks! Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://blogengine.codeplex.com/discussions/243706
CC-MAIN-2017-43
refinedweb
117
69.28
Building A React/Redux/Elm Bridge Elm’s ngReact Sometimes it’s the small and incremental changes that matter. A lot has been said about Elm. An impressive number of articles, talks and posts have been praising Elm as the future of Front End development (alongside with PureScript and ClojureScript). The really nice thing about Elm, is the fact that you don’t have to go “full Elm” to leverage the benefits. The benefits include a great language, tooling, community as well as great ideas and concepts. Especially the community aspect and the interaction within the community is nice. It appears that everyone is either pair programming or otherwise collaborating, at least from an outsider perspective. Elm also has an amazing on-boarding strategy. Evan Czaplicki wrote about it in How to Use Elm at Work, where he describes detailed strategies on how to introduce Elm into the workplace. The previously mentioned fact that you don’t have to go full Elm, but start out small by embedding Elm into an existing project, lowers the barrier to entry significantly. Which means that introducing Elm into your React based project is simpler than you might think. We will take a look the different migration strategies, which include integrating into a barebones JavaScript project, migrating from React to Elm and finally how to even gradually migrate from Redux to Elm. Embedding Elm in some big JavaScript project is not very hard. It’s like three lines of JS, but it feels like no one knows about them! How to Use Elm at Work, Evan Czaplicki While it sounds like a complex undertaking, the interop part is really just a couple lines of code, as stated in the above comment. By adding an output option to the elm-make command, we get the compiled JavaScript file sans the HTML part. elm-make Counter.elm --output=Counter.js The following snippet is taken from the Elm language guide — JavaScript Interop section. <div id="main"></div> <script src="main.js"></script> <script> var node = document.getElementById('main'); var app = Elm.Main.embed(node);. </script> Communicating between JavaScript and Elm can be achieved in two ways: via ports and via flags. Getting the two to communicate requires a couple of adaptions on both, the Elm as well as the JavaScript side. Including having to define a port module port module Counter exposing (..) and adding ports for listening and sending from and to JS. port check : Int -> Cmd msg port counter : (Int -> msg) -> Sub msg Every communication between Elm and JavaScript runs through port, where we can interact via commands and subscriptions. In the above example we send data to the JS side through the check port. We now have the option to send an integer by calling check 1 The subscription part is covered by the counter port. We’re subscribing to any changes to the counter value coming from the JS side. To reiterate on our previous HTML snippet, we can now initialize the counter f.e. In this very specific case we define an initial counter value of 3 and then subscribe to any changes on that counter. <script> var node = document.getElementById('counter'); var app = Elm.Counter.embed(node); app.ports.counter.send(3) app.ports.check.subscribe(function(count) { console.log('receiving data...', count); }) </script> We could also listen to changes and multiply the current count by 3 f.e. <script> // ... app.ports.check.subscribe(function(count) { app.ports.counter.send((count*3)) }) </script> As seen with the previous examples, we can easily interact with JavaScript from an Elm perspective and vice versa. Elm also makes sure that invalid data is rejected upfront, preventing bad data from entering an Elm application in the first place. The possibilities to interact don’t stop there, though. react-elm-components is a specific library aimed at introducing components written in Elm into a React codebase. In reality, it simply wraps the previously seen code into a React component, ensuring that the Elm component is connected with the correct DOM node and handles the sending as well as the subscribing to that component. All in all, it’s just a couple lines of code, that enable us to smoothly bridge Elm to React. The same example can be written like this now. import React from 'react' import { render } from 'react-dom' import Elm from 'react-elm-components' import { Counter } from './Counter' const setupPorts = ports => { ports.check.subscribe(count => ports.counter.send((count*3))); } const CounterComponent = () => <Elm src={Counter} ports={setupPorts} /> render(<div> <CounterComponent /> </div>, document.getElementById('app')) For a full implementation check the react-elm-components example. All these examples highlight the fact that we’re able to introduce the smallest component into an existing JavaScript project without having to go through a complex setup. Add this to the fact that we can always revert back, makes incorporating Elm an interesting undertaking. These features are the bridge to introducing incremental changes. Opening up the way for slowly migrating a project to a different framework, or in this case a different language. But as you can imagine, things don’t stop there. Someone in the Elm community thought about how to connect Redux with Elm. This makes sense actually, considering how far Redux is widespread in the JavaScript and especially React world. Christoph Hermann wrote a module simply entitled redux-elm-middleware which enables us to slowly migrate an existing redux codebase to Elm. Let’s build a Counter reducer, just to get a feel for the idea. port module Reducer exposing (..) import Redux import Task exposing (..) import Process import Json.Encode as Json exposing ( object, int ) port increment : ({} -> msg) -> Sub msg port decrement : ({} -> msg) -> Sub msg subscriptions : Model -> Sub Msg subscriptions _ = Sub.batch [ decrement <| always Decrement , increment <| always Increment ] As seen in the interop example, we define a port module as well as the required ports, which we need for being notified when an increment or decrement action has been dispatched. -- MODEL type alias Model = { count : Int } init : Int -> ( Model, Cmd Msg) init count = ( { count = count }, Cmd.none ) encodeModel : Model -> Json.Value encodeModel { count } = object [ ( "count", int count ) ] The only really interesting part in the next section is encodeModel, where we tell Elm what the shape of our model should look like. If the passed in data doesn’t fit the defined model, it will be rejected straight away and fail on the JavaScript side. -- VIEW view : Model -> Html Msg view model = div [] [ button [ onClick Decrement ] [ text "-" ] , div [] [ text (toString model) ] , button [ onClick Increment ] [ text "+" ] ] -- ACTIONS type Msg = NoOp | Increment | Decrement -- UPDATE update : Msg -> Model -> ( Model, Cmd Msg ) update action model = case action of Increment -> ( { model | count = model.count + 1 }, Cmd.none ) Decrement -> ( { model | count = model.count - 1 }, Cmd.none ) NoOp -> ( model, Cmd.none ) main = Redux.program { init = init 0 , update = update , encode = encodeModel , subscriptions = subscriptions } All that is left to do, is to define the actions and the update function. This implementation is very similar to the original Counter example, no extra knowledge required. The only other interesting aspect is that we use Redux.program here. The JavaScript part will consist of a connected Counter Component. import React from 'react' import { render } from 'react-dom' import { applyMiddleware, createStore, combineReducers } from 'redux' import { connect, Provider } from 'react-redux' import { compose } from 'ramda' import createElmMiddleware, { reducer as elmReducer } from 'redux-elm-middleware' const reducers = combineReducers({ elm: elmReducer, }) const elmStore = window.Elm.Reducer.worker() const {run, elmMiddleware} = createElmMiddleware(elmStore) const store = createStore(reducers, {}, compose( applyMiddleware(elmMiddleware), )) run(store) There is a lot going on in here. We’re accessing Elm.Reducer via window and passing it on to createElmMiddleware, which returns us the run and elmMiddleware functions. We then create the store and apply elmMiddleware to the Redux applyMiddleware function and finally call run with the created store. The rest of the code is React specific. const Counter = ({ count = 0, Inc, Dec }) => ( <div> <button onClick={Inc}>+</button> <p>Current count: {count}</p> <button onClick={Dec}>-</button> </div> ) const EnhancedCounter = connect( ({elm}) => ({ count: elm.count }), dispatch => ({ Inc: () => dispatch({ type: 'INCREMENT' }), Dec: () => dispatch({ type: 'DECREMENT' }), }), )(Counter) render( <Provider store={store}> <EnhancedCounter /> </Provider>, document.getElementById('app') ) If you’ve been wondering how to migrate your Redux or React application to Elm, all of this has already been thought through by the community. The easiest way to get started is to actually try it out. Take a look at the redux-elm-middleware example for a more detailed implementation. You might still be wondering what we gain from all this. In short, besides the fact that we’re able to introduce a functional language into the project, we also get pure state and effect handling out of the box while still being able to benefit from the Redux eco-system. Elm’s strength is being able to interop with JavaScript while at the same time isolating any bad data away from Elm itself. This approach comes with a price obviously, which includes having to type complex JSON objects for example. You might want to keep this in mind. Finally, do you remember ngReact? In hindsight it’s sounds trivial, but ngReact solved one problem, migrating an existing Angular application to React. react-elm-components and redux-elm-middleware open up a smooth way for introducing Elm into an existing project similar to ngReact. Build something small. Get it into production. And then you can see wether you like it or not. Richard Feldman, ReactiveConference 2016 Very special thanks to Christoph Hermann and Oskar Maria Grande for providing feedback. Any questions or Feedback? Connect via Twitter Links redux-elm-middleware example Elm Guide on JavaScript Interop react-elm-components example Elm Guide on JSON Interop
https://medium.com/javascript-inside/building-a-react-redux-elm-bridge-8f5b875a9b76
CC-MAIN-2018-39
refinedweb
1,613
55.95
%load_ext autoreload %autoreload 2 %matplotlib inline %config InlineBackend.figure_format = 'retina' from IPython.display import YouTubeVideo, display YouTubeVideo("3qnX1OXQ3Ws") Deterministic Randomness In this section, we'll explore how to create programs that use random number generation in a fashion that is fully deterministic. If that sounds weird to you, fret not: it sounded weird to me too when I first started using random numbers. My goal here is to demystify this foundational piece for you. Random number generation before JAX Before JAX came along, we used NumPy's stateful random number generation system. Let's quickly recap how it works. import numpy as onp # original numpy Let's draw a random number from a Gaussian in NumPy. onp.random.seed(42) a = onp.random.normal() a 0.4967141530112327 And for good measure, let's draw another one. b = onp.random.normal() b -0.13826430117118466 This is intuitive behaviour, because we expect that each time we call on a random number generator, we should get back a different number from before. However, this behaviour is problematic when we are trying to debug programs. When debugging, one desirable property is determinism. Executing the same line of code twice should produce exactly the same result. Otherwise, debugging what happens at that particular line would be extremely difficult. The core problem here is that stochastically, we might hit a setting where we encounter an error in our program, and we are unable to reproduce it because we are relying on a random number generator that relies on global state, and hence that doesn't behave in a fully controllable fashion. I don't know about you, but if I am going to encounter problems, I'd like to encounter them reliably! Random number generation with JAX How then can we get "the best of both worlds": random number generation that is controllable? Explicit PRNGKeys control random number generation The way that JAX's developers went about doing this is to use pseudo-random number generators that require explicit passing in of a pseudo-random number generation key, rather than relying on a global state being set. Each unique key will deterministically give a unique drawn value explicitly. Let's see that in action: from jax import random key = random.PRNGKey(42) a = random.normal(key=key) a WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.) DeviceArray(-0.18471184, dtype=float32) To show you that passing in the same key gives us the same values as before: b = random.normal(key=key) b DeviceArray(-0.18471184, dtype=float32) That should already be a stark difference from what you're used to with vanilla NumPy, and this is one key crucial difference between JAX's random module and NumPy's random module. Everything else about the API is very similar, but this is a key difference, and for good reason -- this should hint to you the idea that we can have explicit reproducibility, rather than merely implicit, over our stochastic programs within the same session. Splitting keys to generate new draws How, then, do we get a new draw from JAX? Well, we can either create a new key manually, or we can programmatically split the key into two, and use one of the newly split keys to generate a new random number. Let's see that in action: k1, k2 = random.split(key) c = random.normal(key=k2) c DeviceArray(1.3694694, dtype=float32) k3, k4, k5 = random.split(k2, num=3) d = random.normal(key=k3) d DeviceArray(0.04692494, dtype=float32) Generating multiple draws from a Gaussian, two ways To show you how we can combine random keys together with vmap, here's two ways we can generate random draws from a Normal distribution. The first way is to split the key into K (say, 20) pieces and then vmap random.normal over the split keys. from jax import vmap key = random.PRNGKey(44) ks = random.split(key, 20) # we want to generate 20 draws draws = vmap(random.normal)(ks) draws DeviceArray([-0.2531793 , -0.51041234, 0.16341999, -0.03866951, 0.85914546, 0.9833364 , -0.6223309 , 0.5909158 , 1.4065154 , -0.2537227 , -0.20608927, 1.1317427 , -0.92549866, 1.035201 , 1.9401319 , 0.34215063, 1.6209698 , 0.49294266, 0.5414663 , 0.10813037], dtype=float32) Of course, the second way is to simply specify the shape of the draws. random.normal(key, shape=(20,)) DeviceArray([ 0.39843866, -2.626297 , -0.6032239 , -2.081308 , 0.00854138, 0.76385975, 0.79169536, 1.0279497 , 0.5869708 , -0.87620246, 1.3288299 , 1.7267487 , 0.786439 , -2.752421 , 1.0341094 , -0.2926419 , -0.21061882, -1.1115512 , -0.96723807, 0.12201323], dtype=float32) By splitting the key into two, three, or even 1000 parts, we can get new keys that are derived from a parent key that generate different random numbers from the same random number generating function. Let's explore how we can use this in the generation of a Gaussian random walk. Example: Simulating a Gaussian random walk A Gaussian random walk is one where we start at a point that is drawn from a Gaussian, and then we draw another point from a Gausian using the first point as the starting Gaussian point. Does that loop structure sound familiar? Well... yeah, it sounds like a classic lax.scan setup! Here's how we might set it up. Firstly, JAX's random.normal function doesn't allow us to specify the location and scale, and only gives us a draw from a unit Gaussian. We can work around this, because any unit Gaussian draw can be shifted and scaled to a N(\mu, \sigma) by multiplying the draw by \sigma and adding \mu. Knowing this, let's see how we can write a Gaussian random walk using JAX's idioms, building up from a vanilla Python implementation. Vanilla Python implementation For those who might not be too familiar with Gaussian random walks, here is an annotated version in vanilla Python code (plus some use of the JAX PRNGKey system added in). num_timesteps = 100 mu = 0.0 # starting mean. observations = [mu] key = random.PRNGKey(44) # Split the key num_timesteps number of times keys = random.split(key, num_timesteps) # Gaussian Random Walk goes here for k in keys: mu = mu + random.normal(k) observations.append(mu) import matplotlib.pyplot as plt plt.plot(observations) [<matplotlib.lines.Line2D at 0x7f00546a5f70>] Implementation using JAX Now, let's see how we can write a Gaussian random walk using lax.scan. The strategy we'll go for is as follows: - We'll instantiate an array of PRNG keys. - We'll then scan a function across the PRNG keys. - We'll finally collect the observations together. from jax import lax def new_draw(prev_val, key): new = prev_val + random.normal(key) return new, prev_val final, draws = lax.scan(new_draw, 0.0, keys) plt.plot(draws) [<matplotlib.lines.Line2D at 0x7f00545e0bb0>] Looks like we did it! Definitely looks like a proper Gaussian random walk to me. Let's encapsulate the code inside a function that gives us one random walk draw, as I will show you how next to generate multiple random walk draws. def grw_draw(key, num_steps): keys = random.split(key, num_steps) final, draws = lax.scan(new_draw, 0.0, keys) return final, draws final, draw = grw_draw(key, num_steps=100) plt.plot(draw) [<matplotlib.lines.Line2D at 0x7f005455c430>] A note on reproducibility Now, note how if you were to re-run the entire program from top-to-bottom again, you would get exactly the same plot. This is what we might call strictly reproducible. Traditional array programs are not always written in a strictly reproducible way; the sloppy programmer would set a global state at the top of a notebook and then call it a day. By contrast, with JAX's random number generation paradigm, any random number generation program is 100% reproducible, down to the level of the exact sequence of random number draws, as long as the seed(s) controlling the program are 100% identical. Because JAX's stochastic programs always require an explicit key to be provided, as long as you write your stochastic programs to depend on keys passed into it, rather than keys instantiated from within it, any errors you get can be fully reproduced by passing in exactly the same key. When an error shows up in a program, as long as its stochastic components are controlled by explicitly passed in seeds, that error is 100% reproducible. For those who have tried working with stochastic programs before, this is an extremely desirable property, as it means we gain the ability to reliably debug our program -- absolutely crucial especially when it comes to working with probabilistic models. Also notice how we finally wrote our first productive for-loop -- but it was only to plot something, not for some form of calculations :). Exercise 1: Brownian motion on a grid In this exercise, the goal is to simulate the random walk of a single particle on a 2D grid. The particle's (x, y) position can be represented by a vector of length 2. At each time step, the particle moves either in the x- or y- direction, and when it moves, it either goes +1 or -1 along that axis. Here is the NumPy + Python loopy equivalent that you'll be simulating. import jax.numpy as np starting_position = onp.array([0, 0]) n_steps = 1000 positions = [starting_position] keys = random.split(key, n_steps) for k in keys: k1, k2 = random.split(k) axis = random.choice(k1, np.array([0, 1])) direction = random.choice(k2, np.array([-1, 1])) x, y = positions[-1] if axis == 0: x += direction else: y += direction new_position = np.array([x, y]) positions.append(new_position) positions = np.stack(positions) plt.plot(positions[:, 0], positions[:, 1], alpha=0.5) [<matplotlib.lines.Line2D at 0x7f00544aa760>] Your challenge is to replicate the brownian motion on a grid using JAX's random module. Some hints that may help you get started include: - JAX arrays are immutable, so you definitely cannot do arr[:, 0] += 1. random.permutationcan be used to identify which axis to move. random.choicecan be used to identify which direction to go in. - Together, the axis to move in and the direction to proceed can give you something to loop over... - ...but without looping explicitly :), for which you have all of the tricks in the book. def randomness_ex_1(keys, starting_position): # Your answer here! pass from dl_workshop.jax_idioms import randomness_ex_1 final, history = randomness_ex_1(keys, starting_position) plt.plot(history[:, 0], history[:, 1], alpha=0.5) [<matplotlib.lines.Line2D at 0x7f005438e250>] Exercise 2: Stochastic stick breaking In the previous notebook, we introduced you to the stick-breaking process, and we asked you to write it in a non-stochastic fashion. We're now going to have you write it using a stochastic draw. To do so, however, you need to be familiar with the Beta distribution, which models a random draw from the interval x \in (0, 1). Here is how you can draw numbers from the Beta distribution: betadraw = random.beta(key, a=1, b=2) betadraw DeviceArray(0.16624227, dtype=float32) Now, I'm going to show you the NumPy + Python equivalent of the real (i.e. stochastic) stick-breaking process: import jax.numpy as np num_breaks = 30 keys = random.split(key, num_breaks) concentration = 5 sticks = [] stick_length = 1.0 for k in keys: breaking_fraction = random.beta(k, a=1, b=concentration) stick = stick_length * breaking_fraction sticks.append(stick) stick_length = stick_length - stick result = np.array(sticks) result DeviceArray([2.70063013e-01, 8.35481361e-02, 6.01774715e-02, 6.99718744e-02, 8.14794526e-02, 6.04477786e-02, 1.81718692e-01, 2.25469150e-04, 5.12378067e-02, 2.01937854e-02, 1.84609592e-02, 1.12395324e-02, 1.01102807e-03, 1.38746975e-02, 6.61318609e-03, 1.91624713e-04, 1.52852843e-02, 3.68715706e-03, 1.40792108e-03, 2.03626999e-03, 1.53146144e-02, 3.01029743e-03, 2.75874767e-03, 2.52541294e-03, 1.86719827e-03, 6.96112751e-04, 3.96613643e-04, 6.45141071e-03, 2.69659120e-03, 1.77769631e-04], dtype=float32) Now, your task is to implement it using lax.scan. def randomness_ex_2(key, num_breaks, concentration: float): # Your answer here! pass # Comment out the import to test your answer! from dl_workshop.jax_idioms import randomness_ex_2 final, sticks = randomness_ex_2(key, num_breaks, concentration) assert np.allclose(sticks, result) Exercise 3: Multiple GRWs Now, what if we wanted to generate multiple realizations of the Gaussian random walk? Does this sound familiar? If so... yeah, it's a vanilla for-loop, which directly brings us to vmap! And that's what we're going to try to implement in this exercise. from functools import partial from jax import vmap The key idea here is to vmap the grw_draw function across multiple PRNGKeys. That way, you can avoid doing a for-loop, which is the goal of this exercise too. You get to decide how many realizations of the GRW you'd like to create. def randomness_ex_3(key, num_realizations=20, grw_draw=grw_draw): # Your answer here pass from dl_workshop.jax_idioms import randomness_ex_3 final, trajectories = randomness_ex_3(key, num_realizations=20, grw_draw=grw_draw) trajectories.shape (20, 1000) We did it! We have 20 trajectories of a 1000-step Gaussian random walk. Notice also how the program is structured very nicely: Each layer of abstraction in the program corresponds to a new axis dimension along which we are working. The onion layering of the program has very natural structure for the problem at hand. Effectively, we have planned out, or perhaps staged out, our computation using Python before actually executing it. Let's visualize the trajectories to make sure they are really GRW-like. import seaborn as sns fig, ax = plt.subplots() for trajectory in trajectories[0:20]: ax.plot(trajectory) sns.despine()
https://ericmjl.github.io/dl-workshop/02-jax-idioms/03-deterministic-randomness.html
CC-MAIN-2022-33
refinedweb
2,291
57.57
- Code: Select all def update_hand(self): x = 0 for card in self.hand: print(x) card.rect.y = self.screen_rect.bottom - card.surf.get_height() if card.selected: card.rect.y -= self.card_bufferY card.rect.x = x x += self.card_bufferX The repo: My thought process is x is 0,100,200 etc. throughout the loop, and gets set to card.rect.x, so i am not sure why sometimes the first card x value is set at 100, and sometimes 200, when the first card should be set at 0? The cards are laid out different each time too and not sure why. EDIT: self.cards being set int tools.States.set_cards()
http://python-forum.org/viewtopic.php?f=26&t=11129
CC-MAIN-2014-52
refinedweb
111
78.45
cheapskate Experimental markdown processor. See all snapshots cheapskate appears in Module documentation for 0.1.0.5 Cheapskate This is an experimental Markdown processor in pure Haskell. (A cheapskate is always in search of the best markdown.) It aims to process Markdown efficiently and in the most forgiving possible way. It is about seven times faster than pandoc and uses a fifth the memory. It is also faster, and considerably more accurate, than the markdown package on Hackage. There is no such thing as an invalid Markdown document. Any string of characters is valid Markdown. So the processor should finish efficiently no matter what input it gets. Garbage in should not cause an error or exponential slowdowns. This processor has been tested on many large inputs consisting of random strings of characters, with performance that is consistently linear with the input size. (Try make fuzztest.) Installing To build, get the Haskell Platform, then: cabal update && cabal install This will install both the cheapskate executable and the Haskell library. A man page can be found in man/man1 in the source. Usage As an executable: cheapskate [FILE*] As a library: import Cheapskate import Text.Blaze.Html toMarkdown :: Text -> Html toMarkdown = toHtml . markdown def If the markdown input you are converting comes from an untrusted source (e.g. a web form), you should always set sanitize to True. This causes the generated HTML to be filtered through xss-sanitize's sanitizeBalance function. Otherwise you risk a XSS attack from raw HTML or a markdown link or image attribute attribute. You may also wish to disallow users from entering raw HTML for aesthetic, rather than security reasons. In that case, set allowRawHtml to False, but let sanitize stay True, since it still affects attributes coming from markdown links and images. Manipulating the parsed document You can manipulate the parsed document before rendering using the walk and walkM functions. For example, you might want to highlight code blocks using highlighting-kate: import Data.Text as T import Data.Text.Lazy as TL import Cheapskate import Text.Blaze.Html import Text.Blaze.Html.Renderer.Text import Text.Highlighting.Kate markdownWithHighlighting :: Text -> Html markdownWithHighlighting = toHtml . walk addHighlighting . markdown def addHighlighting :: Block -> Block addHighlighting (CodeBlock (CodeAttr lang _) t) = HtmlBlock (T.concat $ TL.toChunks $ renderHtml $ toHtml $ formatHtmlBlock defaultFormatOpts $ highlightAs (T.unpack lang) (T.unpack t)) addHighlighting x = x Extensions This processor adds the following Markdown extensions: Hyperlinked URLs All absolute URLs are automatically made into hyperlinks, where inside <> or not. Fenced code blocks Fenced code blocks with attributes are allowed. These begin with a line of three or more backticks or tildes, followed by an optional language name and possibly other metadata. They end with a line of backticks or tildes (the same character as started the code block) of at least the length of the starting line. Explicit hard line breaks A hard line break can be indicated with a backslash before a newline. The standard method of two spaces before a newline also works, but this gives a more "visible" alternative. Backslash escapes All ASCII symbols and punctuation marks can be backslash-escaped, not just those with a use in Markdown. Revisions In departs from the markdown syntax document in the following ways: Intraword emphasis Underscores cannot be used for word-internal emphasis. This prevents common mistakes with filenames, usernames, and indentifiers. Asterisks can still be used if word internal emphasis is needed. The exact rule is this: an underscore that appears directly after an alphanumeric character does not begin an emphasized span. (However, an underscore directly before an alphanumeric can end an emphasized span.) Ordered lists The starting number of an ordered list is now significant. Other numbers are ignored, so you can still use 1. for each list item. In addition to the 1. form, you can use 1) in your ordered lists. A new list starts if you change the form of the delimiter. So, the following is two lists: 1. one 2. two 1) one 2) two Bullet lists A new bullet lists starts if you change the bullet marker. So, the following is two consecutive bullet lists: + one + two - one - two List separation Two blank lines breaks out of a list. This allows you to have consecutive lists: - one - two - one (new list) The blank lines break out of a list no matter how deeply it is nested: - one - two - three - new top-level list Indentation of list continuations Block elements inside list items need not be indented four spaces. If they are indented beyond the bullet or numerical list marker, they will be considered additional blocks inside the list item. So, the following is a list item with two paragraphs: - one two The amount of indentation required for an indented code block inside a list item depends on the first line of the list item. Generally speaking, code must be indented four spaces past the first non-space character after the list marker. Thus: - My code {code here} - My code {code here} The following diagram shows how the first line of a list item divides the following lines into three regions: - My code | | +-----+ Content to the left of the marked region will not be part of the list item. Content to the right of the marked region will be indented code under the list item. Regular blocks that belong under the list item should start inside the marked region. When the first line itself contains indented code, this code and subsequent indented code blocks should be indented five spaces past the list marker: - { code } { more code } Raw HTML blocks Raw HTML blocks work a bit differently than in Markdown.pl. A raw HTML block starts with a block-level HTML tag (opening or closing), or a comment start <!-- or end -->, and goes until the next blank line. The whole block is included as raw HTML. No attempt is made to parse balanced tags. This means that in the following, the asterisks are literal asterisks: <div> *hello* </div> while in the following, the asterisks are interpreted as markdown emphasis: <div> *hello* </div> In the first example, we have a single raw HTML block; in the second, we have two raw HTML blocks with an intervening paragraph. This system provides flexibility to authors to use enclose markdown sections in html block-level tags if they wish, while also allowing them to include verbatim HTML blocks (taking care that the don't include any blank lines). As a consequence of this rule, HTML blocks may not contain blank lines. Clarifications This implementation resolves the following issues left vague in the markdown syntax document: Tight vs. loose lists A list is considered "tight" if (a) it has only one item or there is no blank space between any two consecutive items, and (b) no item has blank lines as its immediate children. If a list is "tight," then list items consisting of a single paragraph or a paragraph followed by a sublist will be rendered without <p> tags. Sublists Sublists work like other block elements inside list items; they must be indented past the bullet or numerical list marker (but no more than three spaces past, or they will be interpreted as indented code). ATX headers ATX headers must have a space after the initial ###s. Separation of block quotes A blank line will end a blockquote. So, the following is a single blockquote: > hi > > there But this is two blockquotes: > hi > there Blank lines are not required before horizontal rules, blockquotes, lists, code blocks, or headers. They are not required after, either, though in many cases "laziness" will effectively require a blank line after. For example, in Hello there. > A quote. Still a quote. the "Still a quote." is part of the block quote, because of laziness (the ability to leave off the > from the beginning of subsequent lines). Laziness also affects lists. However, we can have a code block, ATX header, or horizontal rule between two paragraphs without any blank lines. Link references Link references may occur anywhere in the document, even in nested list contexts. They need not be at the outer level. Tests The tests subdirectory contains an extensive suite of tests, including all of John Gruber's original Markdown tests, plus many of the tests from Michel Fortin's mdtest suite. Each test consists in two files with the same basename, a markdown source and an expected HTML output. To run the test suite, do make test To run only tests that match a regex pattern, do PATT=Orig make test Setting the environment variable TIDY=1 will run the expected and actual output through tidy before comparing them. You can run this test suite on another markdown processor by doing PROG=myothermarkdown make test Benchmarks To run a crude benchmark comparing cheapskate to pandoc, do make bench. Set the BENCHPROGS environment variable to compare to other implementations. License The library is released under the BSD license; see LICENSE for terms. Some of the test cases are borrowed from Michel Fortin's mdtest suite and John Gruber's original markdown test suite. Changes * Added NFData and Generic instances for basic types (aisamanra). * Use -auto-exported instead of -auto-all for prof options. * Add 'dingus' flag and make cheapskate-dingus deps conditional (MarcelineVQ). Previously cheapskate would pull in unneeded dependencies when built with Cabal < 1.24. cheapskate 0.1.0.5 (22 Apr 2016) * Bump base to allow GHC 8 (Leif Warner). * Bumped data-default upper bound (Leif Warner). * Removed bad prof-options. cheapskate 0.1.0.4 (28 May 2015) * Bump blaze-html version bound. cheapskate 0.1.0.3 (08 Dec 2014) * Allow building with base-4.8.0.0 (RyanGLScott). cheapskate 0.1.0.2 (08 Dec 2014) * Increased upper bounds for text (RyanGlScott), mtl. * Fixed usage message in command-line utility (cdosborn). * Added flag to build `cheapskate-dingus`. * Dingus: extract version from Paths_cheapskate. * Fixed compiler warnings. * Added `(<?>)`, made string in `ParseError` describe what is expected. * On parse failure, return error with greatest position. This generally gives more useful messages. cheapskate 0.1.0.1 (10 Mar 2014) * Increased version bounds for text, blaze-html. * Made pImage more efficient, avoiding backtracking. cheapskate 0.1 (05 Jan 2014) * Initial release.
https://www.stackage.org/lts-6.11/package/cheapskate-0.1.0.5
CC-MAIN-2018-22
refinedweb
1,703
65.12
DZone Snippets is a public source code repository. Easily build up your personal collection of code snippets, categorize them with tags / keywords, and share them with the world JavaScript IsDate Function JavaScript function that determines if a variable is a proper Date object including a valid date. function isDate (x) { return (null != x) && !isNaN(x) && ("undefined" !== typeof x.getDate); } Charles Beebe replied on Tue, 2012/05/15 - 11:36am @Snippets Manager: thanks for the help -- this was better than the answers I found on SO! Jason McDonald replied on Wed, 2007/05/16 - 8:11pm Snippets Manager replied on Sun, 2009/03/01 - 9:43pm
http://www.dzone.com/snippets/javascript-isdate-function
CC-MAIN-2014-35
refinedweb
105
60.85
Important: Please read the Qt Code of Conduct - Singleton Context Issues Hi everyone, let start with some simplified code: ScreenManager.qml: @pragma Singleton import QtQuick 2.3 Item { function createScreen(fileName, parentItem, obj){ var newScreen = Qt.createComponent(fileName); newScreen.createObject(parentItem, obj); } } @ Main.qml: @ Component.onCompleted: { ScreenManager.createScreen("AnyScreen.qml", root, {"source": "qrc:/images/blabla.png"}) }@ Even though I pass a root reference to the singleton function, it doesnt consider my newly created object "AnyScreen.qml" as a child of my root. It seems like because the creation context is within the singleton, I just get a blank screen instead of what I had before. Anyone know how I could solve this problem ? I just found the problem: @var newScreen = Qt.createComponent("qrc:/" + fileName);@ If i dont prefix with qrc:/ it will try to load the file from qrc:/singletonfolder/ Good news is that everything is working !!!
https://forum.qt.io/topic/47410/singleton-context-issues
CC-MAIN-2020-40
refinedweb
147
51.24
WSGI Middleware to "Bring Your Own IDentity" to a web server Project description README for ARPA2 WSGI middleware WSGI enables middleware, which is a perfect place for enforcing ARPA2 Identities, ranging from authentication to authorisation through access control. Background links: - InternetWide Identity Design - ARPA2 Identity (formerly known as DoNAI) supports many forms and several tricks - ARPA2 Selector - Access Control with efficient implementation and a design for backend protocols. HTTP SASL handled with WSGI SASL Most protocols that require authentication make use of the SASL protocol. Maybe it is better to say that SASL is a kind of tunnel that passes through a protocol to exchange authentication in a flexible manner. Most importantly, the mechanisms can be plugged into software independent of the protocol that uses SASL and one infrastructure can be shared by all the protocols. HTTP has not had SASL implemented. We have specified HTTP SASL to fix that, and stop the ongoing need for authentication systems built into the (rather insecure) JavaScript namespace. We are implementing it with a few Java demonstrations, as well as a browser extension with native messaging so we may reuse desktop credentials from a trusted application outside the browser. Yes, that includes one's Kerberos login and yes, it enables the use of pseudonyms during realm crossover. We are hoping to develop a Python WSGI component for the server side. It would sit between the web server and WSGI application, and detect 401 or 407 responses, annotate them with SASL authentication option and hope to find a browser that responds to the option of SASL authentication. Current status: There is no reasonable support for server-side SASL at this moment. We have asked others to help out with this. HTTP User handled with WSGI User Authentication users are client identities. Most protocols also recognise a user on the server side, but HTTP does not. And if it does, it is supposed to be the same as the client identity, which is thereby forced to fall under the server's realm. We see the resuls everywhere; the web asks you to create an account on many sites you visit. This is not practical. We therefore proposed an extension to HTTP that explicitly indicates the User as part of the resource name space identification on the server or, in URI terms, of the authority. If we move towards peer-to-peer HTTP service, this is going to be helpful for routing requests, perhaps in combination with encrypted portions in the initial TLS messages. This extension to HTTP is straightforward, and it has been implemented for WSGI in the WSGI_User class. This reads the User header, removes %-escapes, applies a syntax check that defaults to the NAI syntax with a default-on option to also allow empty strings, and if it matches, passes the value in a LOCAL_USER environment variable and will signal the impact on caching by adding a Vary: User response header. By default, WSGI-User implements backward compatibility with two older habits: - Basic authentication with a username and empty password - Local server convention for /~usernamepaths Both are a bit cluttered in comparison to the User header. Basic authentication conflates client and server identities and invalidates intermediate caching; local server conventions do not permit inferencing in clients or by their human users (and makes them resort to inconsistent, deductive reasoning). Current status: The code performs well in our test programs. Please try it live and report to us! Bring Your Own IDentity We are considering a BYOID mechanism based on Diameter servers hosted under domain names. This is not a web-only technology, so we are not limited to HTTP and can make a choice for a more dedicated technology. Diameter is the sequel to RADIUS; its security is better so it can be used for such realm crossover purposes; indeed, there are SRV records in DNS for this kind of purpose. Diameter's support for bulk interactions and routing of requests and responses has also improved. Finally, it is easier to extend with notions such as SASL fields. With this in mind, a server receiving a client identity john@example.com can lookup the Diameter server for example.com and relay SASL traffic to the realm. It does not need any local credentials to allow for this to work; all it needs to do is use TLS for trust in the link to the backend Diameter server. This is not the only way in which we think BYOID can be achieved. It will me much more powerful once we get our projects for Kerberos going: Impromptu Realm Crossover (KXOVER) and TLS-KDH. The advantage of these mechanisms is that a crossover relation is made between realms, not just for individual queries of individual users. This makes it extremely efficient for bulk use, but it will also take longer to get established. It is useful though; TLS-KDH authenticates thousands of times faster than tradition public-key certificates, and it resists quantum computers; KXOVER currently cannot match that last point, but will develop in that direction too. SASL is the short-term solution that can integrate seemlessly with this same approach. ARPA2 Access Control The ACL setup that we envision is flexible, generic and fast. More importantly, it is suitable for realm crossover uses. We are building libraries to support the general evaluation of this model, along with change subscription to keep dependents informed. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/arpa2.wsgi/
CC-MAIN-2020-10
refinedweb
923
51.48
A palindrome is a string which on reversal, is same as the string initially. In the above example of reversing a string, we see two strings: - FACE - MADAM On reversing the strings, we get FACE as ECAF while MADAM stays unaffected by reversal. Thus, MADAM is a palindrome. Similarly, we have different strings like RADAR, ANNA and LEVEL. We can also have multiple WORD palindromes like - My gym - Red rum, sir, is murder - Step on no pets So, how to solve this problem. Firstly, keep your IDE closed, and pick up a copy and a pen to write down the pseudo-code. For solving this problem, we have two approaches # APPROACH 1: Brute Force Yes, Let’s get down to brass tacks. Simply put, this will have three steps. Pseudo Code: - Store the string // input - Iterate over the string backwards // for loops - Check if the strings are equal This is the easiest solution possible. We are calculating the entire possibility of reversal of string and then comparing it. The code will go like this: Code: #include <iostream> include < string.h > using namespace std; int main() { char string1[20], stringReversed[20]; int i, length, j; int flag = 0; cout << "Enter a string: "; cin >> string1; length = strlen(string1); for (i = length - 1, j = 00. >= 0, j < length; i--, j++) { stringReversed[j] = string1; } for (i = 0; i < length; i++) { if (string1[i] != stringReversed[i]) { flag = 1; break; } } if (flag) { cout << string1 << " is not a palindrome-“ « endl; } else { cout << string1 << " is a palindrome- “« endl; } return 0; } We first reverse the string by using two variables, I which starts from the front and j which starts from the back. This reversal is then completed, after which we evaluate if the strings are equal or not. The run time of this solution is O(n) and this increases as the size of the string increases. # APPROACH 2: Optimized Solution In the above approach, what if instead of iterating the whole string and then using another loop to compare, we only iterate half a string and compare in the same loop? Since we know that the first half will be equal to second half in a palindrome, we use this as the improvement while devising our second algorithm. Pseudo Code: - Iterate over the string once, with one variable at the front and other variable at the back. - Check and compare the variables at each iteration - If all iterations are passed, we have a palindrome. Code: The run time of this solution is O(n/2) . And as clearly visible, we use a lot less looping. #include <iostream> #include <string.h> using namespace std., ' int main() { char string1[20]; int i, length; int flag = 0; cout << "Enter a string:"; cin >> string1; length = strlen(string1); for (i = 0; i < length; i++) { if (string1[i] != string 1[length - i - 1] { flag = 1; break; } if (flag) { cout << stringl<< " is not a palindrome" << endl; else { cout << string 1 << " is a palindrome"; return 0; } This is how we solve the check palindrome problem. Report Error/ Suggestion
https://www.studymite.com/check-string-is-palindrome-problem/
CC-MAIN-2020-45
refinedweb
503
78.59
30 June 2009 11:00 [Source: ICIS news] SINGAPORE (ICIS news)--Here is Tuesday’s end of day Asian oil and chemical market summary from ICIS pricing. CRUDE: Aug WTI $72.16/bbl up 67 cents Aug BRENT $71.74/bbl up 67 cents/bbl Crude futures strengthened on Tuesday afternoon, after hitting new eight month highs earlier in session. Unrest in ?xml:namespace> NAPHTHA: Asian naphtha prices closed $33/tonne higher Tuesday. First half August price indications were pegged at $643.50-644.50/tonne CFR (cost and freight) Japan, second half August at $637.50-638.50/tonne CFR Japan and first half September at $633.00-634.00/tonne CFR Japan. BENZENE: Prices firmed further in the afternoon with a deal heard for any August loading at $825/tonne FOB (free on board) TOLUENE: Prices edged lower to $775-785
http://www.icis.com/Articles/2009/06/30/9228469/evening-snapshot-asia-markets-summary.html
CC-MAIN-2015-14
refinedweb
143
76.32
Hi there, I am new here and would appreciate any advice. I am using the latest versions of rails / ruby / rvm etc as of about 2 days ago (when I set everything up). I created a new view (~app/views/stories/new.html.erb) by creating a blank document placing some code in it and saving it. When I try to view it however () it reports a routing error. Is there something I am missing here? Do I need to somehow report the presence of this new view? I noted that in the text the index.html.erb view (which displays correctly - although I have to actually type "~stories/index" as the url rather than it just being found at "stories/ ") is created automatically when I generated the controller. I couldn't find any other way to generate a new view - except by using a scaffold? It's quite frustrating as it means I can no longer continue with the tutorial in the book! Thanks for any advice you can give me. Well, in order to get to the bottom of your problem, post the contents of your routes.rb file, stories_controller.rb, and the whole error message. As far as generating views, it is possible when you generate your controllers. The following link gives you a great overview of using the command line generators. Thanks for taking the time to help me - here are the contents of the files you requested. Please let me know if there is anything else which may be useful. routes.rb: Shovell::Application.routes.draw do get "stories/index" # The priority is based upon order of creation: # first created -> highest priority. # Sample of regular route: # match 'products/:id' => 'catalog#view' # Keep in mind you can assign values other than :controller and :action # Sample of named route: # match 'products/:id/purchase' => 'catalog#purchase', :as => :purchase # This route can be invoked with purchase_url(:id => product.id) # Sample resource route (maps HTTP verbs to controller actions automatically): # resources :products # Sample resource route with options: # resources :products do # member do # get 'short' # post 'toggle' # end # # collection do # get 'sold' # end # end # Sample resource route with sub-resources: # resources :products do # resources :comments, :sales # resource :seller # end # Sample resource route with more complex sub-resources # resources :products do # resources :comments # resources :sales do # get 'recent', => 'welcome#index' # See how all your routes lay out with "rake routes" # This is a legacy wild controller route that's not recommended for RESTful applications. # Note: This route will make all actions in every controller accessible via GET requests. # match ':controller(/:action(/:id))(.:format)' end stories_controller.rb: class StoriesController < ApplicationController def index @story = Story.find(:first, :order => 'RANDOM()') end def new @story = Story.new end end Error message: Routing Error No route matches [GET] "/stories/new" Try running rake routes for more information on available routes. Rake routes output: stories_index GET /stories/index(.:format) stories#index Hey mate, In your routes.rb file you currently only have get "stories/index" you'll either need to add stories/new or just add resources :stories and rake. Yup, as t.ridge says you don't have a route defined for your new method. You can erase most of the routes file as it is all comments. To get things working you can change your routes file to the following: Shovell::Application.routes.draw do resources :stories end
https://www.sitepoint.com/community/t/simply-rails-2-page-160-problem/14384
CC-MAIN-2015-48
refinedweb
562
56.55
Web If you’re not yet familiar with our Early Access Programs, or if you want to find out what features were added in WebStorm 2018.2 EAP, check out this page. Here are some highlights from WebStorm 2018.2 EAP #6 (182.3341.1): - New Extract React component refactoring - Improved support for React namespaced components and PropTypes - Option to configure a global File Watcher available in all projects - New Rerun failed tests action for Karma, Jest, and Mocha - Improved support for the Angular CLI’s schematics - Manage logs on the new Logs tab in the Node.js run/debug configuration Extract React component We’re very excited about this feature! WebStorm WebStorm_2<< We hope you like this new feature and get a lot of use out of it! Please share your feedback with us in the comments below or on our issue tracker. Support for React namespaced components WebStorm now has better support for React components that have a namespace in their name. You can now get appropriate code completion and navigation to the definition for these components in JavaScript and TypeScript files. Better support for PropTypes Code completion is now more precise when you use PropTypes like shape, oneOf, instanceOf, or arrayOf. Here are a couple of examples: Here in the completion we have the values listed using PropTypes.oneOf: And here we have completion for array methods because PropTypes.arrayOf is used: Global File Watchers With File Watchers, you can run command-line tools like Prettier automatically when you change or save a file. Before, it was only possible to configure and use a file watcher in a specific project. If you wanted to use the same watcher in a different project, you had to repeat the whole configuration. But now you can create and store File Watchers on the IDE level, and easily enable them in different projects. To create a global File Watcher, open the IDE Preferences (from the project or from the Welcome screen) and go to Tools | File Watchers, click the + button, and select Custom from the list. Now configure the File Watcher for the tool you want to use (for more information on how to do that, see our documentation). Once you’ve saved the new File Watcher, you can decide if it’s going to be available only in the current project – select Project in the right column or for all projects – then select Global. Note that the global File Watchers will be available in all projects, but disabled by default. If you want to use it, go to the project’s Preferences | Tools | File Watchers and select the check-box next to it. Let’s create a new global File Watcher for Prettier following the steps described in the Prettier docs. What we want to change is the path to Prettier – for every project where we want to enable this File Watcher, we want to use Prettier installed in the project’s node_modules folder. To do that, we use a macro in the path: $ProjectFileDir$/node_modules/.bin/prettier. Rerun failed tests There’s a new Rerun Failed Tests action available when you run tests with Karma, Jest, or Mocha in WebStorm. As the name suggests, it allows you to rerun only those tests that have failed, instead of running all tests after the fix. Run more Angular schematics using the Angular CLI integration With the New… – Angular CLI… action in the Project view, you can now generate code using schematics that are defined in libraries like @angular/material (note that you need to use Angular 6 for this). Previously, you could only generate components, services, and other blueprints defined in the Angular CLI itself. In addition, WebStorm now shows the schematic’s description and provides code completion and description for available options. _11<< Please see the Release Notes for a full list of issues. WebStorm Team 5 Responses to WebStorm 2018.2 EAP, 182.3341: extract React component, global file watchers, rerun failed tests EZ says:June 21, 2018 Hi, can you also let us specify options for ng new? ng new is used in the create project dialog shown here: Angular CLI has several command line options that you are not able to specify in the current WebStorm version. Can you add an additional box on the form there to let us add some of these options when creating a project? Ekaterina Prigara says:June 22, 2018 Hi, thank you for the feedback! Please vote for this issue and follow it for updates: EZ says:June 25, 2018 It says “You can’t vote for the issue”. The latest activity says it’s been assigned to someone – does that mean it’s being actively worked on? Ekaterina Prigara says:June 26, 2018 To vote for a feature, please login into our tracker. No, we are not working on that right now, but it might be implemented by the WebStorm 2018.2 release. EZ says:July 23, 2018 Doesn’t look like it made it ): Maybe 2018.2.1?
https://blog.jetbrains.com/webstorm/2018/06/webstorm-2018-2-eap-182-3341/
CC-MAIN-2021-04
refinedweb
839
60.55
README SørensenSørensen Sørensen is a modern, i18n-friendly hotkey library for the Web. By modern, we mean that it uses the KeyboardEvent.code property for matching keys. That means that physical keys, as opposed to the character produced by the key, is used for defining key bindings. Other KeyboardEvent properties tend to be very flaky, because they can change depending on the Keyboard Mapping used. By i18n-friendly, we mean that it provides utility methods that use the Keyboard Map API to convert between key code values and strings to be presented to the user, such as converting a KeyQ value to A on a French keyboard, and then converting Z to KeyW. InstallInstall Just install the library using your package manager of choice. Sørensen has no external dependenices, but it assumes a reasonably modern browser, as it uses ES7 features. npm i --save @sofie-automation/sorensen or: yarn add @sofie-automation/sorensen UseUse import sorensen from '@sofie-automation/sorensen' await sorensen.init() sorensen.bind('Tab+KeyT', (e) => { // Sorensen extends the KeyboardEvent object with `comboChordCodes` and `comboCodes` console.log(e) e.preventDefault() }, { ordered: true, exclusive: true }) For a comprehensive list of possible KeyboardEvent code values and their corresponding keys see KeyboardEvent: code values on MDN. Additionally, Sørensen allows special, "virtual" codes, for conveniance when using modifiers and the Enter key. These are:
https://www.skypack.dev/view/@sofie-automation/sorensen
CC-MAIN-2022-05
refinedweb
222
54.02
. OMA rocks. Not quite as cool as EAS if you have a smartphone though ;-). Ya, OMA is cool. My problem is that on my phone (Sanyo 8100), I have a bookmark set to the Inbox part. Everytime I come back to that bookmark after not using it, I get a page that says I’ve been inactive for too long and I have to go back to the main page. Not a HUGE problem, but something like 2 extra clicks and page reloads — not as speedy to do on a cell phone. Then again, I know my device isn’t supported yet, so who am I to really complain? I’d really like to see OMA work over SSL. While there are many people that still deploy OWA using Basic authentication and no SSL, it’s still a security risk. Plus, if you do want to implement SSL for OWA and do not have a FE/BE setup, there are several things you have to do in order to get OMA working. I’ve got OMA deployed (in test lab) and have accessed it using PocketPC 2003 and it looks terrific and is pretty fast. Perhaps SSL won’t work because of cell phones, but I think something should be done to address security concerns. Ben, Unless I’m doing something stupid and being taken in by a false padlock on my browser, I’m pretty sure OMA does work over SSL – its deployed here and works fine for me surfing in on a smartphone 2003 system. The only downside – and I am 99.9% certain this is us being dumb and not looking into the problem properly yet anyway – is that it constantly "forgets" what the default domain is to authenticate users against. I don’t even remember doing anything "fancy" with IIS to make it work either. We are using a fe/be configuration though. Regards Rob Moir I have deployed OMA in our enterprise environment, and my disenchantment with the product is that it is locked down to using only the default namespace. With 20000 users, we have six SMTP namespaces, which means for someone to use OMA, I have explicitly give them a primary namespace address, even though they don’t work for the primary namespace’s company. It is similar to the ActiveSync limitation because of its ties to the /exchange vdir. I wish I could deploy AS and OMA to custom vdirs/namespaces, just like I can do with OWA. It is possible to change OMA aspect/look&fill ? It function great on my smartphne(Treo 600) but is it possible to make more nice interface? Thanks A few answers to questions above. OMA+SSL As Rob says above, there should be no problem getting OMA to work with SSL. All devices listed as supported by OMA, have also been tested using SSL again IIS. OMA vdirs It is possible to rename or create new vdirs for OMA and EAS just as it is for OWA. It’s easiest to do this through ESM. You can also make OMA and EAS use some other vdir than /exchange to access backend data by using the "HKLMSYSTEMCurrentControlSetServicesMasSyncParameters ExchangeVDir" registry key on the FE machine. Set it to ‘/mailboxDataAccess’, or whatever the name of the BE vdir that is exposing DAV+OWA is. Modifying OMA UI It is true that E2003 OMA UI on PPC/Smartphone isn’t as good looking as it could be. OMA was written with the goal of having it work on as many mobile devices as possible, and making it look great on richer devices wasn’t a high priority (especially not for PPC/Smartphone, where great Exchange access through Pocket Outlook and EAS is available). Unfortunately there is no good way of modifying the OMA UI. Iffy: If you change your bookmarked URL to not include the session ID (the weird string of characters) then you shouldn’t get the session timed out message everytime you use the bookmark. Hope this helps. You can open up usage to other than the default namespace by creating new vdirs as Kristian states above. You may need to provide the users with a default namespace address but you can supply an additional SMTP address that will actually be used by the user when supplying credentials. See the nice article on hosting at as a way to get this done. Help! My new Nokia 6820 Phone is limiting my USERNAME field in OMA to 16 characters. This isn’t long enough for me to enter my full domainusername. So I can’t access OMA!! Is there a way to embed the username in the URL ie. Why would Nokia limit this field to 16 characters anyway? I can’t seem to get OMA to work. I always get this message: A System error has occurred while processing your request. Please try again. If the problem persists, contact your administrator. I’ve tried the solution that Microsoft suggested at but I still get the same message. I know that my Exchange Virtual Directory isn’t using SSL or Uses Form-Based Authentication. I’m running Exchange Server 2003 and Windows Server 2003 on the same box and trying to connect with a Nokia 3650. Any ideas? The list of devices that have been tested (and are supported by Microsoft) with OMA can be found at. To get support for all these devices, you need to install the latest available ‘Device Update’ (see link) on your Exchange Front-End server. Microsoft continuously releases new Device Updates adding support for more devices. The Nokia 6820 mentioned above is not on the supported list. This can mean it was never selected to be tested, or that Microsoft couldn’t make it work well with OMA. It indeed sounds strange that the username input field for Basic Authentication would be limited to 16 characters. The Nokia 3650 mentioned above is on the supported list, but only if you have installed the most recent DU (Device Update). You could try looking in the Event Log of your Exchange server to see if there are any OMA events giving more information than the end user error message. I have set up OMA at home and when I try to access it on my Sony Ericsson T616 I get the following message: 500: Web Service problem: Please contact teh service provider. I can access it fine using Internet Explorer, I think the error is being generated because I don’t have an SSL certificate from one of the providers specified in my T616. Has anyone else run into a similar problem? Thanks Simon Hi Simon, Alex, Thanks for getting back to me. I’m running Windows Server 2003/Exchange Server 2003 I didn’t install URLScan because I read it wasn’t needed due to the enhanced security features of IIS 6. AT&T Wireless is my service provider, on my Sony Ericsson T616 under "Connect > WAP Options > Security > Trusted Certificates" I have the following 4 certificates listed: Verisign GlobalSign CyberTrust; Baltimore Entrust.net WAP I have a test certificate from Verisign however the Issuer is different than that of a regular Verisign certificate. I don’t really have a spare $300+ to spend on an SSL certificate from one of these trusted authorities and thus far I have been unable to find out how to add Trusted or Client certificates to the phone. Naturally I would rather not access OMA without SSL Simon Hi Simon, As I explained in my previous post, the list of certs installed on your client device is likely irrelevant, as the WAP gateway is probably doing all negotiation and encryption. If the problem is indeed due to SSL handshake failing, you have two options: – Purchase a server SSL certificate from a well-known, trusted authority – Negotiate with AT&T Wireless to install your custom certificate signature on their WAP gateway. What you are hitting on, is an issue of WAP 1.x protocol falling short on providing adequate SSL facilities. Since pre-WAP2.0 phones can not speak SSL directly, translation facility was offered on gateway machines on mobile operators’ network edge. It makes sense for an operator to configure their gateways to only trust well-known certificate issuers, in order to protect their customers from spoofing attacks. Unfortunately this means that sites which use custom certificates, or certificates with inconsistent information (like your Verisign test one) may not be accessible from any devices on said carriers’ network. This is fixed with direct SSL capabilities of WAP 2.0; however both your device and the carrier gateway must support this feature for it to work. Please keep in mind that SSL negotiation failure is a likely suspect for your experiencing the HTTP 500 error, but it may not be the actual problem after all. There is not enough information here to really troubleshoot. I’d suggest giving our PSS folks a call if you want to explore other avenues of resolving this issue. Good luck! Alex Alex – I have looked hi and low for the Exchange Activesync component of 2003…alas I’ve found everything else Mobile but that…did MS forget to put in on the CD? Ron Hi guys, I’ve been looking to get OMA working on my Exchange 2003 Server for quite some time now. I have installed Device update 4 and have gotten to the stage that I can log on, get a list of folders, but when I click on Inbox, I get a message saying that I have been inactive for 20 minutes or am trying to back up 8 pages of data???? This happens using my SonyEriccsson T610 and IE6. Does anyone know what this might mean? Many Thanks, Ross. Ron, Exchange ActiveSync is installed and enable by default with Exchange 2003. I am having the same multiple namespace + OMA access problem that Scott has. Kristian pointed out adding the registry entry (similar to Q817379). But that registry only points to a different VD which points to a specific (one) SMTP namespace. Is there another way around this. Thanks Jason: You’re going to have to give us more information about your scenario for us to be able to help out. If you need someone to talk to about the problem, I suggest calling PSS. Ross: Are you accessing OMA through a URL that looks liks "https://<your server>/oma", or is there something after the ‘/oma’? If there is something behind the ‘/oma’, the application is going to think that you’re trying to access a particular item (eg. an email or meeting request) that you bookmarked from your last session using OMA. Since OMA no longer has information (session state) about what item was matched to the URL you’re using, it is letting you know that the session state has been cleared since you last accessed that URL. Sorry for not describing the situation more clearly. Here is what I am trying to do. One Exchange 2003 server hosting xxx.com and yyy.com. SSL is required. I had to make the changes suggested by Q817379 in order to get OMA working even though I don’t have a FE/BE setup. Created a new VD called xxxDAV and have the ExchangeVDir key pointing to xxxDAV. OMA VD is pointing to xxx.com by default. Users with xxx.com address can access OMA. Users with yyy.com address cannot access OMA. Thanks again Hi Ross, I have tried to setup OMA a million times now. And it seems like a lottery to make it work. I have followed the deployment guide to the letter at least 100 times now. I always use a clean install, new domain new org etc. I got it working on the 5 or 6th atempt. But the problem I get/got all the other times was this: I’d goto my directory and use my login details to login. I would get rejected 3 times and then given an unauthorised access page. I tried giving accounts every permission under the sun. But to no avail. But I redone it again and again (the exact same way I did when it worked) many many times and it only works sometimes. Out of a realistic 50 clean installs I had OMA working about 6 times. These are not good odds, no matter how you look at it. This was on a MS Server 2003/Exchange 2003 box, reformated and setup clean each time. Is there a definitive, it works first time if you do this….. article/guide? Any help would probably stop others from freaking out for weeks. Cheers G Hi Everyone Is there any howto out there on howto :-) setup OMA and SSL in an environment using single server (=not using FE/BE servers). /Johan@husera.se Hi there! Speaking of WAP from a phone, Does anyone know how to install Certificates onto a SonyEricsson T616 phone? I’ve somehow lost all the Trusted Certs in there, but now, I have them as .crt files on my PC. How do I get them installed on my phone as Trusted Cert again? Thx MUCH!! -Kor I love OMA but I don’t use it at all because of one problem that I can’t seem to get past (at least with my phone – a supported Nokia 6200) – I -always- have to log in. Is there any way to somehow store the credentials so I dn’t have to numpad it in -every- time? It doesn’t appear that I can do it in the URL (I’ve tried many times and just can’t seem to get it to work). Am I doing something really, really wrong here? Appreciation in advance for any and all suggestions! Cheers, Jeff
https://blogs.technet.microsoft.com/exchange/2004/03/16/outlook-mobile-access-from-an-exchange-newbie/
CC-MAIN-2016-40
refinedweb
2,305
71.44
Red Hat Bugzilla – Bug 168578 Review Request: perl-Class-ErrorHandler Last modified: 2007-11-30 17:11:13 EST Spec Name or Url: SRPM Name or Url: Description: This is Class::ErrorHandler, a base class for classes that need to do error handling (which is, probably, most of them). Personal preference: I'd not add the COPYING and Artistic files. This Perl dist only consists of one single file, containing clear Copyright and License terms inlined, so bloating the rpm with legally doubtful files doesn't make much sense to me. Anyway, APPROVED. Ping - Steve, it's more than 2 months since this package has been approved. I am sorry, but don't see any other possibility but to close this Request as FAILED shouldn't you respond within 1 week from now, or somebody else volunter to take over maintainership of this package. (In reply to comment #2) > Ping - Steve, it's more than 2 months since this package has been approved. > > I am sorry, but don't see any other possibility but to close this Request as > FAILED shouldn't you respond within 1 week from now, This week has passed, no reaction from Steve so far. As this package blocks others, instead of letting it FAIL, I've decided to import it into CVS, request a built on devel and to mark it as orphaned. I'm sorry, I honestly never saw the APPROVED mail or your last one. Work's been crazy, so I've been in maintenance mode lately instead of checking on new package requests. (In reply to comment #4) > I'm sorry, I honestly never saw the APPROVED mail or your last one. Should you still be interested in maintaining this package, would you please take ownership of this package (change owners.list accordingly) and issue branch requests for those FC versions you'd like to see implemented?
https://bugzilla.redhat.com/show_bug.cgi?id=168578
CC-MAIN-2016-30
refinedweb
315
65.56
Opened 6 years ago Closed 6 years ago Last modified 3 years ago #11130 closed (fixed) incorrect namespace error in django/docs/howto/custom-template-tags.txt code example Description The surrounding code examples assume that the import statement used is from django import template. The last code example in the Passing template variables to the tag section follows this except for the line: self.date_to_be_formatted = Variable(date_to_be_formatted) This line should read: self.date_to_be_formatted = template.Variable(date_to_be_formatted) Attachments (1) Change History (5) Changed 6 years ago by phyfus comment:1 Changed 6 years ago by phyfus - Cc johann@… added - Needs documentation unset - Needs tests unset - Patch needs improvement unset comment:2 Changed 6 years ago by Alex - Triage Stage changed from Unreviewed to Ready for checkin comment:3 Changed 6 years ago by kmtracey - Resolution set to fixed - Status changed from new to closed comment:4 Changed 3 years ago by jacob - milestone 1.1 deleted Milestone 1.1 deleted Note: See TracTickets for help on using tickets. (In [10799]) Fixed #11130 -- Corrected code example in custom template tag doc. Thanks phyfus.
https://code.djangoproject.com/ticket/11130
CC-MAIN-2015-11
refinedweb
182
50.57
- NAME - VERSION - TYPES IN PERL? - THE TYPES - WHAT IS A TYPE? - SUBTYPES - TYPE NAMES - COERCION - TYPE UNIONS - TYPE CREATION HELPERS - ANONYMOUS TYPES - VALIDATING METHOD PARAMETERS - LOAD ORDER ISSUES - AUTHORS NAME Moose::Manual::Types - Moose's type system VERSION version 2.1100 TYPES IN PERL? Moose provides its own type system for attributes. You can also use these types to validate method parameters with the help of a MooseX module. Moose's type system is based on a combination of Perl 5's own implicit types and some Perl 6 concepts. You can create your own subtypes with custom constraints, making it easy to express any sort of validation. Types have names, and you can re-use them by name, making it easy to share types throughout a large application. However, this is not a "real" type system. Moose does not magically make Perl start associating types with variables. This is just an advanced parameter checking system which allows you to associate a name with a constraint. That said, it's still pretty damn useful, and we think it's one of the things that makes Moose both fun and powerful. Taking advantage of the type system makes it much easier to ensure that you are getting valid data, and it also contributes greatly to code maintainability. THE TYPES The basic Moose type hierarchy looks like this Any Item Bool Maybe[`a] Undef Defined Value Str Num Int ClassName RoleName Ref ScalarRef[`a] ArrayRef[`a] HashRef[`a] CodeRef RegexpRef GlobRef FileHandle Object In practice, the only difference between Any and Item is conceptual. Item is used as the top-level type in the hierarchy. The rest of these types correspond to existing Perl concepts. In particular: Boolaccepts 1for true, and undef, 0, or the empty string as false. Maybe[`a]accepts either `aor undef. Numaccepts integers, floating point numbers (both in decimal notation & exponential notation), 0, .0, 0.0 etc. It doesn't accept numbers with whitespace, Inf, Infinity, "0 but true", NaN & other such strings. ClassNameand RoleNameaccept strings that are either the name of a class or the name of a role. The class/role must already be loaded when the constraint is checked. FileHandleaccepts either an IO::Handle object or a builtin perl filehandle (see "openhandle" in Scalar::Util). Objectaccepts any blessed reference. The types followed by "[`a]" can be parameterized. So instead of just plain ArrayRef we can say that we want ArrayRef[Int] instead. We can even do something like HashRef[ArrayRef[Str]]. The Maybe[`a] type deserves a special mention. Used by itself, it doesn't really mean anything (and is equivalent to Item). When it is parameterized, it means that the value is either undef or the parameterized type. So Maybe[Int] means an integer or undef. For more details on the type hierarchy, see Moose::Util::TypeConstraints. WHAT IS A TYPE? It's important to realize that types are not classes (or packages). Types are just objects (Moose::Meta::TypeConstraint objects, to be exact) with a name and a constraint. Moose maintains a global type registry that lets it convert names like Num into the appropriate object. However, class names can be type names. When you define a new class using Moose, it defines an associated type name behind the scenes: package MyApp::User; use Moose; Now you can use 'MyApp::User' as a type name: has creator => ( is => 'ro', isa => 'MyApp::User', ); However, for non-Moose classes there's no magic. You may have to explicitly declare the class type. This is a bit muddled because Moose assumes that any unknown type name passed as the isa value for an attribute is a class. So this works: has 'birth_date' => ( is => 'ro', isa => 'DateTime', ); In general, when Moose is presented with an unknown name, it assumes that the name is a class: subtype 'ModernDateTime' => as 'DateTime' => where { $_->year() >= 1980 } => message { 'The date you provided is not modern enough' }; has 'valid_dates' => ( is => 'ro', isa => 'ArrayRef[DateTime]', ); Moose will assume that DateTime is a class name in both of these instances. SUBTYPES Moose uses subtypes in its built-in hierarchy. For example, Int is a child of Num. A subtype is defined in terms of a parent type and a constraint. Any constraints defined by the parent(s) will be checked first, followed by constraints defined by the subtype. A value must pass all of these checks to be valid for the subtype. Typically, a subtype takes the parent's constraint and makes it more specific. A subtype can also define its own constraint failure message. This lets you do things like have an error "The value you provided (20), was not a valid rating, which must be a number from 1-10." This is much friendlier than the default error, which just says that the value failed a validation check for the type. The default error can, however, be made more friendly by installing Devel::PartialDump (version 0.14 or higher), which Moose will use if possible to display the invalid value. Here's a simple (and useful) subtype example: subtype 'PositiveInt', as 'Int', where { $_ > 0 }, message { "The number you provided, $_, was not a positive number" }; Note that the sugar functions for working with types are all exported by Moose::Util::TypeConstraints. TYPE NAMES Type names are global throughout the current Perl interpreter. Internally, Moose maps names to type objects via a registry. If you have multiple apps or libraries all using Moose in the same process, you could have problems with collisions. We recommend that you prefix names with some sort of namespace indicator to prevent these sorts of collisions. For example, instead of calling a type "PositiveInt", call it "MyApp::Type::PositiveInt" or "MyApp::Types::PositiveInt". We recommend that you centralize all of these definitions in a single package, MyApp::Types, which can be loaded by other classes in your application. However, before you do this, you should look at the MooseX::Types module. This module makes it easy to create a "type library" module, which can export your types as perl constants. has 'counter' => (is => 'rw', isa => PositiveInt); This lets you use a short name rather than needing to fully qualify the name everywhere. It also allows you to easily create parameterized types: has 'counts' => (is => 'ro', isa => HashRef[PositiveInt]); This module will check your names at compile time, and is generally more robust than the string type parsing for complex cases. COERCION A coercion lets you tell Moose to automatically convert one type to another. subtype 'ArrayRefOfInts', as 'ArrayRef[Int]'; coerce 'ArrayRefOfInts', from 'Int', via { [ $_ ] }; You'll note that we created a subtype rather than coercing ArrayRef[Int] directly. It's a bad idea to add coercions to the raw built in types. Coercions are global, just like type names, so a coercion applied to a built in type is seen by all modules using Moose types. This is another reason why it is good to namespace your types. Moose will never try to coerce a value unless you explicitly ask for it. This is done by setting the coerce attribute option to a true value: package Foo; has 'sizes' => ( is => 'ro', isa => 'ArrayRefOfInts', coerce => 1, ); Foo->new( sizes => 42 ); This code example will do the right thing, and the newly created object will have [ 42 ] as its sizes attribute. Deep coercion Deep coercion is the coercion of type parameters for parameterized types. Let's take these types as an example: subtype 'HexNum', as 'Str', where { /[a-f0-9]/i }; coerce 'Int', from 'HexNum', via { hex $_ }; has 'sizes' => ( is => 'ro', isa => 'ArrayRef[Int]', coerce => 1, ); If we try passing an array reference of hex numbers for the sizes attribute, Moose will not do any coercion. However, you can define a set of subtypes to enable coercion between two parameterized types. subtype 'ArrayRefOfHexNums', as 'ArrayRef[HexNum]'; subtype 'ArrayRefOfInts', as 'ArrayRef[Int]'; coerce 'ArrayRefOfInts', from 'ArrayRefOfHexNums', via { [ map { hex } @{$_} ] }; Foo->new( sizes => [ 'a1', 'ff', '22' ] ); Now Moose will coerce the hex numbers to integers. Moose does not attempt to chain coercions, so it will not coerce a single hex number. To do that, we need to define a separate coercion: coerce 'ArrayRefOfInts', from 'HexNum', via { [ hex $_ ] }; Yes, this can all get verbose, but coercion is tricky magic, and we think it's best to make it explicit. TYPE UNIONS Moose allows you to say that an attribute can be of two or more disparate types. For example, we might allow an Object or FileHandle: has 'output' => ( is => 'rw', isa => 'Object | FileHandle', ); Moose actually parses that string and recognizes that you are creating a type union. The output attribute will accept any sort of object, as well as an unblessed file handle. It is up to you to do the right thing for each of them in your code. Whenever you use a type union, you should consider whether or not coercion might be a better answer. For our example above, we might want to be more specific, and insist that output be an object with a duck_type 'CanPrint', [qw(print)]; We can coerce file handles to an object that satisfies this condition with a simple wrapper class: package FHWrapper; use Moose; has 'handle' => ( is => 'rw', isa => 'FileHandle', ); sub print { my $self = shift; my $fh = $self->handle(); print {$fh} @_; } Now we can define a coercion from FileHandle to our wrapper class: coerce 'CanPrint' => from 'FileHandle' => via { FHWrapper->new( handle => $_ ) }; has 'output' => ( is => 'rw', isa => 'CanPrint', coerce => 1, ); This pattern of using a coercion instead of a type union will help make your class internals simpler. TYPE CREATION HELPERS The Moose::Util::TypeConstraints module exports a number of helper functions for creating specific kinds of types. These include class_type, role_type, maybe_type, and duck_type. See the docs for details. One helper worth noting is enum, which allows you to create a subtype of Str that only allows the specified values: enum 'RGB', [qw( red green blue )]; This creates a type named RGB. ANONYMOUS TYPES All of the type creation functions return a type object. This type object can be used wherever you would use a type name, as a parent type, or as the value for an attribute's isa option: has 'size' => ( is => 'ro', isa => subtype( 'Int' => where { $_ > 0 } ), ); This is handy when you want to create a one-off type and don't want to "pollute" the global namespace registry. VALIDATING METHOD PARAMETERS Moose does not provide any means of validating method parameters. However, there are several MooseX extensions on CPAN which let you do this. The simplest and least sugary is MooseX::Params::Validate. This lets you validate a set of named parameters using Moose types: use Moose; use MooseX::Params::Validate; sub foo { my $self = shift; my %params = validated_hash( \@_, bar => { isa => 'Str', default => 'Moose' }, ); ... } MooseX::Params::Validate also supports coercions. There are several more powerful extensions that support method parameter validation using Moose types, including MooseX::Method::Signatures, which gives you a full-blown method keyword. method morning ( Str $name ) { $self->say("Good morning ${name}!"); } LOAD ORDER ISSUES Because Moose types are defined at runtime, you may run into load order problems. In particular, you may want to use a class's type constraint before that type has been defined. In order to ameliorate this problem, we recommend defining all of your custom types in one module, MyApp::Types, and then loading this module in all of your other modules..
https://metacpan.org/pod/release/ETHER/Moose-2.1100-TRIAL/lib/Moose/Manual/Types.pod
CC-MAIN-2015-35
refinedweb
1,909
61.26
After a dry spell, there’s now a couple of posts at once… 🙂 I have alluded to this in the past, but I thought I’d take the opportunity to provide a specific post on this topic. The name itself is a little misleading, but… To answer the question, it gives us access to the VisualTree, a tree that we can iterate up and down to our hearts content. How do I know this? A number of years ago I was working on a coolstorage program where I needed to explore the visual tree to provide some nifty effects. It was there that I learnt about WPF and how the interface objects were stored as a hierarchy and I gained a newfound respect for Microsoft and WPF! Why am I going on about this – well, ScriptUtil.FindChild() will only go down the visual tree, not up it, so it is very helpful for us to know where in the scheme of things Smart Office grants us an entry point – then we will take that entry point and exploit the .Net framework 🙂 Probably the easiest way to demonstrate where the Controller.RenderEngine.Content sits in the visualtree is with a script: import System; import System.Windows; import System.Windows.Controls; import MForms; package MForms.JScript { class testContent { public function Init(element: Object, args: Object, controller : Object, debug : Object) { var content : Object = controller.RenderEngine.Content; content.Background = new System.Windows.Media.SolidColorBrush(System.Windows.Media.Colors.Black); } } } This script will simply set the background to black for the area that the .Content grid occupies. Nice, quick and easy. As we can see the Sorting order isn’t within the black area, so passing controller.RenderEngine.Content to ScriptUtil.FindChild() will just return null. We have to walk up the tree to get to the sorting order. This can be done with some code like so – this will take you all the way up the visual tree (I usually look for a standard control and iterate up the list until I hit that control). var parent : Object = content var lastParent : Object = content; while(null != (parent = VisualTreeHelper.GetParent(parent))) { lastParent = parent; } please can you write a full script to coloring sorting order level If you take a look at this post it demonstrates how to walk up the Visual Tree, then you will need to walk down the Visual Tree until you find the Sort Order. Cheers, Scott
https://potatoit.kiwi/2012/05/16/controller-renderengine-content-what-does-it-give-us-access-to/
CC-MAIN-2017-39
refinedweb
404
63.39
China good quality low price manufacture&exporter&supplier ro... US $0-1 10000 Pieces (Min. Order) china suppliers low price good electrical wire mesh cable tra... US $1-5 1 Meter (Min. Order) good quality fish meal low price,origin china US $450-500 20 Tons (Min. Order) Good quality 2 Wheel electric motorcycle,Low Price electric m... US $410-550 30 Pieces (Min. Order) low price adult tricycles BRI-S02 china 50cc scooter 2 stroke... US $700-1200 1 Piece (Min. Order) Good Quality Boxchip A13 7 inch Best Low Price Tablet Pc US $20-30 1 Piece (Min. Order) china manufacture low price good quality 7'' infrared multi t... US $31-67 1 Piece (Min. Order) Low price/ daily use/uncommon appearance,pretty good china fa... US $0-55 1 Piece (Min. Order) low price good quality china 10 inch dvd bluetooth gps revers... US $78.3-88.3 1 Piece (Min. Order) Most popular cool design good quality low price smart two whe... US $170.0-210.0 5 Pieces (Min. Order) Fast Delivery Low Price Good Battery blue Electric Scooter ... US $115-135 1 Piece (Min. Order) 2015 latest low price good voice tablet pc china price, Suppo... US $26.99-29.99 1 Piece (Min. Order) Low price best shipping freight forwarder from China to US Am... US $10-65 1 Cubic Meter (Min. Order) Shipping Container China to Dominican Republic Best Price US $1-200 1 Cubic Meter (Min. Order) import china goods 9 inch android tablet with low price US $37-42 1 Piece (Min. Order) taobao /alibaba low price of shipping to canada from china fo... US $2-30 1 Cubic Meter (Min. Order) Cheap goods from china OEM 9inch android PC tablet to compute... US $34-37 100 Pieces (Min. Order) Top Selling 10.1 Inches 1.4GHZ Tablet ATM7029 Computer impor... US $1-56 1 Piece (Min. Order) 7 inch vatop tablet pc bulk buy from China with good quality ... US $1-43.5 100 Units (Min. Order) Custom good quality low price balance scooter alibaba in ... US $170-220 1 Piece (Min. Order) low price adult tricycles BRI-S02 china 50cc scooter 2 stroke... US $700-1200 1 Piece (Min. Order) new products on china market 7inch fdd 4g lte oem smartphone ... US $103.6-119.9 20 Pieces (Min. Order) 2015 9 inch Tablet A33 Android PC Tablet good price made in ... US $1-41 1 Piece (Min. Order) secure online trading China factory supply new low price self... US $300.00 1 Piece (Min. Order) Good quality and low price uses sim card 7 inch cheapest rugg... US $44.00 20 Pieces (Min. Order) China Supplier M1013D RAM 512MB ROM 8GB Alwinner A33 Quad Cor... US $57-60 200 Pieces (Min. Order) low price vespa style electric scooter company US $280-315 30 Pieces (Min. Order) Good Quality Low Price Wholesale 10Inch Hover Board, Go Board... US $155-180 5 Pieces (Min. Order) taobao /alibaba low price of shipping to malaysia US $25-75 1 Cubic Meter (Min. Order) Top Sale mini pc octa core with low price US $1-130 10 Units (Min. Order) taobao /alibaba low price of shipping to indonesia--------- v... US $1-100 1 Cubic Meter (Min. Order) Android tablet pc, 10 inch tablet pc, good price tablet pc US $30-32 10 Pieces (Min. Order) Good Quality Boxchip A13 7 inch Best Low Price Tablet Pc US $20
http://www.alibaba.com/showroom/low-price-china-goods.html
CC-MAIN-2016-07
refinedweb
576
79.36
Hi all, I have a 32-bit processor, and I believe floats are generally given 23 bits for the mantissa, but I want to know for sure. However I'm running Red Hat Linux 9.0, which has no float.h that I can find, even though /usr/include/c++/3.2.2/cfloat seems to include it?? The cfloat header looks like this: but a 'find' doesn't yield any float.h anywhere on my system.but a 'find' doesn't yield any float.h anywhere on my system.Code: #ifndef _CPP_CFLOAT #define _CPP_CFLOAT 1 #pragma GCC system_header #include <float.h> #endif My floats are 4 bytes in size, but how can I find how those 32 bits are actually structured, other than writing a for loop that just spits out the biggest number it can, then seeing how many bits are required to represent that number? Thanks.
https://cboard.cprogramming.com/cplusplus-programming/50506-what-float-equivalent-int_max-printable-thread.html
CC-MAIN-2018-05
refinedweb
149
76.22
Django is a web development framework that every Python developer should experience, and might just be worth learning Python for. Introduction Django is a free open source framework for developing websites. It offers unprecedented ease in defining complex data structures, and emphasizes rapid development and the DRY principle (Don't Repeat Yourself). Some of the major advantages of Django, which come out-of-the-box are: - MVC / MTV paradigm The applications you create cleanly separate the model (the part the describes your data), the view (the part that generates the data to be displayed) and the template (the part that defines the visual appearance of your content) - In-build ORM (Object-relation mapper) No need to learn SQL! Django lets you describe how your data is related in Python by defining Model classes. The SQL is automatically generated, and you have a ready-made API to work with your application's data. - Admin interface automatically generated For whatever kind of data structure you create for your website, an administration panel is automatically created, and is usable out-of-the-box. The administration panel can be customized for even better usability though. - Highly customizable URL schemes Django uses Regular Expressions to match URLs making it possible to create incredible complex URL schemes. It is important to note that Django is NOT a CMS! You can use it to build your own CMS though. It does not come inbuilt with anything, and consequently it is not constrained by anything. There are a number of modules you can find online that will let you create a Django application with ease. Installing Django Since Django is a Python based framework, do go ahead and install Python, you can find it in this month's DVD. You need Python 2.7 not Python 3. If you are running Linux, it is likely you have Python installed, and Django is already in your repositories. In this case install from your distro's repos (on openSUSE it's sudo zypp install python-django, on Ubuntu sudo apt-get install python-django) this is the best way since this way it will automatically be updated and dependencies will automatically be installed. If you are running Windows or your Linux distro's repositories do not include Django you can download it from. Extract the archive to a known directory, and in the command line go to the directory in which you have extracted Django. Now run "sudo python setup.py install" on Linux or "python setup.py install" on Windows. Dome Django features might require you to install additional Python modules, but that is out of the scope of this article, and we will not be covering those features for now. In a production environment you will need an SQL server and a web server to serve your Django application. Right now however we will use the inbuilt SQLite database and the inbuilt server. Creating our Django site Django can be used to create nearly any kind of website. We will be making a simple application that lets you track the books you have lent to others. We need to decide the structure of our data. We need the application to be able to store a list of Books, and a list of Persons, and associate a book with a person if it is lent to them. A person will have a name, a phone number and an email address so you can contact them. A Book will have a name, an entry for the person who took it, and the borrowing date. STEP 1: Create a site Go to a folder where you wish to start this project in the command line, and type the following: python django-admin.py startproject myproject This will create the basic structure of your website inside the directory myproject. Enter that directory. NOTE: django-admin.py might not be in your path on Windows. You can find it in C:Python27Scriptsdjango-admin.py in which case use that path instead of just django-admin.py. STEP 2: Create an app You've created a website, but it needs to have an application that contains a bulk of the code. A site may have multiple apps and an app might be used by multiple sites. You can create one with the following command: python mange.py startapp books STEP 3: Set up your database Open the file called settings.py in your project folder in a text editor like Notepad . Right in the beginning you will find a configuration setting called DATABASES, under which you will find 'default'. Under the default database setting, change the ENGINE setting to 'django.db.backends.sqlite3', and the NAME to something like 'myproject.db'. The NAME setting is just a path to the database. STEP 4: Add your app to your site In the same settings file, go down to the INSTALLED_APPS setting, and add 'books' - the name of your application - right at the end. STEP 5: Enable the admin panel At this point you might also go ahead and uncomment the line that says 'django.contrib.admin' by removing the '#' in the beginning of that line. Close the settings.py file. Open the urls.py file in the same directory, and uncomment the admin lines as instructed in the file. Close the file. STEP 6: Creating the models Open the books directory in your project folder, and open the models.py file for editing. Add the following to the file (it should already have ' from django.db import models' else add that as well): These lines describe the data structure of our website. The Person class defines the person, and the Book class defines the book. The first name, and last name as character field of specifies lengths. The phone number is stored in a character field as well, in case you wish to store numbers as " 91 99 9999 9999", and the email is an In the Book class the name is a character field with a much larger limit on length. There is a borrowed_on field to store the date on which it was borrowed. Finally you have borrowed_by, which related the Book to its borrower. We can directly use Person here as it is already defined, but otherwise we would have to enter it as ' Person'. With this kind of relation, a Book can only go to one Person, but one Person can have borrowed multiple books. By setting blank=True we have made entering those fields optional. So if you want you can just enter a first name of a person, but you can optionally enter a last name, a phone and an email as well. Similarly for a book, all you need is a name. Not all books will be borrowed, and once a book is returned it will need to have no one associated with it. In addition to blank=True, we need to set null=True for ForeignKey and DateField for this to work. Finally, the __unicode__function describes the string representation of an object. If you were to remove that, every book would be called "Book object" in the admin panel. STEP 7: Add the Book and Person objects to the admin panel Create a file called admin.py in your books app directory, and add the following to it: This code registers the Book, and Person classes with the admin site, so they can be managed using the automatic admin backend. STEP 8: syncdb Now in your project directory in the command line run the following command: python manage.py syncdb This command will create the database and the basic structure required by each application. It will also ask you to enter a username, email and password for the default site administrator. If you modify your application, you will need to delete your database file and run this command again. You will lose your data in your process though! There are other ways to accomplish this without deleting your data but they are beyond the scope of this tutorial. STEP 9: Start your server In the command line in your project directory run the following command: python manage.py runserver This is not a production worthy server, but it is good enough for testing. By default it will launch a local server that you can access by entering 127.0.0.1:8000 in the browser address bar. The server is smart enough to automatically restart if it detects that you have changed your code. It won't detect if you have added new files, and might stop working if you save a file with an error though. STEP 10: Use your website Enter in your browser addressbar, enter the username and password you picked earlier. You will now be able to access a fully functional admin panel that will let you browse and edit the books and person you have added. That's it! You're done! Well, not really. Now all you have is a website that has an admin panel, but no frontend. In this example, it might be enough to simply view all details in the admin panel, but a normal website will have a frontend. For that you need to learn how Django views and templates work which we will cover in the future. You can see how easily we have a website up that offers very customized functionality. You could now easily modify this site, and add an owner for each book so it could work as a book management system that works for multiple people; each person can have books they have lent, and books they have borrowed. You can further explore Django using the documentation that is available on its website. This article appeared in Digit Magazine's October issue.
http://www.digit.in/general/starting-with-django-7790.html
CC-MAIN-2017-17
refinedweb
1,630
63.29
A geohash library for Elm. To install it in your development directory: elm install andys8/elm-geohash To use it from your code: import Geohash geohash = Geohash.encode -25.38262 -49.26561 8 Thanks to Ning Sun for the JavaScript implementation. The geohash preserves spatial locality so that points close to each other in space are close to each other on disk. This is because the arrangement of the result is comparable with space filling Z-order curves. The length of geohashes can be chosen individually and depending on the degree of accuracy. Characters at the end are less significant. Truncating the geohash can be used to cover larger areas. In fact this can be used to build range queries based on the prefix of the primary key. The geohash is constructed bitwise. The range of both dimensions will be cut in half. If the target point is located in the greater half of the range, the value of the first bit is 1. Otherwise it’s 0. The example longitude 11.53..° would result in a 1-bit as first value because it’s part of range [0°, +180°] and not [-180°, 0°). This binary partitioning approach will be repeated alternately for both axes (beginning with longitude). Because the encoding is weaving the bits together, the geohash has the spatial locality property.
https://package.frelm.org/repo/178/1.1.2
CC-MAIN-2019-09
refinedweb
223
67.25
D Programming - Modules Modules are the building blocks of D. They are based on a simple concept. Every source file is a module. Accordingly, the single files in which we write the programs are individual modules. By default, the name of a module is the same as its filename without the .d extension. When explicitly specified, the name of the module is defined by the module keyword, which must appear as the first non-comment line in the source file. For example, assume that the name of a source file is "employee.d". Then the name of the module is specified by the module keyword followed by employee. It is as shown below. module employee; class Employee { // Class definition goes here. } The module line is optional. When not specified, it is the same as the file name without the .d extension. File and Module Names D supports Unicode in source code and module names. However, the Unicode support of file systems vary. For example, although most Linux file systems support Unicode, the file names in Windows file systems may not distinguish between lower and upper case letters. Additionally, most file systems limit the characters that can be used in file and directory names. For portability reasons, I recommend that you use only lower case ASCII letters in file names. For example, "employee.d" would be a suitable file name for a class named employee. Accordingly, the name of the module would consist of ASCII letters as well − module employee; // Module name consisting of ASCII letters class eëmployëë { } D Packages A combination of related modules are called a package. D packages are a simple concept as well: The source files that are inside the same directory are considered to belong to the same package. The name of the directory becomes the name of the package, which must also be specified as the first parts of module names. For example, if "employee.d" and "office.d" are inside the directory "company", then specifying the directory name along with the module name makes them be a part of the same package − module company.employee; class Employee { } Similarly, for the office module − module company.office; class Office { } Since package names correspond to directory names, the package names of modules that are deeper than one directory level must reflect that hierarchy. For example, if the "company" directory included a "branch" directory, the name of a module inside that directory would include branch as well. module company.branch.employee; Using Modules in Programs The import keyword, which we have been using in almost every program so far, is for introducing a module to the current module − import std.stdio; The module name may contain the package name as well. For example, the std. part above indicates that stdio is a module that is a part of the std package. Locations of Modules The compiler finds the module files by converting the package and module names directly to directory and file names. For example, the two modules employee and office would be located as "company/employee.d" and "animal/office.d", respectively (or "company\employee.d" and "company\office.d", depending on the file system) for company.employee and company.office. Long and Short Module Names The names that are used in the program may be spelled out with the module and package names as shown below. import company.employee; auto employee0 = Employee(); auto employee1 = company.employee.Employee(); The long names are normally not needed but sometimes there are name conflicts. For example, when referring to a name that appears in more than one module, the compiler cannot decide which one is meant. The following program is spelling out the long names to distinguish between two separate employee structs that are defined in two separate modules: company and college.. The first employee module in folder company is as follows. module company.employee; import std.stdio; class Employee { public: string str; void print() { writeln("Company Employee: ",str); } } The second employee module in folder college is as follows. module college.employee; import std.stdio; class Employee { public: string str; void print() { writeln("College Employee: ",str); } } The main module in hello.d should be saved in the folder which contains the college and company folders. It is as follows. import company.employee; import college.employee; import std.stdio; void main() { auto myemployee1 = new company.employee.Employee(); myemployee1.str = "emp1"; myemployee1.print(); auto myemployee2 = new college.employee.Employee(); myemployee2.str = "emp2"; myemployee2.print(); } The import keyword is not sufficient to make modules become parts of the program. It simply makes available the features of a module inside the current module. That much is needed only to compile the code. For the program above to be built, "company/employee.d" and "college/employee.d" must also be specified on the compilation line. When the above code is compiled and executed, it produces the following result − $ dmd hello.d company/employee.d college/employee.d -ofhello.amx $ ./hello.amx Company Employee: emp1 College Employee: emp2
https://www.tutorialspoint.com/d_programming/d_programming_modules.htm
CC-MAIN-2017-43
refinedweb
832
50.84
This is the first of three articles about creating Windows controls. Article 1 (this) will explain how to design a good control, number 2 goes into coding details by creating a new control step-by-step and article 3 shows the Q&A aspects of controls and how to test and validate Windows controls. Here we go… I will try to give you some tips how to design a better control. I have been doing GUI development for almost 10 years now and all of this writing is out of my own experience. Many controls, which are available on websites like CodeProject, are nice pieces of code but not useable at all. At a first glance they seem to fit perfectly into the project you develop and do all that you need. You download it and with a little tweaking here and there, you build it into your application and the Boss is happy that you finished the project in time. Unfortunately, this is seldom the end of the story when you develop a commercial application. Even if the new control passes your Q&A division (you have one, don’t you?), eventually the support team will report problems. The most commonly encountered problems with custom controls are: All of this does really happen and it will happen to your customers. The biggest mistake a developer can make is testing the application on a developer’s computer. Never ever do this. Take the computer of the secretary, your aunt’s old 486, or even install a clean OS without the latest service packs and just choose a different display scheme. Also, test other OS languages – remember there is a world outside your office. Tip: most developers do not have the resources to cleanstall (clean-install) various operating systems on various computers, there is a simple, yet efficient solution: emulators like VM-Ware ( [^]). They have a reasonable price and are perfectly suited for a developer’s testing environment. These mistakes are the result of the fact that (almost) all these controls are developed for a specific purpose. End of the line. Few programmers go the extra mile to complete the design and deliver a fully functional control that will pass all tests. Even many commercial GUI libraries fail here. Further reading on controls and control design: When you intend to write your own control, stop before you write a single line of code and figure out if you really need to create a custom control or if it can be solved with a standard Windows control or a combination of them. Windows provides a rich variety of controls, which are suited for almost any purpose and they are 99% bug free. Your custom control will reach this level only after many, many hours of use, testing and debugging. Often it requires a change in design to achieve the required results with the standard controls, but normally it is worth the change. You gain stability and standard conformance. Your users do not need to learn how to use your new control; they are familiar with the Windows ones. If you think that your users must have a button with a different background color and another font, go out and ask your users. They may like the colorful appearance, but if you confront them with a standard solution, they will agree that the standard solution is better to use. Remember: Usability and customization seldom go hand in hand. Ok, so you decided you must have an extraordinary whatever-control? Fine, lets go ahead and figure out how to do it right. Figure out what is the target audience, the average user, of your application. This determines how you represent your data, how you label it, how it has to look.. Ok, back to our design. Now that you know your audience, you can make a clear decision on the representation. Always keep in mind, Joe Averageuser does not want a cool looking control, but one that is easy to handle, that he understand it without pressing F1 or reading the manual. If it satisfies these requirements and is still cool looking, then you are one-step further on your bosses raise list. A word about the keyboard: handle keys like Windows does – up is up and down is down. If you use the Enter key to select an item, your users will not understand it. MSDN has nice lists on keys and their suggested use. Before you start coding, make a mockup of your control. Fire up your favorite paint program and sketch the control. Place the bitmap in your application. See how it looks and ask others what they think of it. Do they understand what it is? How would they use it? If your dummy survives this simple test, you can – almost – start coding. Now it is time to think about the controls usage, its navigation. In most cases, you know instinctively how to use it with a mouse. However, what about the keyboard? Many users (that includes me) like to use the keyboard for navigation, just because it is faster, just because you have to type something in a nearby field, or simply because the computer does not have a mouse attached. Keyboard navigation always includes showing a focus. The focus is a visual feedback to indicate which part of your control the user is interacting with. This is the user’s only clue what he is doing. Again, look at the Windows controls to see how a focus is shown. The button shows a dotted rectangle and changes border, an edit control shows a caret at the insert position, a menu shows a colored selection. Imitate this behavior. Joe Averageuser is used to it.. WM_SETFOCUS LVIS_FOCUS If you do not get it right at first, don’t worry – even Microsoft doesn’t always. (MS: How are we supposed to use the dropdown function of the “Open” button in Word XP’s File Open dialog?) Always ask others for their opinion about your control and if they understand it. Watch them using it. Don’t forget that you have to remove the focus if your control or the application looses focus. There are certain windows messages, which you should handle because they may change focus state or style. To make things more complicated, these messages vary on different Windows versions. Focus management means keeping some state variables around and up-to-date. This can be a tricky part of your controls logic. Depending on the complexity of your control you might even need to show focus in various forms – a caret when the user interacts with text, a dotted line on a button-type area, etc. Another important visual feedback for focus – mouse focus this time – is hot tracking. Hot tracking means changing the visual appearance of an item or area when the user moves (hovers) the mouse over it. A good example, are the buttons in the toolbar of Internet Explorer. Here, the search button normal and hot-tracked. When you implement features like this, adhere to the Windows settings and capabilities of the installed OS and IE version. This will make your control appear much more “Windows like” because it acts and reacts like all other standard controls. The Windows API functions GetSystemMetrics() and SystemParametersInfo() are your best friends. They provide you all information that you need. If you cache this information in your own variables, you have to process the change notifications sent by Windows whenever the user changes a setting. Look at the WM_xxxCHANGED notifications in your MSDN. GetSystemMetrics() SystemParametersInfo() WM_xxxCHANGED Do you calculate the size of your control in pixel? Do you draw at certain locations? Don’t do it. When you place your control on a dialog, it is measured in dialog units. Dialog units scale depending on the display settings. When the user chooses a big default font, the dialog will be bigger and so will your control. Try it; select “Large Fonts” in the display properties. How does your control look now? Always calculate sizes and locations relative. If you are going to display simple iconic graphics, consider using a TrueType font. Windows also uses a font to display the symbols on any window. The Minimize, Maximize, Close icons but also the scroll arrows are just letters from a custom font. Outlook uses a custom font for the attachment, priority and other symbols. Why? Because it scales. Bitmaps don’t scale. And it is yet ten times easier to output text than display a bitmap or even WMF file. If you create items, which relate to Windows items, mimic Windows. Use GetSystemMetrics() to retrieve the size, dimensions or depth of the corresponding Windows element. For example, when you draw a 3D element, you can get the correct size by calling GetSystemMetrics(SM_CXEDGE) and GetSystemMetrics(SM_CYEDGE). This ensures that your control looks right in all configurations and Windows versions. GetSystemMetrics(SM_CXEDGE) GetSystemMetrics(SM_CYEDGE) When you must display bitmaps or icons then be very careful with transparency and background colors. Often programmers forget that the background color of an icon is not gray. You can notice this in many pre-Windows 2000 applications when Microsoft used a different gray for the background of dialogs and windows. When you run these applications with altered colors (or under Windows 2000/XP) then you notice that the icons have a different gray. So, make the background color transparent and set the transparency color correctly. You can easily check if you got it right, by changing the color of the 3D Objects in the display settings. Never assume a color because the user will change it. Always use GetSysColor() and GetSysColorBrush() to get the currently active colors. What I said about the metrics, applies also for the colors: they may be changed at any time. Make use of the WM_xxxCHANGED notifications. GetSysColor() GetSysColorBrush() You cannot post virtual functions. How are programmers going to use your control? Through a class, which implements the control, you will answer. Not exactly the right answer. I know, it’s the common and most convenient way. But just look at the Windows controls – MFC programmers often forget that there is no function named SetExtendedStyle() in the Windows API. This is just the MFC class function which makes a simple SendMessage() with some parameters. SetExtendedStyle() SendMessage() It takes some effort to implement a message-based interface instead of a class-based interface. But it has several advantages: PostMessage Let us have a look at the conventional implementation: class CMyControl { // ... some declarations here void SetColor(RGB c) { m_color = c; Invalidate(); } RGB GetColor() { return m_color; } private: RGB m_color; }; That is all that is. End of story. While this is the easiest and fastest implementation, it is also the most inflexible one. Even when you declare the SetColor/GetColor functions as virtual you don’t gain much. None of the advantages of a message-based implementation is here. SetColor/GetColor virtual Now, the same in a message-based implementation: // in the main header file MyControl.h #define MC_SETCOLOR (MC_BASEMSG + 1) #define MC_GETCOLOR (MC_BASEMSG + 2) // in the class definition file MyControl_impl.h class CMyControlImpl { // ... some declarations here void SetColor(RGB c) { SendMessage(MC_SETCOLOR, 0, (LPARAM)c); } RGB GetColor() { return SendMessage(MC_GETCOLOR); } // if this is using MFC we have here the message handler declarations afx_msg LRESULT OnMsgSetColor(UINT, WPARAM, LPARAM); private: RGB m_color; }; // in the class implementation file MyControl_impl.cpp BEGIN_MESSAGE_MAP(CMyControlImpl, CWnd) //{{AFX_MSG_MAP(CMyControlImpl) ON_MESSAGE(MC_SETCOLOR, OnMsgSetColor) ON_MESSAGE(MC_GETCOLOR, OnMsgGetColor) //}}AFX_MSG_MAP END_MESSAGE_MAP() LRESULT CMyControlImpl::OnMsgSetColor(UINT, WPARAM c, LPARAM) { m_color = (RGB)c; Invalidate(); } LRESULT CMyControlImpl::OnMsgGetColor(UINT, WPARAM, LPARAM) { return (LRESULT) m_color; } This is quite a lot more code to write. But any developer using this control will need only the first file MyControl.h which contains all message definitions. If your control is complex or you have enough time and motivation at hand, then you provide a wrapper class and/or macros. #define MyControl_SetColor (hwndMC, c) \ (int)SNDMSG((hwndMC), MC_SETCOLOR, (WPARAM)(c), 0L) #define MyControl_GetColor (hwndMC) \ (int)SNDMSG((hwndMC), MC_GETCOLOR, 0, 0L) class CMyControl { public: void SetColor(RGB c) { MyControl_SetColor(m_hWnd, c); } RGB GetColor() { return MyControl_GetColor(m_hWnd); } } This is exactly what MFC is doing (with a little more error checking in-between) with all the standard Windows controls. Have a look at the CEdit class in the MFC source files. Learn how they did it. Also, look into the commctrl.h file in the VC include directory. This file contains the definitions for all Windows controls. CEdit commctrl.h Using this method, you gain all advantages mentioned before. It is more work because of all the definitions and extra header files. In the end, you cannot avoid this extra step when you want to create commercial quality controls. If you ever wrote controls, you will know the pain of debugging GUI code. With the message-based approach things get easier – attach Spy++ or even better Winspector[^] to your control and see the messages flow. You can easily spot any wrongly sent message, any out-of-range parameter and any return value. One last point to enhance your control: do not call exposed functions directly. Let me demonstrate this with our sample. Assume we have another function called UpdateData(), which at some point has to set the color and thus either calls the OnMsgSetColor() directly or even worse, modifies the m_color variable. You will never be able to figure out why the control changes color, if you don’t trace into the UpdateData() function. Would this function send a MC_SETCOLOR message, it would be obvious why the color change happens. UpdateData() OnMsgSetColor() m_color MC_SETCOLOR Of course, for more complex functions you might need to define structures which are passed by pointer to the control. Even if you have to pass more then 2 parameters you will need to squeeze them into a structure already or you manage to use short int parameters and pass them with the MAKEWPARAM() or MAKELPARAM() macros. short int MAKEWPARAM() MAKELPARAM() Well, most probably not many people do this. I agree that “print” is not the best choice of words for the purpose. Something like “render” would have been more appropriate. By now, you may have realized that I am talking about the WM_PRINT and WM_PRINTCLIENT messages. They are not much different from the normal WM_PAINT message, but they may be invoked without an invalid area. At any time, the application can request a control to paint itself into a given DC by sending a WM_PRINT or WM_PRINTCLIENT message. WM_PRINT WM_PRINTCLIENT To implement these messages you only have to move your painting code into a separate function, which takes a DC as input parameter. The difference between WM_PRINT and WM_PRINTCLIENT is that the first should render the whole control window including non-client areas and the later should only render the client area. Implementing these messages is required to support some of the newer Windows API functions like AnimateWindow(). AnimateWindow() Terminal services are no longer a buzzword but reality for many companies and all Windows XP users. They are available by default on any Windows XP installation just that they are now called Remote Desktop. Also available as Windows NT 4.0 Terminal Services and Citrix Terminal Server and probably under some other names. Do not underestimate how widely used this is in enterprises. This is definitely a feature you want to add to your windows control. There are few things to consider when you plan to make your control TS aware: the painting code must be optimized and mouse interaction should be minimized. By detecting TS sessions and handling the WM_WTSESSION_CHANGE message, the control can minimize graphic effects when used in a terminal session. Advanced features like hovering can be disabled; the ever-popular flat controls, which show/hide the border when active, may revert to normal operations, and so on. WM_WTSESSION_CHANGE With careful planning, TS awareness can be added without breaking compatibility to Win9x systems. With each new Windows version, Microsoft adds some new features to the existing controls, new windows messages, new styles. Most of these features are easy to implement if you plan your control carefully. One of these new things is the UI status feedback. You can turn it on or off in Display Settings / Appearance / Effects. It is titled “Hide underlined letters for keyboard navigation until I press the Alt key”. The context help of this item already reveals that there is more to it than they tell with this clunky title. This special feature is hidden behind three new windows messages: WM_QUERYUISTATE, WM_UPDATEUISTATE and WM_CHANGEUISTATE. These messages simply take or give a bitmap specifying which feature should be turned off or on. The only thing your control has to do is consider these bits when drawing the control or its visual feedback. WM_QUERYUISTATE WM_UPDATEUISTATE WM_CHANGEUISTATE Another big thing is Windows XP’s Theme support. Fully supporting themes is lots of work, if your control uses default windows elements (buttons, scrollbars, dropdown indicators, etc) as parts of the window, you must implement Theme support. There is no way around. Otherwise, your control looks awful between all the themed controls in your customer’s application. Fortunately, there are a few helper classes on sites like CodeProject, which you can easily use in your implementation. Writing good controls is hard work. Designing a controls interface takes time. A control that you created for your own application will never be a complete solution. It just implements what you needed. The next programmer using your control may have very different requirements to the interface or environment. If you intend to publish your control, review the interface and code carefully and add functionality that is missing. Any Set… function should have a corresponding Get… function. Even if you do not need it in your application, others will need. When your control provides the function already, then it is reusable. Otherwise, it is just a code fragment, which needs massive work. Set… Get… If your control uses constants or #defined measures, think if they can be made configurable. Do not impose limits just because you did not need more. #define That’s all for now. I will update this article based on your feedback and any further ideas. I hope you will read on when the next part of this series is written. This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here Error 4711: Signature expired CSampleControl InitSampleControl() CSampleCtrl CSampleView A simple solution is to intercept the WM_SETFOCUS message and set the LVIS_FOCUS state on the first list item if no item is selected or focused. CListBox CListCtrl CDragListCtrl OnPaint() OnNcPaint() DrawItem() ::InitMyCustomControl(); ::CreateWindow(...); ::SendMessage(...); ... CreateXXX() InitPCSpectraControls() this class myCustomControl : public CWnd{ myCustomControl(){ /* Load */ } CreateCoolControl() { // Create control m_hWnd = ::CreateCoolBar(/* Params */); } }; General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/3077/Designing-a-Windows-Control-Part?msg=673553
CC-MAIN-2014-35
refinedweb
3,182
55.95
Typically, using standard formats when programming can help you migrate information between different programs. Using the Comma Separated Value file format, for example, lets you create lists of data within text files that you can import into Excel. A CSV file simply represents a series of data items separated by commas. You can even control the type of language encoding Excel uses when reading the file, if your information deals with international names or titles. Use Java's FileWriter classes to create UTF-8 encoded documents for use in Excel. 1. Set up your Java class to use OutputStreamWriter and FileOutPutWriter objects: import java.io.*; class ExcelWriter{ public static void main(String[] args){ 2. Open a FileOutputStream object containing the name of a CSV file. You can name this file anything you like, as long as it ends with the .csv extension: public static void main(String[] args){ FileOutputStream fos = new FileOutputStream("C:\example.csv"); 3. Create a byte array and write it to the file. This array represents a UTF-8 encoding bit, which signals to programs that the file should be decoded using UTF-8: byte[] enc = new byte[] { (byte)0xEF, (byte)0xBB, (byte)0xBF }; fos.write(bom); 4. Write additional, comma-separated values to the file, and close it to save it. The file, now encoded in UTF-8, signals to Excel that it should be decoded as UTF-8 when read: writer = new OutputStreamWriter(fos, "UTF-8"); writer.write("Robert, 25, Illinois\n"); writer.write("Judy, 23, Indiana\n"); writer.close(); } References (3) Photo Credits - Wikimedia Commons | D235
https://smallbusiness.chron.com/utf8-encoding-excel-using-java-47939.html
CC-MAIN-2018-39
refinedweb
262
57.47
It is somewhat intimidating to know where to turn for training as a new BizTalk Developer or Administrator. This article and it's accompanying article for Administrator Training is designed to answer the question: Where do I start to learn about BizTalk Server? I would like to begin to highlight two article on this subject: Microsoft: The BizTalk Server tutorials contain detailed steps for implementing simple scenarios. These tutorials give new users an experience of using a variety of BizTalk tools while creating compiled, testable solutions. The MSDN Code Gallery enables you to download and share sample applications, code snippets, and other resources with the developer community. You can find a plethora of all kinds of sample applications, and so on different technologies ranging from general c# to advanced solutions like Service Bus Explorer. For BizTalk Server you can find over 20 sample solutions, and code. Feel free to download, rate them and leave feedback. If you have inspiration to create a sample or have lying around you can upload your sample easily. There are a number of open source adapters that can aid you as BizTalk developer. These adapters are not the out of the box offered by BizTalk Server: BizTalk Server Adapter pack (that provides connectivity to SAP, Siebel, SQL and many more): There are a number of open source functoids that can aid you as BizTalk developer. These functoids are not the out of the box offered by BizTalk Server: There are a number of open source pipelines components that can aid you as BizTalk developer. These pipelines componentsre not the out of the box offered by BizTalk Server: The BizTalk Server tutorials contain detailed steps for implementing simple scenarios. These tutorials give new users an experience of using a variety of BizTalk tools while creating:: There a great number of BizTalk books present today ranging from BizTalk Server 2000: A Beginner's Guide to BizTalk Server 2010 Patterns. In this article you will find all the books published on BizTalk Server since it's 2004 version and up, including links to reviews and blog posts. This article is intended to be a knowledge base of all BizTalk blog’s that are available. And Also: QuickLearn Training Inc. Go beyond the Microsoft curriculum with QuickLearn training. Our trainers will help you master the skills necessary to both develop and administrator a robust BizTalk Server environment. In addition to our public classes delivered live from our classrooms in Kirkland Washington, they are offered through partners throughout the world. They are also available for private delivery at your venue. Our public classes can be attended in person or remotely using the latest in virtual classroom technologies. Our classes are updated frequently to reflect the latest versions and updates to BizTalk Server. Our BizTalk Server 2013 courses include: QuickLearn also still offers courses for BizTalk Server 2010 including: BizTalk Developer Fundamentals, Immersion and Deep Dive. QuickLearn also offers Expert Series courses which spend 2-3 days focusing on specific BizTalk technologies: Many BizTalk Server developers also find our Administrator course useful. We are sure you will love our experienced and passionate instructors. Plurasight is an online training library providing developers access to an in-depth suite of searchable and browse-able training courses focused on current and emerging Microsoft platform technologies. Read suggested related topics:
http://social.technet.microsoft.com/wiki/contents/articles/14598.training-resources-for-biztalk-developers.aspx
CC-MAIN-2015-22
refinedweb
555
51.58
Convolution Matrix Convolution Matrix Flash 8 introduces some very powerful tools for manipulating bitmaps at the pixel level. Included in this list of tools is flash.filters.ConvolutionFilter. ConvolutionFilter. Unlike Matrix and ColorMatrixFilter, ConvolutionFilter’s matrix does not have a set number of rows and columns. The number of rows and columns depend on the type and strength of the effect you are trying to achieve. In a nutshell, ConvolutionFilter looks at each and every pixel in a source bitmap. As it does this, it uses the center value in the matrix as the value of the current pixel being manipulated. For example, in a 5 x 5 matrix, the center value is at (2, 2). It then multiplies the values from the matrix to the surrounding pixels and adds the resulting values for all pixels to get the value for the resulting center pixel. Here is the formula used on a 3 x 3 matrix convolution: dst (x, y) = ((src (x-1, y-1) * a0 + src(x, y-1) * a1.... src(x, y+1) * a7 + src (x+1,y+1) * a8) / divisor) + bias As you can see, for the pixel located at (x, y), ConvolutionFilter with a 3 x 3 matrix takes the pixel (x–1, y–1) and multiplies it by the value in the matrix located at (0,0), and then adds the pixel (x, y–1) multiplied by the value in the matrix at (0,1), and so on until all of the matrix values have been multiplied by the corresponding pixel value. (This is done for each color channel.) Finally, it takes that total, divides by the value of divisor, and adds the value of bias. Obviously the larger your matrix, the longer this process takes. To apply a convolution matrix, you can pass a matrix along with the number of rows and columns into the ConvolutionMatrix constructor, as follows: import flash.filters.ConvolutionFilter; // Set up our 3x3 matrix: var mat:Array = [ 1,0,1, 0,1,0, 1,0,1 ]; // Tell the ConvolutionFilter that this is a 3 row, 3 column matrix // Pass the matrix in as well: var convMat:ConvolutionFilter = new ConvolutionFilter(3,3,mat); clip.filters = [convMat]; The following examples illustrate some convolution matrices. A useful exercise is to look at the matrix values and guess the effect they might produce on the image. Figure 10 shows the original photo. Figure 10. Original image In Figure 11, the pixel being affected gets its original value multiplied by 5, while the pixels immediately above, below, to the left, and to the right are multiplied by –1, with the resulting values added together and multiplied by the affected pixel to produce a new value for that pixel. The resultant effect is that there is an increase in contrast between neighboring pixels. If neighboring pixels have values that are quite similar, those pixels remain fairly similar. However, the greater the original color value difference between pixels, the greater the resulting difference will be. Figure 11. A sharpening effect Figure 12 shows the value of the affected pixel having its value added to that of each of its surrounding eight pixels. You can probably guess that this “mashing” of values results in a blur effect. Figure 12. A blurring effect By looking at the matrix in Figure 13, you can see that the result is trickier to guess than in the previous examples. The affected pixel tends to become closer in value to the pixels near the bottom right and further away in value than the pixels at the top left. The result is an embossed effect with the light source appearing to emanate from the top left. Figure 13. An embossing effect As you can see, having a basic understanding of matrices allows you to produce some powerful effects using the new ConvolutionFilter. In the convolution demo file that accompanies this article, you can play directly with values in a matrix and see the resulting convolution effect on the image. Play with the following demo to see the changes for yourself: Where to Go from Here Flash 8 was truly one of the biggest releases in Macromedia’s product history. It provides developers with a very granular level of control in several different areas. Matrices provide a powerful means of doing this type of manipulation. This article serves as a primer for people interested in gaining a better understanding of matrices. I recommend playing with these matrix values to see the ensuing results. That’s often where the real learning happens.
http://designstacks.net/convolution-matrix
CC-MAIN-2016-50
refinedweb
756
50.46
This post is also available in: Español (Spanish) The Javascript standard is constantly evolving and at times, it can be difficult to have features that match up with what is supported in a framework. Another issue you might run into is having an older tutorial that may not match up with what is on the cutting edge. The other day I was showing my brother how to set up express JS and saw that it didn’t use ES6. Here is how I walked him through it. You can find out more by reviewing our video course here. Assuming that you already have node installed, create a new directory named myapp mkdir myapp && cd myapp Then create your npm package file and make sure to set the entry point to app.js npm init npm install express body-parser --save By default, you’ll be using ES5 and it’ll be required to use require to pull in modules. As we move forward with ES6 and beyond, it’s really best for us to start using ES6 classes as well as import and export statements. To do this, we’ll need Babel in order to interpret our ES6 syntax. Lets pull in both ‘babel-cli and babel preset es2015” as dev dependencies as well as add the .babelrc file npm install --save-dev @babel/core @babel/cli @babel/node npm install @babel/preset-env --save-dev { "presets": ["@babel/preset-env"] } Creating the Model for JSON Output Once we have the setup finished, lets build out a simple API that returns JSON Create a models folder with a js file called User. export default class User { constructor(name, username, email){ this.name = name; this.username = username; this.email = email; } getUsername(){ return this.username; } getName(){ return this.name; } } Then create a routes folder with an index.js file. import express from 'express'; let router = express.Router(); import User from '../models/user'; /* GET home page. */ router.get('/', (req, res, next) => { let languages = [ { language: 'Spanish' }, { language: "French" }, { langauge: "German" } ]; res.json(languages); }); router.get('/users', (req, res, next ) => { let users = [ new User('James Coonce','jcoonce','none@none.com'), new User('Bob Coonce','bcoonce','none@none.com'), new User('Euri','euri','none@none.com'), new User('Norman','jcoonce','none@none.com'), ]; res.json(users); }); router.post('/user/create', (req, res) => { let user = new User(req.body.name, req.body.username, req.body.email); res.json(user); }) export default router; you can see that we import the express from our node modules and set our router to being the Router method on our imported express class. We also import the User class and create several instances of it inside of our ‘/users’ and return them as json. Just to test the post method we also pull in the parameters from the body and return a user object back out as json let user = new User(req.body.name, req.body.username, req.body.email); res.json(user); then finally we export our Router. As we go back, we create our app.js file import express from 'express'; import bodyParser from 'body-parser'; import router from './routes/index'; let app = express(); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: false })); app.use('/', router); app.listen(3000, function () { console.log('Example app listening on port 3000!') }); We import express as well as our router and then we add the body parser and router to express and specify which port we want to run on. To finally wrap things up, we go to our package.json file and write our node script. nodemon app.js --watch server --exec babel-node This says run app.js with babel node using the es2015 preset. Now you can go to port localhost:3000 and you should be able to have your application return json at the various urls. You can find a full example of the github repo
https://codebrains.io/setting-up-express-with-es6-and-babel/
CC-MAIN-2019-35
refinedweb
647
67.15
So you’re beginning to use Python daily. Great! You might still have a critical set of Matlab scripts from back in the day that doesn’t have a neat equivalent in Python. How do you rewrite this code in Python without introducing many headaches? I’ve had to do this a handful of times and have developed a method based on test-driven development that avoids the biggest issues in code translation: - Define what you want to translate - Create a test plan based on existing code - Write tests in Python - Write the code in Python - Iterate until tests pass To test the method, I’ve migrated a script that I found in Susillo and Abbott (2009). It simulates a chaotic neural network and trains it with recursive least-squares. I wanted to make sure I would run into issues (gotchas!) when I translated the code to exercise the method – and I did run into issues! The method I’m outlining highlighted the translation bugs early and helped squash them. This is part 3 of my guide on transitioning from Matlab to Python. Read part 1: a curriculum to learn Python and part 2: choosing environments, IDEs and packages. Pick your battles Algorithmic code that does matrix manipulation is typically a good candidate for translation into Python. Importantly, the code you will translate needs to currently run in Matlab, since we’re going to have to run the code and capture its current behaviour. Mex code can be repurposed pretty straightforwardly into a Python extension, keeping the core C numerics intact. Try to avoid direct translation of code with dependencies on Simulink, GUIDE, or esoteric toolboxes. For GUIDE in particular, I’ve found that the limitations of GUIDE enforce a certain style of coding – lots of globals and convoluted logic. You’re probably better off rewriting the GUI code from scratch than attempting direct translation. Not every piece of Matlab code needs to be translated to Python. If your goal is to archive an existing pipeline so it continues to run despite deprecation, containerizing Matlab with Docker is a good route. You can also call legacy Matlab code from Python directly using the official Matlab engine for Python from Mathworks or using shell calls. However, some situations will require translating Matlab code, for instance, if you want to run your code in the cloud. Finally, consider doing the process for a smaller project first. It’s not that easy! You will learn a lot from doing this once on a smaller scale. I recommend that you create a git repo for your project and commit regularly, because you might be doing a lot of iterations to get everything to run correctly. Creating a test-based strategy for rewriting the code We’ve identified a good script or function to rewrite. Should we start writing Python? No! It is very easy to rewrite code in another language that seems to work, but has subtle bugs. You can spend days or weeks chasing down these bugs, and it will drive you crazy! For this reason, we will use a test-based strategy: - Capture what the behaviour of the code is right now for a variety of inputs. - Rewrite the code in Python and make sure it does exactly the thing that we want it to do. That is, for the same inputs, it gives us the same outputs. We will create unit tests to verify the behaviour of our code is correct, meaning that it matches the old code exactly. At this point, you may say – hold on, I don’t just want to transfer the code over, I want to improve it! Our goal is to capture the logic part of the code – the number-crunching. This does not stop us from improving the reference code in a number of ways, including: - Making constants changeable - Making the code more modular - Using Pythonic data structures, for example using pandasto store time series. In any case, we have to build on a solid foundation – a reference. Once we replicate the original behaviour and commit it to a repo, we can improve it later in a new commit. Identify inputs and outputs We need to identify the inputs and outputs of the script or function we want to rewrite. One very nice thing about Matlab is that functions tend to be stateless and do not modify their arguments (unless an input is a reference object type, which is pretty rare). Thus, we can reason pretty easily about the inputs and the outputs to a function: - The inputs are the arguments. - The outputs are the return variables. Functions may have side-effects, such as creating a plot or saving a file. A side-effect to a function is an observable effect besides returning a value (the main effect) to the invoker of the operation. If the code you’re translating has side-effects, you should aim to remove them. For example, consider a function takes in data and returns nothing, saving the results to disk, like so: function [] = mycomplexop(outfile) data = zeros(100, 100); % Complex things happen in the middle... save(outfile, 'data'); end You can transform this into a side-effect-free function by returning the results instead of saving them: function [data] = mycomplexop() data = zeros(100, 100); % Complex things happen in the middle... return end I prefer to comment out plotting code rather than trying to replicate it exactly. Again, we want to focus on the number crunching. Identify test cases Many Matlab scripts are written in a style where the inputs to a function are set up, a function or set of functions are called, and then the output is plotted. We need to capture the inputs to this set of functions as well as the inputs. An easy way to do this is using save: % Set up the inputs. t = 0:1000; dt = .1; g = 10; save inputs.mat -hdf5 % Call the function complexoutput = mycomplexfun(t, dt, g); save outputs.mat -hdf5 % Plotting the output plot(t, complexoutput); I used Octave here – but you can use -v7.3 rather than -hdfd5 to save if you’re using Matlab. mat files saved in these formats are actually hdf5 files that Python can read. Pay special attention to randomization. It will be hard to replicate Matlab’s random number generation exactly in Python. Ideally, functions should be independent of randomizations, because functions with random outputs are very hard to test for correctness. If a function requires random variables to implement an algorithm, we can circumvent this issue by giving the function the random data it needs. For example, let’s say we have a function that computes the distribution of the range of a random normal variable: function [dist] = rangedist() A = randn(100, 1000); dist = max(A, 1) - min(A, 1); end We can rewrite this function so that its source of randomness comes from outside the function: function [dist] = rangedist(A) dist = max(A, 1) - min(A, 1); end Then we can generate the A variable in a script, and save that particular random draw. Make sure to pick test cases that won’t take more than a few seconds to run. You will be running your tests many times, and these seconds will add up! Toy data is fine. It’s also good practice to test our code on simple inputs where the correct output is clear, for example, all zeros, all ones, etc. Save as many input-output pairs as is necessary to capture the range of behaviours of functions you want to capture. In addition to testing our script end-to-end, we may choose to test intermediate computations involved in the script. Testing more fine-grained computations will help you debug your code. If your function is already written as a function, it may call private functions – which can make things a bit tricky. You may need to refactor a bit of the Matlab code to make private functions public so that you can capture both their inputs and outputs. Writing a test scaffold Unit testing checks whether the results of a function are as expected given a set of inputs. Testing in Python can range from using simple inline assert statements to sophisticated automated solutions that check that tests pass every time you push code into your repo. In our case, we will use a lightweight manual testing method: hiding test cases behind __name__ == '__main__'. We will create a python module that is meant to be imported, for example, via import mymodule. When we run the file on the command line, however, via python module.py, the code hidden behind this if statement will be run: import numpy as np def my_complex_fun(A): # Complicated things occur. return A * 2 def _run_tests(): # Load inputs. with h5py.File('inputs.mat', 'r') as f: A = np.array(f['A/value']) with h5py.File('outputs.mat', 'r') as f: B_ref = np.array(f['B/value']) # Call the function. B = my_complex_fun(A) # Check shapes are good. assert A.shape == B.shape # Check values are similar. np.testing.assert_all_close(B, B_ref) if __name__ == '__main__': # Run the tests. _run_tests() The tests are inside the _run_tests() function. This way, the module namespace will not be polluted with variables that could have the same name as other variables inside of a function. When the code is run, the tests should raise errors if the output is not as expected, which will stop execution. You will see a big old error on the command line, and then you know that you must correct the code. You can raise an error by: - Using the assertstatement. It raises an error whenever the statement is False. - Using the methods in np.testing. These methods can check whether, for example, the elements of numpy array are all close to a reference (within numeric error). - Using the raisestatement to manually raise an error. That’s really all you need to test code. Numerous unit-testing libraries in Python exist, but at their core is the same idea: create code that raises an error when the output not what is expected. Picking organization Main effects are much easier to test than side effects in the unit-testing framework. You’ll often find yourself writing side-effect-free, modular and functional code for this reason. That’s great! You’ll tend to create smaller functions so that you can test each individual component. Your code will be easier to reason about. The Python code you write may be objected-oriented. Although Matlab now has excellent support for classes, much Matlab code, especially older code, eschews the use of classes. If you don’t feel comfortable writing object-oriented code, don’t sweat it – Python code doesn’t have to use classes to be Pythonic. Classes make sense when something needs to remember and manage its own state. Here’s a tutorial on Python classes if it’s something that intrigues you. Writing the code – common gotchas Now that we’ve picked our tests, and picked how we’ll organize our code, we can start coding. Matlab’s matrix operations naturally translate to Numpy. Be careful with these common gotchas: - Reshape operations will cause you headaches. Matlab uses Fortran order, Python uses C by default. This means in Matlab: A = [1, 2, 3; 4, 5, 6] A(:) ans = [1, 4, 2, 5, 3, 6] Whereas in Python: >>> A = np.array([[1, 2, 3], [4, 5, 6]]) >>> print(A.ravel()) [1, 2, 3, 4, 5, 6] - Saving to hdf5 in Matlab and loading in Python will invert axes because of the distinction between C and Fortran order. You will have to swap axes front to back, for example. For example, if you save a three-dimensional tensor in Matlab Aand then load it back in Python via hdf5, you will need to call A.swapaxes((2, 1, 0))to get back the original tensor. - Be careful with dimension-one vectors! If you have a one-dimension vector aof shape (N, ), then a.dot(a.T)gives you a vector of the shape (N, ). It will not be a scalar, nor a matrix. If you wanted to computed the outer product of awith itself, use a.reshape((-1, 1)).dot(a.reshape((1, -1)))or better yet np.outer(a, a). - Hidden dot products can cause problems! The *operator applies to scalars and to vectors in Matlab, and the semantics are different depending on the dimension of the operands. A.dot(B)replace Matlab’s A * B. You may also see A @ Bbeing used but it’s still rather esoteric. * Objects in Python have pass-by-reference semantics. Several methods in numpy modify data in place. Be careful! Sometimes, you will explicitly need to copy()your data. - Python uses 0-based indices and Matlab 1-based indices. Watch out! - Look at your data! Don’t fly blind! If you write a piece of code that you suspect may cause a bug, you can use inline assert statements to make sure it works. I like to use this to check the dimensions of intermediate arrays, for example. Some of the problems of direct index manipulation can go away by using pandas dataframes where appropriate. xarray uses named dimensions for tensors, which can help avoid many mistakes caused by the need for swapping axes and reshaping. np.einsum can express complicated sums and products of tensors in an easy-to-read, less error-prone manner. You might want to consider translating parts of your numerics to PyTorch or Tensorflow directly, especially if you want to take advantage of automatic differentiation. Performance-sensitive code can be JIT’ed using jax or numba, or rewritten in Cython. Start small and optimize as necessary. Using the method on a real script The example script I rewrote is about 165 lines of code. It’s written in a pedagogic style and intersperses explanations, logic and plotting. In that sense, it’s similar to a jupyter notebook. I focused on transferring the logic of the main loop, which does a forward simulation of the network and trains it with the FORCE algorithm. I had to change a handful of things related to plotting to get it to run in Octave, but I was able to get it to run within about 10 minutes. Not bad for 10-year code! The code is organized in this way: - sets up variables. - runs the body of the computation (a big forloop). This body has side effects (plotting). - finishes in an end state that contains the results of the computation. Although the code is not set up as a function, it was fairly easy to isolate the inputs are – they’re what the for loop needs to get started: x0, z0, wf, dt, M, N, ft And the outputs are the variables computed in the main for loop, namely: ztand w0_len I captured these inputs and outputs as a test case for my script. I commented out the forward simulation of the trained network that comes after the main for loop since it’s another logically separate component of the code that I could attack later. In addition to the end-to-end result of the script, I captured the result of the first iteration of forwarding the network state, to create a more granular test. I also checked array dimensions. I decided to write the code as a class, with methods train and simulate. The resulting code doesn’t look like Matlab code. Yet, because I’d set up a test scaffold, I could be assured that it works the same as the old Matlab code. Translation was hard! I ran into multiple very subtle issues translating the rather short code I was working (maybe 20 lines of real logic). The issues were subtle – incorrect manipulation of dimension-1 vectors, off-by-1 errors in a loop. Having reference code and functional tests allowed me to isolate these issues, fix them, and replicate the original functionality exactly. It took about 3 hours, which is a lot! I was very careful to set up a well-defined goalpost when I translated the code, so I knew when I would be done, which kept me going. The translated code is available here. I imported the code into a notebook and generated some figures from it – it runs! Conclusion Using test-driven development, it’s possible to systematically, incrementally transfer legacy Matlab code to the Python environment. You can be assured that the code will work in the end because you will have defined what working means. You will have created a test suite for your code with good coverage for your new Python code. You might even have spotted subtle bugs in the original Matlab code along the way! Future you will thank you for your attention to detail. This was part 3 of my guide on transitioning from Matlab to Python. Read part 1: a curriculum to learn Python and part 2: choosing environments, IDEs and packages.
https://xcorr.net/2020/03/04/rewriting-matlab-code-in-python/
CC-MAIN-2021-04
refinedweb
2,839
63.9
Anaconda/Stage2DevelopmentGuide From FedoraProject Latest revision as of 18:30, 29 June 2010 [edit] What's needed for working on stage2 There are a couple of things you need to work on stage2 that make development go much more quickly. - anaconda source tree: Obviously, you need the correct version of the anaconda source code. Usually you will want the latest checkout from the appropriate branch. - Writeable NFS volume: You'll need some NFS space that you can write to. At the least, you will need to be able to create an RHupdates directory and add files to it. You may sometimes also want to modify the packages in the tree (like for testing failure cases). You'll want to make sure this NFS volume is the same version as what you're trying to debug. So if you are working on a RHEL5 bug, make sure you have a RHEL5 tree. If it's a Fedora development bug that only happens on a certain day's tree, make sure you have that day's tree. These versions don't always have to match up exactly. - Web space: Having some publicly available web space is helpful for posting updates.img and kickstart files. These are helpful both for your development and for distributing to other people for testing. - CD/DVD media: For testing media installs, make sure you have burned the latest available media. These are available for all releases, beta releases, and some nightly trees around release time. However, they are not available for all nightly trees. - HTTP/FTP server: For testing there remote install methods, you'll need a server running that has the correct trees available. These methods are not tested all that often, so this isn't typically all that important. Close to a release, you will want to have access to this however. [edit] Coding Standard When in doubt about Python coding standard we use just go with the style in the file you are changing. If creating new file or still in doubt, consult PEP8. [edit] Testing Changes By far, the easiest way to test your changes is to use the NFS installation method. To test your modified files, you then create an RHupdates directory in the top level of the installation tree. That's the level that contains the license file, release notes, and GPG keys. Then put your modified files into that directory and reboot your test machine. anaconda will automatically detect the presence of the RHupdates directory. Files in RHupdates or an updates.img are used instead of the same files from the stage2 image. If your modifications include completely new files or different versions of shared libraries, you can include those in RHupdates. Due to the way python modules get imported, if you need to include an updated file from a python module you will need to include all the files from that module. However, anaconda has a shortcut for certain modules that we control and update frequently that allows you to just put the changed files in your update. These modules are listed in setupPythonUpdates in the anaconda file. Some bugs only show up on specific install methods. When that install method isn't NFS, you can't use the RHupdates directory. You'll need to make an updates.img containing all your modifications. An updates.img is an ext3 filesystem image that can contain anything RHupdates can. Typically, anaconda cannot automatically detect an updates.img. You'll need to provide the updates command line parameter if the image is on a floppy or USB key drive, or updates= if you have put your image on a web server. Updates via HTTP or FTP are often the easiest way to go as you can work up a fix, test it, and then point extra testers at the location. Note that updates= only works on RHEL5 and later, and all recent Fedora releases. It is not supported in RHEL4 so you will have to use a floppy or USB key drive. See AnacondaUpdates for more explanation on updates.img. This script generates an updates.img containing all the files listed on the command line and uploads it to a publicly accessible website for testing: #!/bin/bash # MOUNTPOINT=/misc if [[ $# < 2 ] ; then echo "usage: $0 <updates.img filename> <file1> ... <filen>" exit fi if [ -d $MOUNTPOINT/lost+found ] ; then echo "/misc is already mounted, exiting" exit 1 fi UPDATES_IMG=$1 shift if [ -f /tmp/$UPDATES_IMG ] ; then rm -f /tmp/$UPDATES_IMG && exit 1 fi dd if=/dev/zero of=/tmp/$UPDATES_IMG bs=1M count=1 yes | /sbin/mkfs.ext3 /tmp/$UPDATES_IMG mount -o loop /tmp/$UPDATES_IMG $MOUNTPOINT cp -a $* $MOUNTPOINT umount /tmp/$UPDATES_IMG scp /tmp/$UPDATES_IMG clumens@people:public_html/$UPDATES_IMG rm -f /tmp/$UPDATES_IMG Alternative version, which checks adds modified files from your git repository or files modified in a specific commit. Does not need root privileges, since it uses CPIO format instead of a filesystem image. #!/bin/bash # Usage: updates.sh [<commit>] set -e COMMIT=$1 if [ -n "$COMMIT" ] then FILES=$(git show $COMMIT |sed -n 's,^+++ b/\(.*\),\1,p') else FILES=$(git status |awk '/modified:/ {print $3}') fi if [ -z "$FILES" ] then echo "No files" >&2 exit 1 fi (ls $FILES |cpio -c -o) |gzip -c9 >updates.img scp updates.img fedorapeople.org:public_html rm updates.img [edit] Debugging One of the primary ways to debug anaconda is by using pdb, the interactive Python debugger. This guide assumes you already know how to use pdb. If not, see the pdb reference guide. [edit] Python Debugger Python's interactive debugger is not nearly as fancy or useful as gdb, but it has enough features for us to figure out what's going on. The most common way you will interact with pdb is by pressing the Debug button on anaconda's exception dialog box. Pressing this immediately drops you into pdb on tty1. From here, you can inspect values and check out the state of the filesystems. However, you cannot return to anaconda. Once in this top-level exception handler, continuing will cause anaconda to exit. You can also enter pdb at any point by adding a function call exactly where you need it and putting that updated file in RHupdates. Let's say you were debugging a problem where anaconda was not writing out the anaconda-ks.cfg file towards the end of installation. This code is located in packages.py and looks like this: def writeKSConfiguration(anaconda): log.info("Writing autokickstart file") if not flags.test: fn = anaconda.rootPath + "/root/anaconda-ks.cfg" else: fn = "/tmp/anaconda-ks.cfg" anaconda.id.writeKS(fn) You could drop into the debugger at the beginning of this function by changing it to the following: def writeKSConfiguration(anaconda): import pdb pdb.set_trace() log.info("Writing autokickstart file") if not flags.test: fn = anaconda.rootPath + "/root/anaconda-ks.cfg" else: fn = "/tmp/anaconda-ks.cfg" anaconda.id.writeKS(fn) The only changes here are the new two lines at the beginning of the function. Once anaconda reaches this point, it will drop to a pdb prompt on tty1. If you're watching the graphical interface, you will notice that it has stopped updating when it reaches this point. Then, you'll need to switch over to tty1 and do whatever debugging actions you want. Unlike being in pdb from the exception dialog, here if you continue anaconda will keep going as expected. This is one of our primary debugging techniques. There's a third way to get into the Python debugger. By adding the debug command line parameter, you will get a Debug button on every screen in the interface and on certain dialogs. Pressing this button automatically drops you into pdb. From here, you can debug and then continue if needed. [edit] Logging anaconda includes a logging framework that supports multiple severity levels, multiple logging destinations, and logging to a remote server. The stage2 supports all these features, while the loader only supports multiple severity levels. By default, anaconda logs to tty3 and to the /tmp/anaconda.log file. The severity level controls what messages will be written to tty3. All log messages regardless of severity level will be written to /tmp/anaconda.log. Severity is controlled by the loglevel= command line parameter. Valid settings are debug, info (the default setting), warning, error, and critical. Remote logging is more useful for users on headless systems than it is for developers. However there are still times when it's handy. To log stage2 messages to a remote system, use the syslog=host[:port] parameter as discussed in Anaconda/Options . Remote logging is subject to the severity level system as well. Often when debugging a problem, you won't get all the output you need from anaconda. You can easily add some extra logging statements to the code where you need it. First, make sure the python file imports the logging module. Most do, but the ones that don't will need a block like the following near the top: import logging log = logging.getLogger("anaconda") Then add your logging statements throughout. Each logging statement has to specify its severity level. For debugging, it's easiest to just use error and not worry with the loglevel parameter. log.error("error unmounting filesystem in systemUnmounted") [edit] Causing exceptions Rarely, it can be useful to cause an exception. The most common time when you may want to do this is when testing out the exception handlers. The easiest way to do this is to just add in a raise SystemError right in the code at the point where you'd like an exception. [edit] Shell There is a terminal usually located in tty2 that is very useful when you want to see something in the file system during install. It has some basic commands, not much (Remember that nothing has been installed yet). A lot of the python code base is included so you can wget any script and run it if you need to. Its a very good way to poke around and see whats happening. [edit] Remote Access In F13 and later, the 'sshd' parameter on the kernel command line may be used to start a (passwordless) ssh server on the machine once stage2 is running. This is useful for cases where X appears to lock up or where VT switching doesn't work for some other reason. Earlier releases can enable sshd in the kickstart file. Note that you'll have to be able to predict or deduce the IP address that the installing machine will get. If the machine under test is a KVM or Xen guest, you can look at the ARP cache on the host with 'arp -n' to find it, usually 192.168.122.*. If it's a physical machine, watch the logs of the appropriate DHCP server, or else explicitly configure networking in stage1. [edit] Other Log Files There are a couple other log files that may be useful. - /tmp/anaconda.log: This is the primary log file for anaconda, containing all log messages regardless of severity level. - /tmp/anacdump.log: When an exception occurs, anaconda will automatically write this file out. It contains the python traceback, the anaconda.log, kernel messages, and a dump of all python variables. This is a very large file. When people file bug reports, they will usually attach this file to the bug report. If they don't ask for it as this is the main debugging information we need. - /tmp/X.log: This is the log file from when X starts up. If there are problems starting X, this file will help us figure out what's going on. These are usually from bugs in X itself but the X team will ask for this information so it's good to get it up front from the user.
https://fedoraproject.org/w/index.php?title=Anaconda/Stage2DevelopmentGuide&diff=183300&oldid=75554
CC-MAIN-2014-10
refinedweb
1,974
65.32
Is there a way with XB to return a different type, depending on the XML data? As in, instead of just Person, have an interface Person, but concrete classes of GreenPerson and PurplePerson? Actually, looking at it, it seems one could do that based on the Attributes, but what if you need to decide later on down the line? What I'd like to do is use (from admin-console) the org\jboss\admin\model\datasource\DataSource and its concrete classes and return a ConnectionPool or NoTxDataSource, etc depending on what the XML says. Is there a way to decide the type of object after the first line of XML? If you have an example let me know, I checked the testsuite examples and didn't see anything that jumped out at me. Thanks, Actually, what I could do instead is just have a public class DsDotXMLMetaData { ArrayList connectionPools; ArrayList noTxDataSources; ArrayList localTxDataSources; ArrayList XADataSources; } Yeah this should be fine. addConnectionPool, addNoTxDataSource etc. Thanks,
https://developer.jboss.org/message/406620
CC-MAIN-2017-17
refinedweb
164
60.14
There is hardly any documentation on the TestRunner class in the Python 2.3 docs. What are its member functions and attributes ? How does one replace the TextTestRunner used by unittest with another one ? On Saturday 14 August 2004 21:26, Antoine Brenner wrote: >. Hi Antoine, This is not directly supported, but you can get the effect you want by wrapping test methods in an object that calls a clean-up method for you, as shown below. It's a bit hacky, and maybe in future it would be a good idea to add 'onFail()' hook methods to TestCase. Opinions, anyone? Best wishes, -Steve --------- import unittest class AddExceptionHook: def __init__(self, method, hook_method): self.method, self.hook_method = method, hook_method def __call__(self): try: return self.method() except: self.hook_method() raise class MyTestCase(unittest.TestCase): def tearDown(self): print "normal tearDown" def test_that_may_fail(self): raise Exception("deliberate failure") ## Additional tear-down hook def on_fail(): print "Tearing down because we failed" classmethod(on_fail) ## Add clean-up hooks to relevant methods test_that_may_fail = AddExceptionHook(test_that_may_fail, on_fail) if __name__ == '__main__': unittest.main() -- Steve Purcell. Best regards, Antoine Brenner TestSuite objects provide no public mechanism for iteration. We have a need to be able to iterate over them. Up to now, we've been using the _tests attribute, which is bad. If no one objects, I'll add an __iter__ method. (I'll do this in the Python CVS repository.) Jim -- Jim Fulton mailto:jim@... Python Powered! CTO (540) 361-1714 Zope Corporation
https://sourceforge.net/p/pyunit/mailman/pyunit-interest/?viewmonth=200408
CC-MAIN-2017-39
refinedweb
247
68.06
As a software developer, we are focused on writing quality code that can easily be passed to another developer. We’ve read and signed the manifesto for software craftsmanship and are proud to be a polyglot programmer. But the one thing that most developers do is shy away from anything relating to databases. That is the database administrators job right? While you could argue either way, it comes in handy to know several integration points with SQL Server and Visual Studio that could make life easier for you and your DBA colleagues. In this article, we will discuss the tooling that Visual Studio provides for databases and several key features that you should be aware of.. Launch Visual Studio and select File-> New File-> SQL File and begin typing in a SQL Query similar to the one shown in Listing 1. CREATE TABLE CUSTOMERS( ID INT NOT NULL, NAME VARCHAR (20) NOT NULL, AGE INT NOT NULL, ADDRESS CHAR (25) , SALARY DECIMAL (18, 2), PRIMARY KEY (ID) ); Listing 1: Creating a Simple Customers Table You may notice at the bottom of the Visual Studio window that it says “Disconnected”. We need to connect to an instance of our database by clicking on the “Execute” toolbar icon. This will open the “Connect to Server” dialog as shown in Figure 1. From here you will need to make sure that the Server name and Authentication method are correct and press the “Connect” button. Now we are inside our instance of the Database Engine and if you look at the Message log displayed in Visual Studio, it will show the results of our query. In this case, I simply created a table called “CUSTOMERS”. Still inside of Visual Studio, if we select “View” and click on “SQL Server Object Explorer” then we can connect to our SQL server instance and view the Database structure. This will allow us to access the database. I’ve shown a comparison between Visual Studio and SQL Server Management Studio side-by-side in Figure 2. If we select the AdventureWorks2012 database and then click “Create New Project,” we can import the database (as shown in Figure 3) into an off-line project. Now that our project has been imported into Visual Studio, we can simply select a .SQL file from our solution explorer and the individual rows and T-SQL will be displayed as shown in Figure 4. Figure 4 Selecting the EmailAddress.sql file brings up the properties for that table which we can manipulate inside of Visual Studio. From here, we can place a check mark on the “Allow Nulls” field for ModifiedDate, but no changes will be made to the server until it is ready for deployment. Let’s save the file by pressing Ctrl-S and “Build the Solution” by clicking on Build from the context menu and Build Solution. If we switch over to our SQL Server Object Explorer and right click our AdventureWorks2012 project, we can compare our local copy against the production copy as shown in Figure 5. Figure 5: we can see that the only difference is the schema is the ModifiedDate from our local copy to our production copy. With a simple click of the “Update Target Button” shown beside the Compare button, we can send our changes back to the server. The Data Tools Operation dialog will report back any type of errors or if it was successful or not. You can press “Compare” again to see the changes being reflected on the server. It should now report that “No Differences were detected”. Developers using Entity Framework in the past have relied on creating core models from existing database schemas. With the introduction of Code-First (which was first introduced in EF 4.3) you can define your model using C# or VB.NET classes to generate a database schema or to map to an existing database. With the recent release of Entity Framework 6.1 you can specify a default connection factory that Code First will use to locate a database for the context. Let’s stop and take a look at a simple example of this integration. From within Visual Studio, create a New Console Application and gave it the name EntityFrameworkCF. After your project has loaded, then you can install the Entity Framework package by right-clicking on the References folder of your project and selecting ” Manage NuGet Packages” as shown in Figure 6. Select the Install button and the necessary references should be added to your project. Let’s start by examining the App.config file located in the project root directory. <entityFramework> <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework"> <parameters> <parameter value="v12.0" /> </parameters> </defaultConnectionFactory> <providers> <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" /> </providers> </entityFramework> By default when you installed the NuGet package a default connection factory was registered that points to either SQL Express or a LocalDB, depending on which one you have installed. Since we want to add our own SQL Server 2014 database, we will make a change to the parameters as shown below: <parameters> <parameter value="v12.0" /> <parameter value="Data Source=THINKPAD-WIN8; Integrated Security=True; MultipleActiveResultSets=True" /> </parameters> This will now allow us to tie directly into our SQL Server database engine with just one line of code. Now that we have the proper app configuration, let’s create a class called Model.cs and copy and paste the following code snippet: using System.Data.Entity; using System.ComponentModel.DataAnnotations; namespace EntityFrameworkCF { class CustomerContext : DbContext { public DbSet Customers { get; set; } } public class Customer { [Key] public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public int Zip { get; set; } } } This class sets the stage for our Code-First approach. We inherit from DbContext and create a public property called Customers which makes use of DbSet. DbSet represents the collection of all the entities in the context, or that can be queried from the database, of a given type. We then create several common properties found for a typical customer record. Switch back over to Program.cs and enter the following code: static void Main(string[] args) { using (var db = new CustomerContext()) { db.Customers.Add(new Customer { FirstName = "Michael", LastName = "Crump", Address = "111 Street", City = "Birmingham", State = "AL", Zip = 35555 }); db.SaveChanges(); foreach (var customer in db.Customers) { Console.WriteLine(customer.FirstName); Console.WriteLine(customer.LastName); Console.WriteLine(customer.Address); Console.WriteLine(customer.City); Console.WriteLine(customer.State); Console.WriteLine(customer.Zip); Console.ReadLine(); } } } This simply adds one record and displays it inside of our console application as shown in Figure 7. Figure 7: our console application just created the table and one row of data using Entity Framework and SQL Server 2014. Again, without leaving Visual Studio, open the SQL Server Object Explorer and find that it created a Database called “EntityFrameworkCF.CustomerContext” that matches our project name and Class name. If we expand the tables then we can see that our Customers table has been added and our row of data is included as shown in Figure 8. Figure 8: viewing the database and tableour Code-First console application created in SQL Server 2014. I’ve barely scratched the surface with the sorts of things you can do with Entity Framework. There is lots more to be discovered like Migrations, Data Motion, etc. You can read more about Entity Framework here or if you prefer you can use Telerik’s own Data Access which support SQL Server 2014 amongst other SQL versions. Since we are already working with SQL Server, you will be pleased to find out that the Telerik Platform has built-in support for MS SQL Server through Data Connectors. This will allow us to use our backend services to talk directly to an existing table or view of our existing database. If we log into the Telerik Platform and select Backend Services -> Content Types, then we will see the option to “Create Type from a Data Connector” as shown in Figure 9. But before we do that, we must setup a Data Link Server that is hosted in IIS and has network access to your local database. From this screen, we simply give it a name, set the type to MS SQL and provide the Data Link Server URL and connection string. We can test our connection before saving the data connector. We can then switch back to Content Types and see the type name, number of items and storage type as shown in Figure 10. It is worth noting that this data is not stored on our backend services, but on your database instances. We have looked at several integration points with SQL Server, Visual Studio and the Telerik Platform today. To recap each section: See databases aren’t that scary after all! With the support and tooling from Microsoft and Telerik, it makes us thankful that we aren’t doing this anymore. As always, if you have any questions or comments, then please leave them below.
https://developer.telerik.com/featured/working-with-databases-through-visual-studio/
CC-MAIN-2018-22
refinedweb
1,524
52.9
By Lincoln Stein, Doug MacEachern Price: $39.95 USD £28.50 GBP Cover | Table of Contents | Colophon) uses mod_perl to make queries against a vast database of film and television movies. The system rewrites URLs on the fly in order to present pages in the language of the user's choice and to quickly retrieve the results of previously cached searches. In 1998, the site won the coveted Webby award for design and service..,., in the directory /pub/gnu, or via the web at. % cd ~www/build % gunzip -c mod_perl- X.XX.tar.gz | tar xvf - mod_perl-X.XX/t/ mod_perl-X.XX/t/docs/ mod_perl-X.XX/t/docs/env.iphtml mod_perl-X.XX/t/docs/content.shtml mod_perl-X.XX/t/docs/error.txt .... % cd mod_perl- X.XX PERL5LIBto a colon-delimited list of directories to search before Apache starts up or by calling use lib '/path /to /look /in 'when the interpreter is first launched. The first technique is most convenient to use in conjunction with the PerlSetEnv directive, which sets an environment variable. Place this directive somewhere early in your server configuration file: PerlSetEnv PERL5LIB /my/lib/perl:/other/lib/perl #!/usr/local/bin/perl # modify the include path before we do anything else BEGIN { use Apache (); use lib Apache->server_root_relative('lib/perl'); } # commonly used modules use Apache::Registry (); use Apache::Constants(); use CGI qw(-compile :all); use CGI::Carp (); # put any other common modules here # use Apache::DBI (); # use LWP (); # use DB_File (); 1;). #include "httpd.h" #include "http_config.h" #include "http_core.h" #include "http_log.h" #include "http_protocol.h" /* file: mod_hello.c */ /* here's the content handler */ static int hello_handler(request_rec *r) { const char* hostname; r->content_type = "text/html"; ap_send_http_header(r); hostname = ap_get_remote_host(r->connection,r->per_dir_config,REMOTE_NAME); ap_rputs("<HTML>\n" ,r); ap_rputs("<HEAD>\n" ,r); ap_rputs("<TITLE>Hello There</TITLE>\n" ,r); ap_rputs("</HEAD>\n" ,r); ap_rputs("<BODY>\n" ,r); ap_rprintf(r,"<H1>Hello %s</H1>\n" ,hostname); ap_rputs("Who would take this book seriously if the first example didn't\n",r); ap_rputs("say \"hello world\"?\n" ,r); ap_rputs("</BODY>\n" ,r); ap_rputs("</HTML>\n" ,r); return OK; } /* Make the name of the content handler known to Apache */ static handler_rec hello_handlers[] = { {"hello-handler", hello_handler}, {NULL} }; /* Tell Apache what phases of the transaction we handle */ module MODULE_VAR_EXPORT hello_module = { STANDARD_MODULE_STUFF, NULL, /* module initializer */ NULL, /* per-directory config creator */ NULL, /* dir config merger */ NULL, /* server config creator */ NULL, /* server config merger */ NULL, /* command table */ hello_handlers, /* [9] content handlers */ NULL, /* [2] URI-to-filename translation */ NULL, /* [5] check/validate user_id */ NULL, /* [6] check user_id is valid *here* */ NULL, /* [4] check access by host address */ NULL, /* [7] MIME type checker/setter */ NULL, /* [8] fixups */ NULL, /* [10] logger */ NULL, /* [3] header parser */ NULL, /* process initialization */ NULL, /* process exit/cleanup */ NULL /* [1] post read_request handling */ }; CFLAGSenvironment variable before running the configure script: % CFLAGS=-g ./configure ... % gdb httpd (gdb) run -X -f ~www/conf/httpd.conf (gdb)prompt will return and tell you which function caused the crash. Now that you have an idea of where the problem is coming from, a breakpoint can be set to step through and see exactly what is wrong. If we were debugging sub handler { my $r = shift; # do something return SOME_STATUS_CODE; } $r. The handler retrieves whatever information it needs from the request object, does some processing, and possibly modifies the object to suit its needs. The handler then returns a numeric status code as its function result, informing Apache of the outcome of its work. We discuss the list of status codes and their significance in the next section. ($$)indicating that the subroutine takes two scalar arguments, the Perl API treats the handler as an object-oriented method call. In this case, the handler will receive two arguments. The handler's class (package) name or an object reference will be the first argument, and the Apache request object reference will be the second. This allows handlers to take advantage of class inheritance, polymorphism, and other useful object-oriented features. Handlers that use this feature are called "method handlers" and have the following structure: request_rec. Included among its various fields are pointers to a server_recand a conn_recstructure, which correspond to the Perl API's Apache::Server and Apache::Connection objects. We have much more to say about using the request_recin Chapters Chapter 10 and Chapter 11 when we discuss the C-language API in more detail. request_recin C) is the primary conduit for the transfer of information between modules and the server. Handlers can use the request object to perform several types of operations: OKstatus code if header_only() returns true (this is a slight improvement over the original Chapter 2 version of the program). We call get_remote_host( ) to get the DNS name of the remote host machine, and incorporate the name into a short HTML document that we transmit using the request object's print( ) method. At the end of the handler, we return OK. REDIRECTresult code. A complete functional example using mod_perl is only a few lines (Example 4.8). This module, named Apache::GoHome , redirects users to the hardcoded URI. When the user selects a document or a portion of the document tree that this content handler has been attached to, the browser will immediately jump to that URI. REDIRECTerror code from Apache::Constants ( REDIRECTisn't among the standard set of result codes imported with :common). The handler() method then adds the desired location to the outgoing headers by calling Apache:: header_out( ). header_out( ) can take one or two arguments. Called with one argument, it returns the current value of the indicated HTTP header field. Called with two arguments, it sets the field indicated by the first argument to the value indicated by the second argument. In this case, we use the two-argument form to set the HTTP Location field to the desired URI. REDIRECTresult code. There's no need to generate an HTML body, since most HTTP-compliant browsers will take you directly to the Location URI. However, Apache adds an appropriate body automatically in order to be HTTP-compliant. You can see the header and body message using telnet: % telnet localhost 80 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. ?character:...
http://www.oreilly.com/catalog/9781565925670/toc.html
crawl-001
refinedweb
1,032
53.81