The padata parallel execution mechanism¶

Date: May 2020

Padata is a mechanism by which the kernel can farm jobs out to be done in parallel on multiple CPUs while optionally retaining their ordering. It was originally developed for IPsec, which needs to perform encryption and decryption on large numbers of packets without reordering those packets. This is currently the sole consumer of padata's serialized job support. Padata also supports multithreaded jobs, splitting up the job evenly while load balancing and coordinating between threads.

Running Serialized Jobs¶

Initializing¶

The first step in using padata to run serialized jobs is to set up a padata_instance structure for overall control of how jobs are to be run:

#include <linux/padata.h>

struct padata_instance *padata_alloc(const char *name);

'name' simply identifies the instance. Then, complete padata initialization by allocating a padata_shell:

struct padata_shell *padata_alloc_shell(struct padata_instance *pinst);

A padata_shell is used to submit a job to padata and allows a series of such jobs to be serialized independently. A padata_instance may have one or more padata_shells associated with it, each allowing a separate series of jobs.

Modifying cpumasks¶

The CPUs used to run jobs can be changed in two ways, programmatically with padata_set_cpumask() or via sysfs. The former is defined:

int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type, cpumask_var_t cpumask);

Here cpumask_type is one of PADATA_CPU_PARALLEL or PADATA_CPU_SERIAL, where a parallel cpumask describes which processors will be used to execute jobs submitted to this instance in parallel and a serial cpumask defines which processors are allowed to be used as the serialization callback processor. cpumask specifies the new cpumask to use. There may be sysfs files for an instance's cpumasks. For example, pcrypt's live in /sys/kernel/pcrypt/<instance-name>.
Within an instance's directory there are two files, parallel_cpumask and serial_cpumask, and either cpumask may be changed by echoing a bitmask into the file, for example:

echo f > /sys/kernel/pcrypt/pencrypt/parallel_cpumask

Reading one of these files shows the user-supplied cpumask, which may be different from the 'usable' cpumask. Padata maintains two pairs of cpumasks internally, the user-supplied cpumasks and the 'usable' cpumasks. (Each pair consists of a parallel and a serial cpumask.) The user-supplied cpumasks default to all possible CPUs on instance allocation and may be changed as above. The usable cpumasks are always a subset of the user-supplied cpumasks and contain only the online CPUs in the user-supplied masks; these are the cpumasks padata actually uses. So it is legal to supply a cpumask to padata that contains offline CPUs. Once an offline CPU in the user-supplied cpumask comes online, padata will use it. Changing the CPU masks is an expensive operation, so it should not be done frequently.

Running A Job¶

Actually submitting work to the padata instance requires the creation of a padata_priv structure, which represents one job. This structure will almost certainly be embedded within some larger structure specific to the work to be done; it should be zeroed at initialization time, and its parallel() and serial() callback functions should be provided. The job is then submitted with:

int padata_do_parallel(struct padata_shell *ps, struct padata_priv *padata, int *cb_cpu);

The ps and padata structures must be set up as described above; cb_cpu points to the preferred CPU to be used for the final callback when the job is done; it must be in the current instance's CPU mask (if not, the cb_cpu pointer is updated to point to the CPU actually chosen). The return value from padata_do_parallel() is zero on success, indicating that the job is in progress. -EBUSY means that somebody, somewhere else is messing with the instance's CPU mask, while -EINVAL is a complaint about cb_cpu not being in the serial cpumask, no online CPUs in the parallel or serial cpumasks, or a stopped instance.
Each job submitted to padata_do_parallel() will, in turn, be passed to exactly one call to the above-mentioned parallel() function, on one CPU, so true parallelism is achieved by submitting multiple jobs. parallel() runs with software interrupts disabled and thus cannot sleep. The job need not be completed during this call, but, if parallel() leaves work outstanding, it should be prepared to be called again with a new job before the previous one completes.

Serializing Jobs¶

When a job does complete, parallel() (or whatever function actually finishes the work) should inform padata with a call to:

void padata_do_serial(struct padata_priv *padata);

At some point afterward, the job's serial() callback will be invoked on the requested callback CPU; it, too, runs with local software interrupts disabled. Note that this call may be deferred for a while since the padata code takes pains to ensure that jobs are completed in the order in which they were submitted.

Destroying¶

Cleaning up a padata instance predictably involves calling the two free functions that correspond to the allocations, in reverse order:

void padata_free_shell(struct padata_shell *ps);
void padata_free(struct padata_instance *pinst);

It is the user's responsibility to ensure all outstanding jobs are complete before any of the above are called.

Running Multithreaded Jobs¶

A multithreaded job has a main thread and zero or more helper threads, with the main thread participating in the job and then waiting until all helpers have finished. padata splits the job into units called chunks, where a chunk is a piece of the job that one thread completes in one call to the thread function. A user has to do three things to run a multithreaded job. First, describe the job by defining a padata_mt_job structure, which is explained in the Interface section. This includes a pointer to the thread function, which padata will call each time it assigns a job chunk to a thread. Then, define the thread function, which accepts three arguments, start, end, and arg, where the first two delimit the range that the thread operates on and the last is a pointer to the job's shared state, if any.
Prepare the shared state, which is typically allocated on the main thread's stack. Last, call padata_do_multithreaded(), which will return once the job is finished.

Interface¶

Definition

struct padata_priv {
    struct list_head list;
    struct parallel_data *pd;
    int cb_cpu;
    unsigned int seq_nr;
    int info;
    void (*parallel)(struct padata_priv *padata);
    void (*serial)(struct padata_priv *padata);
};

list List entry, to attach to the padata lists. pd Pointer to the internal control structure. cb_cpu Callback cpu for serialization. seq_nr Sequence number of the parallelized data object. info Used to pass information from the parallel to the serial function. parallel Parallel execution function. serial Serial complete function.

Definition

struct padata_list {
    struct list_head list;
    spinlock_t lock;
};

list List head. lock List lock.

Definition

struct padata_serial_queue {
    struct padata_list serial;
    struct work_struct work;
    struct parallel_data *pd;
};

serial List to wait for serialization after reordering. work Work struct for serialization. pd Backpointer to the internal control structure.

Definition

struct padata_cpumask {
    cpumask_var_t pcpu;
    cpumask_var_t cbcpu;
};

pcpu cpumask for the parallel workers. cbcpu cpumask for the serial (callback) workers.

- struct parallel_data¶ Internal control structure, covers everything that depends on the cpumask in use.

Definition

struct parallel_data {
    struct padata_shell *ps;
    struct padata_list __percpu *reorder_list;
    struct padata_serial_queue __percpu *squeue;
    atomic_t refcnt;
    unsigned int seq_nr;
    unsigned int processed;
    int cpu;
    struct padata_cpumask cpumask;
    struct work_struct reorder_work;
    spinlock_t lock;
};

ps padata_shell object. reorder_list Percpu reorder lists. squeue Percpu padata queues used for serialization. refcnt Number of objects holding a reference on this parallel_data. seq_nr Sequence number of the parallelized data object. processed Number of already processed objects. cpu Next CPU to be processed.
cpumask The cpumasks in use for parallel and serial workers. reorder_work Work struct for reordering. lock Reorder lock.

- struct padata_shell¶ Wrapper around struct parallel_data; its purpose is to allow the underlying control structure to be replaced on the fly using RCU.

Definition

struct padata_shell {
    struct padata_instance *pinst;
    struct parallel_data __rcu *pd;
    struct parallel_data *opd;
    struct list_head list;
};

pinst padata instance. pd Actual parallel_data structure which may be substituted on the fly. opd Pointer to old pd to be freed by padata_replace. list List entry in padata_instance list.

Definition

struct padata_mt_job {
    void (*thread_fn)(unsigned long start, unsigned long end, void *arg);
    void *fn_arg;
    unsigned long start;
    unsigned long size;
    unsigned long align;
    unsigned long min_chunk;
    int max_threads;
};

thread_fn Called for each chunk of work that a padata thread does. fn_arg The thread function argument. start The start of the job (units are job-specific). size Size of this node's work (units are job-specific). align Ranges passed to the thread function fall on this boundary, with the possible exceptions of the beginning and end of the job. min_chunk The minimum chunk size in job-specific units. This allows the client to communicate the minimum amount of work that's appropriate for one worker thread to do at once. max_threads Max threads to use for the job; the actual number may be less depending on task size and minimum chunk size.

Definition

struct padata_instance {
    struct hlist_node cpu_online_node;
    struct hlist_node cpu_dead_node;
    struct workqueue_struct *parallel_wq;
    struct workqueue_struct *serial_wq;
    struct list_head pslist;
    struct padata_cpumask cpumask;
    struct kobject kobj;
    struct mutex lock;
    u8 flags;
#define PADATA_INIT    1
#define PADATA_RESET   2
#define PADATA_INVALID 4
};

cpu_online_node Linkage for CPU online callback. cpu_dead_node Linkage for CPU offline callback. parallel_wq The workqueue used for parallel work.
serial_wq The workqueue used for serial work. pslist List of padata_shell objects attached to this instance. cpumask User-supplied cpumasks for parallel and serial work. kobj padata instance kernel object. lock padata instance lock. flags padata flags.

- int padata_do_parallel(struct padata_shell *ps, struct padata_priv *padata, int *cb_cpu)¶ padata parallelization function

Parameters struct padata_shell *ps padata shell struct padata_priv *padata object to be parallelized int *cb_cpu pointer to the CPU that the serialization callback function should run on. If it's not in the serial cpumask of pinst (i.e. cpumask.cbcpu), this function selects a fallback CPU and, if none is found, returns -EINVAL.

Description The parallelization callback function will run with BHs off.

Note Every object which is parallelized by padata_do_parallel must be seen by padata_do_serial.

Return 0 on success or else negative error code.

- void padata_do_serial(struct padata_priv *padata)¶ padata serialization function

Parameters struct padata_priv *padata object to be serialized.

Description padata_do_serial must be called for every parallelized object. The serialization callback function will run with BHs off.

- void padata_do_multithreaded(struct padata_mt_job *job)¶ run a multithreaded job

Parameters struct padata_mt_job *job Description of the job.

Description See the definition of struct padata_mt_job for more details.

- int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type, cpumask_var_t cpumask)¶ Set the cpumask specified by cpumask_type to the given cpumask.

Parameters struct padata_instance *pinst padata instance int cpumask_type PADATA_CPU_SERIAL or PADATA_CPU_PARALLEL, corresponding to the serial and parallel cpumasks respectively.
cpumask_var_t cpumask the cpumask to use

Return 0 on success or negative error code

- struct padata_instance * padata_alloc(const char *name)¶ allocate and initialize a padata instance

Parameters const char *name used to identify the instance

Return new instance on success, NULL on error

- void padata_free(struct padata_instance *pinst)¶ free a padata instance

Parameters struct padata_instance *pinst padata instance to free

- struct padata_shell * padata_alloc_shell(struct padata_instance *pinst)¶ Allocate and initialize a padata shell.

Parameters struct padata_instance *pinst Parent padata_instance object.

Return new shell on success, NULL on error

- void padata_free_shell(struct padata_shell *ps)¶ free a padata shell

Parameters struct padata_shell *ps padata shell to free
https://www.kernel.org/doc/html/v5.12/core-api/padata.html
UI::Panel::get_inner_h() const: Assertion `tborder_ + bborder_ <= h_' failed.

Bug Description

Sometimes I get this crash: /trunk/ First I thought this crash happens only for the window "You won the game", but today I got it in the middle of a game. Can't remember trying to open a window at all at this point in the game. The output.txt shows the terminal output. Backtrace will follow. To reproduce, select "Watch Replay" and then "Back" repeatedly.

Related branches
- Klaus Halfmann: Approve (code review) on 2017-03-04
- kaputtnik: Approve (testing) on 2017-03-02
- GunChleoc: Resubmit on 2017-03-02
- Diff: 531 lines (+82/-61), 22 files modified
  src/ui_basic/box.cc (+7/-6)
  src/ui_basic/fileview_panel.cc (+3/-0)
  src/ui_basic/listselect.cc (+1/-2)
  src/ui_basic/multilineeditbox.cc (+5/-5)
  src/ui_basic/multilinetextarea.cc (+1/-1)
  src/ui_basic/panel.cc (+21/-4)
  src/ui_basic/panel.h (+2/-8)
  src/ui_basic/scrollbar.cc (+1/-1)
  src/ui_basic/spinbox.cc (+18/-17)
  src/ui_basic/table.cc (+5/-8)
  src/ui_basic/table.h (+5/-4)
  src/ui_basic/tabpanel.cc (+2/-2)
  src/ui_basic/textarea.cc (+1/-1)
  src/ui_basic/window.cc (+1/-1)
  src/ui_fsmenu/campaign_select.cc (+1/-0)
  src/ui_fsmenu/internet_lobby.cc (+1/-0)
  src/ui_fsmenu/loadgame.cc (+2/-0)
  src/ui_fsmenu/netsetup_lan.cc (+1/-0)
  src/wui/actionconfirm.cc (+1/-1)
  src/wui/game_message_menu.cc (+1/-0)
  src/wui/game_summary.cc (+1/-0)
  src/wui/maptable.cc (+1/-0)

Alexander, do you have a multi-monitor setup? On the machine where I encountered this I use two monitors, both with the same resolution. Widelands is running on the right monitor then. Maybe related: bug 1656619

I also see this occasionally, but found no instance of how to repro. I have two monitors most of the time, Widelands runs on the larger one: Resolution: 2880 x 1800 Retina (so this means it is 1440x900 normal) Resolution: 3440 x 1440 @ 60 Hz

I can now trigger this crash (UI::Panel::get_inner_h() const assertion failure) every time I try to open "Watch Replay".
- bzr8241
- Window mode
- Resolution 1440x900
- Two monitors, each with resolution 1680x1050

Backtrace attached, output of valgrind attached.

Don't know if this is related: I compiled this branch without debugging symbols https:/ The "Watch Replay" then looks like the screenshot.

While accessing the internet lobby, I just got one in button.cc:202: const Image* entry_text_im = autofit_

I guess these are coming from the new dynamic layouts for fullscreen, where some panels might be initialized with 0 width/height before their actual width/height is set in the layout() function. How about rewriting the functions a bit? Instead of:

int get_inner_w() const {
    assert(lborder_ + rborder_ <= w_);
    return w_ - (lborder_ + rborder_);
}

We could have:

int get_inner_w() const {
    assert(w_ == 0 || lborder_ + rborder_ <= w_);
    return w_ == 0 ? 0 : w_ - (lborder_ + rborder_);
}

OK, I can confirm this is caused by the more dynamic layouts, where w_ or h_ would become negative for a while. Changing the assertion will not help in this case. I will try to add some minimum width and height. The alternative would be to first layout() without assertions and then check the layout later. (As of my experience this will result in some ugly circular layout issues.) Table::layout() uses uint32 for column width, which results in some ugly mix of underflow and cast back to int. We should replace all the uints in ui_basic with ints.

I observed int UI::Panel::get_inner_w() const: Assertion `lborder_ + rborder_ <= w_' failed. which may be related.
https://bugs.launchpad.net/widelands/+bug/1653460
I can't execute command "import" with the VLT Tool

jasonjifly Apr 9, 2012 12:36 AM

When I use "exec" of Apache Ant to execute the command "import" of VLT, VLT always reports errors. It only works in this situation: set <jcrPath> to "." and set <localPath> to "/". Once I change either of them, an error message is displayed by VLT in the console. How can I set them when I want to import a file "C:\main\remote\localhost_4502\jcr_root\apps\vdc\test.txt", and the server url is "jcr_root/apps/vdc/test.txt"? Thanks

1. Re: I can't execute command "import" with the VLT Tool
justin_at_adobe Apr 9, 2012 7:02 AM (in response to jasonjifly)

If you are really just trying to upload a single file, VLT is probably not the right tool for you. Just use curl:

curl -T C:\main\remote\localhost_4502\jcr_root\apps\vdc\test.txt -u admin:admin

2. Re: I can't execute command "import" with the VLT Tool
jasonjifly Apr 9, 2012 8:16 PM (in response to justin_at_adobe)

Your suggestion is very useful to me. My primary purpose is achieving this goal: 1. Maintaining a CQ5 project with more than one person, without code conflicts, so we need a version control system. 2. Developing with Eclipse, because Eclipse is very powerful. But how do we integrate Eclipse with CRXDE Lite, just like Eclipse and Tomcat, so that when I save the code it is synchronized to CRXDE Lite automatically? If we don't use VLT, can we do that? I'm sorry for my poor English; I don't know whether I express myself intelligibly. Thank you!

3. Re: I can't execute command "import" with the VLT Tool
justin_at_adobe Apr 10, 2012 4:47 AM (in response to jasonjifly) 1 person found this helpful

You should use VLT for this, but not import (at least not in day-to-day operation). When you make a change in Eclipse, run "vlt commit" on the command line to push the changed file into your CRX repository. If there's a new file, run "vlt add <file>", then "vlt commit".
If you're removing a file, run "vlt rm <file>" then "vlt commit". I believe there are ways to hook this into Eclipse as some kind of automatically running external tool, but I don't know the details of this. import and export are very coarse-grained operations. commit (and, conversely, update) only deal with the files that have changed. You do need to use import when you are first setting up a project. Usually, the first developer does a checkout from their local repository, then an import into version control, and then subsequent developers do a checkout from version control, an import into their local repository, and then a checkout to set up the VLT working copy. There's also a new "sync" feature available in some versions of vlt which runs as a background process keeping a working copy and repository structure in sync with each other. I'd suggest contacting DayCare if you are interested in this. Be sure to read the documentation, especially the section about a filtered checkout. Justin

4. Re: I can't execute command "import" with the VLT Tool
jasonjifly Apr 11, 2012 1:02 AM (in response to justin_at_adobe)

I have just one more question for you: if I commit a Java file via VLT or curl, will CQ5 compile it automatically?

5. Re: I can't execute command "import" with the VLT Tool
justin_at_adobe Apr 11, 2012 7:37 AM (in response to jasonjifly)

It depends. CQ supports Java as a scripting language. This isn't as commonly used as JSP or ESP. You'll see an example here: /libs/foundation/components/parbase/img.GET.java In this case, if you update the Java file using vlt, it'll be recompiled (just like a JSP). If you are talking about code in an OSGi bundle, just updating the file won't cause the bundle to be recompiled. You should use a CI server for this.

6. Re: I can't execute command "import" with the VLT Tool
jasonjifly Apr 12, 2012 4:08 AM (in response to justin_at_adobe)

I really appreciate your help. Thanks!
I know CRXDE Lite has functionality to create a bundle. It runs within a browser, so when I create a bundle with CRXDE Lite, the browser just makes an HTTP request. Do you think it's possible to simulate this request with "curl"?

7. Re: I can't execute command "import" with the VLT Tool
justin_at_adobe Apr 12, 2012 6:23 AM (in response to jasonjifly)

I'm sure this is possible. I'd suggest using something like Firebug to capture the HTTP request.

8. Re: I can't execute command "import" with the VLT Tool
jasonjifly Apr 22, 2012 11:29 PM (in response to justin_at_adobe)

Hi Justin, I have found that the command "import" of vlt can only import the whole project; importing just a directory is not permitted. Is there any way to upload a whole directory and create it on the JCR remotely without the .vlt files? I think curl is very useful for uploading a single file, but it cannot upload or create a whole directory. Best Regards. Jason

9. Re: I can't execute command "import" with the VLT Tool
klcodanr85 Apr 24, 2012 5:25 PM (in response to jasonjifly)

If you are using Eclipse, VaultClipse is a tool that might help you. You can create your content structure inside Eclipse and use VaultClipse to import and export it. Otherwise, I've also seen people create simple forms that make requests against the Sling REST API, but it's kind of clunky and not very reusable.
https://forums.adobe.com/message/4323639
Signed-off-by: Kent Overstreet <koverstreet google com> --- Documentation/ABI/testing/sysfs-block-bcache | 156 ++++++++++++++++ Documentation/bcache.txt | 255 ++++++++++++++++++++++++++ drivers/md/Kconfig | 2 + drivers/md/Makefile | 1 + drivers/md/bcache/Kconfig | 41 ++++ drivers/md/bcache/Makefile | 14 ++ include/linux/cgroup_subsys.h | 6 + include/linux/sched.h | 4 + kernel/fork.c | 4 + 9 files changed, 483 insertions(+), 0 deletions(-) create mode 100644 Documentation/ABI/testing/sysfs-block-bcache create mode 100644 Documentation/bcache.txt create mode 100644 drivers/md/bcache/Kconfig create mode 100644 drivers/md/bcache/Makefile diff --git a/Documentation/ABI/testing/sysfs-block-bcache b/Documentation/ABI/testing/sysfs-block-bcache new file mode 100644 index 0000000..9e4bbc5 --- /dev/null +++ b/Documentation/ABI/testing/sysfs-block-bcache @@ -0,0 +1,156 @@ +What: /sys/block/<disk>/bcache/unregister +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + A write to this file causes the backing device or cache to be + unregistered. If a backing device had dirty data in the cache, + writeback mode is automatically disabled and all dirty data is + flushed before the device is unregistered. Caches unregister + all associated backing devices before unregistering themselves. + +What: /sys/block/<disk>/bcache/clear_stats +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + Writing to this file resets all the statistics for the device. + +What: /sys/block/<disk>/bcache/cache +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For a backing device that has cache, a symlink to + the bcache/ dir of that cache. + +What: /sys/block/<disk>/bcache/cache_hits +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For backing devices: integer number of full cache hits, + counted per bio. A partial cache hit counts as a miss. 
+ +What: /sys/block/<disk>/bcache/cache_misses +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For backing devices: integer number of cache misses. + +What: /sys/block/<disk>/bcache/cache_hit_ratio +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For backing devices: cache hits as a percentage. + +What: /sys/block/<disk>/bcache/sequential_cutoff +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For backing devices: Threshold past which sequential IO will + skip the cache. Read and written as bytes in human readable + units (i.e. echo 10M > sequential_cutoff). + +What: /sys/block/<disk>/bcache/bypassed +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + Sum of all reads and writes that have bypassed the cache (due + to the sequential cutoff). Expressed as bytes in human + readable units. + +What: /sys/block/<disk>/bcache/writeback +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For backing devices: When on, writeback caching is enabled and + writes will be buffered in the cache. When off, caching is in + writethrough mode; reads and writes will be added to the + cache but no write buffering will take place. + +What: /sys/block/<disk>/bcache/writeback_running +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For backing devices: when off, dirty data will not be written + from the cache to the backing device. The cache will still be + used to buffer writes until it is mostly full, at which point + writes transparently revert to writethrough mode. Intended only + for benchmarking/testing.
+ +What: /sys/block/<disk>/bcache/writeback_delay +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For backing devices: In writeback mode, when dirty data is + written to the cache and the cache held no dirty data for that + backing device, writeback from cache to backing device starts + after this delay, expressed as an integer number of seconds. + +What: /sys/block/<disk>/bcache/writeback_percent +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For backing devices: If nonzero, writeback from cache to + backing device only takes place when more than this percentage + of the cache is used, allowing more write coalescing to take + place and reducing total number of writes sent to the backing + device. Integer between 0 and 40. + +What: /sys/block/<disk>/bcache/synchronous +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For a cache, a boolean that allows synchronous mode to be + switched on and off. In synchronous mode all writes are ordered + such that the cache can reliably recover from unclean shutdown; + if disabled bcache will not generally wait for writes to + complete but if the cache is not shut down cleanly all data + will be discarded from the cache. Should not be turned off with + writeback caching enabled. + +What: /sys/block/<disk>/bcache/discard +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For a cache, a boolean allowing discard/TRIM to be turned off + or back on if the device supports it. + +What: /sys/block/<disk>/bcache/bucket_size +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For a cache, bucket size in human readable units, as set at + cache creation time; should match the erase block size of the + SSD for optimal performance. 
+ +What: /sys/block/<disk>/bcache/nbuckets +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For a cache, the number of usable buckets. + +What: /sys/block/<disk>/bcache/tree_depth +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For a cache, height of the btree excluding leaf nodes (i.e. a + one node tree will have a depth of 0). + +What: /sys/block/<disk>/bcache/btree_cache_size +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + Number of btree buckets/nodes that are currently cached in + memory; cache dynamically grows and shrinks in response to + memory pressure from the rest of the system. + +What: /sys/block/<disk>/bcache/written +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For a cache, total amount of data in human readable units + written to the cache, excluding all metadata. + +What: /sys/block/<disk>/bcache/btree_written +Date: November 2010 +Contact: Kent Overstreet <kent overstreet gmail com> +Description: + For a cache, sum of all btree writes in human readable units. diff --git a/Documentation/bcache.txt b/Documentation/bcache.txt new file mode 100644 index 0000000..270c734 --- /dev/null +++ b/Documentation/bcache.txt @@ -0,0 +1,255 @@ +Say you've got a big slow raid 6, and an X-25E or three. Wouldn't it be +nice if you could use them as cache... Hence bcache. + +Userspace tools and a wiki are at: + git://evilpiepirate.org/~kent/bcache-tools.git + + +It's designed around the performance characteristics of SSDs - it only allocates +in erase block sized buckets, and it uses a hybrid btree/log to track cached +extents. A cache device is formatted with the make-bcache tool from bcache-tools, for example: + make-bcache -w2k -b1M -j64 /dev/sdc
+ +When you register a backing device, you'll get a new /dev/bcache# device. + +To enable caching, you need to attach the backing device to the cache set by +specifying the UUID: + echo <UUID> > /sys/block/sdb/bcache/attach + +The cache set with that UUID need not be registered to attach to it - the UUID +will be saved to the backing device's superblock and it'll start being cached +when the cache set does show up. + +This only has to be done once. The next time you reboot, just reregister all +your bcache devices. If a backing device has data in a cache somewhere, the +/dev/bcache# device won't be created until the cache shows up. + + +Other sysfs files for the backing device: + + bypassed + Sum of all IO, reads and writes, that have bypassed the cache + + cache_hits + cache_misses + cache_hit_ratio + Hits and misses are counted per individual IO as bcache sees them; a + partial hit is counted as a miss. + + cache_miss_collisions + Count of times a read completes but the data is already in the cache and + is therefore redundant. This is usually caused by readahead while a + read to the same location occurs. + + cache_readaheads + Count of times readahead occurred. + + clear_stats + Writing to this file resets all the statistics. + + flush_delay_ms + flush_delay_ms_sync + Optional delay for btree writes to allow for more coalescing of updates to + the index. Defaults to 0. + + label + Name of underlying device. + + readahead + Size of readahead that should be performed. Defaults to 0. If set to e.g. + 1M, it will round cache miss reads up to that size, but without overlapping + existing cache entries. + + running + 1 if bcache is running. + + sequential_cutoff + A sequential IO will bypass the cache once it passes this threshold; the + most recent 128 IOs are tracked so sequential IO can be detected even when + it isn't all done at once. + + sequential_cutoff_average + If the weighted average from a client is higher than this cutoff we bypass + all IO.
+ + unregister + Writing to this file disables caching on that device + + writeback + Boolean, if off only writethrough caching is done + + writeback_delay + When dirty data is written to the cache and it previously did not contain + any, waits some number of seconds before initiating writeback. Defaults to + 30. + + writeback_percent + To allow for more buffering of random writes, writeback only proceeds when + more than this percentage of the cache is unavailable. Defaults to 0. + + writeback_running + If off, writeback of dirty data will not take place at all. Dirty data will + still be added to the cache until it is mostly full; only meant for + benchmarking. Defaults to on. + +For the cache set: + active_journal_entries + Number of journal entries that are newer than the index. + + average_key_size + Average data per key in the btree. + + average_seconds_between_gc + How often garbage collection is occurring. + + block_size + Block size of the virtual device. + + btree_avg_keys_written + Average number of keys per write to the btree when a node wasn't being + rewritten - indicates how much coalescing is taking place. + + btree_cache_size + Number of btree buckets currently cached in memory + + btree_nodes + Total nodes in the btree. + + btree_used_percent + Average fraction of btree in use. + + bucket_size + Size of buckets + + bypassed + Sum of all IO, reads and writes, that have bypassed the cache + + cache_available_percent + Percentage of cache device free. + + clear_stats + Clears the statistics associated with this cache + + dirty_data + How much dirty data is in the cache. + + gc_ms_max + Longest garbage collection. + + internal/bset_tree_stats + internal/btree_cache_max_chain + Internal. Statistics about the bset tree and chain length. Likely to be + hidden soon. + + io_error_halflife + io_error_limit + These determine how many errors we accept before disabling the cache. + Each error is decayed by the half life (in # ios).
If the decaying count
    reaches io_error_limit, dirty data is written out and the cache is disabled.

  root_usage_percent
    Percentage of the root btree node in use. If this gets too high the node
    will split, increasing the tree depth.

  seconds_since_gc
    When was the last garbage collection.

  synchronous
    Boolean; when on all writes to the cache are strictly ordered such that it
    can recover from unclean shutdown. If off it will not generally wait for
    writes to complete, but the entire cache contents will be invalidated on
    unclean shutdown. Not recommended that it be turned off when writeback is
    on.

  tree_depth
    Depth of the btree.

  trigger_gc
    Force garbage collection to run now.

  unregister
    Closes the cache device and all devices being cached; if dirty data is
    present it will disable writeback caching and wait for it to be flushed.


For each cache within a cache set:
  btree_written
    Sum of all btree writes, in (kilo/mega/giga) bytes

  discard
    Boolean; if on a discard/TRIM will be issued to each bucket before it is
    reused. Defaults to on if supported.

  io_errors
    Number of errors that have occurred, decayed by io_error_halflife.

  metadata_written
    Total metadata written (btree + other metadata).

  nbuckets
    Total buckets in this cache

  priority_stats
    Statistics about how recently data in the cache has been accessed. This can
    reveal your working set size.

  written
    Sum of all data that has been written to the cache; comparison with
    btree_written gives the amount of write inflation in bcache.

diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 10f122a..d977b45 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -185,6 +185,8 @@ config MD_FAULTY

 	  If unsure, say N.
+source "drivers/md/bcache/Kconfig" + config BLK_DEV_DM tristate "Device mapper support" ---help--- diff --git a/drivers/md/Makefile b/drivers/md/Makefile index 8b2e0df..0d4b86b 100644 --- a/drivers/md/Makefile +++ b/drivers/md/Makefile @@ -26,6 +26,7 @@ obj-$(CONFIG_MD_RAID10) += raid10.o obj-$(CONFIG_MD_RAID456) += raid456.o obj-$(CONFIG_MD_MULTIPATH) += multipath.o obj-$(CONFIG_MD_FAULTY) += faulty.o +obj-$(CONFIG_BCACHE) += bcache/ obj-$(CONFIG_BLK_DEV_MD) += md-mod.o obj-$(CONFIG_BLK_DEV_DM) += dm-mod.o obj-$(CONFIG_DM_BUFIO) += dm-bufio.o diff --git a/drivers/md/bcache/Kconfig b/drivers/md/bcache/Kconfig new file mode 100644 index 0000000..9acd870 --- /dev/null +++ b/drivers/md/bcache/Kconfig @@ -0,0 +1,41 @@ + +config BCACHE + tristate "Block device as cache" + select CLOSURES + ---help--- + Allows a block device to be used as cache for other devices; uses + a btree for indexing and the layout is optimized for SSDs. + + See Documentation/bcache.txt for details. + +config BCACHE_DEBUG + bool "Bcache debugging" + depends on BCACHE + ---help--- + Don't select this option unless you're a developer + + Enables extra debugging tools (primarily a fuzz tester) + +config BCACHE_EDEBUG + bool "Extended runtime checks" + depends on BCACHE + ---help--- + Don't select this option unless you're a developer + + Enables extra runtime checks which significantly affect performance + +config BCACHE_LATENCY_DEBUG + bool "Latency tracing for bcache" + depends on BCACHE + ---help--- + Hacky latency tracing that has nevertheless been useful in the past: + adds a global variable accessible via /sys/fs/bcache/latency_warn_ms, + which defaults to 0. If nonzero, any timed operation that takes longer + emits a printk. 
+ +config CGROUP_BCACHE + bool "Cgroup controls for bcache" + depends on BCACHE && BLK_CGROUP + ---help--- + TODO + diff --git a/drivers/md/bcache/Makefile b/drivers/md/bcache/Makefile new file mode 100644 index 0000000..0e5305d --- /dev/null +++ b/drivers/md/bcache/Makefile @@ -0,0 +1,14 @@ + +obj-$(CONFIG_BCACHE) += bcache.o + +bcache-y := alloc.o btree.o bset.o io.o journal.o\ + writeback.o movinggc.o request.o super.o debug.o util.o trace.o stats.o + +CFLAGS_alloc.o += -std=gnu99 +CFLAGS_btree.o += -std=gnu99 +CFLAGS_bset.o += -std=gnu99 +CFLAGS_journal.o += -std=gnu99 +CFLAGS_movinggc.o += -std=gnu99 +CFLAGS_request.o += -std=gnu99 -Iblock +CFLAGS_super.o += -std=gnu99 +CFLAGS_debug.o += -std=gnu99 diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h index 0bd390c..d698634 100644 --- a/include/linux/cgroup_subsys.h +++ b/include/linux/cgroup_subsys.h @@ -72,3 +72,9 @@ SUBSYS(net_prio) #endif /* */ + +#ifdef CONFIG_CGROUP_BCACHE +SUBSYS(bcache) +#endif + +/* */ diff --git a/include/linux/sched.h b/include/linux/sched.h index 4a1f493..1741596 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1583,6 +1583,10 @@ struct task_struct { struct uprobe_task *utask; int uprobe_srcu_id; #endif +#if defined(CONFIG_BCACHE) || defined(CONFIG_BCACHE_MODULE) + unsigned int sequential_io; + unsigned int sequential_io_avg; +#endif }; /* Future-safe accessor for struct task_struct's cpus_allowed. */ diff --git a/kernel/fork.c b/kernel/fork.c index f00e319..d47494b 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1314,6 +1314,10 @@ static struct task_struct *copy_process(unsigned long clone_flags, p->memcg_batch.do_batch = 0; p->memcg_batch.memcg = NULL; #endif +#ifdef CONFIG_BCACHE + p->sequential_io = 0; + p->sequential_io_avg = 0; +#endif /* Perform scheduler related setup. Assign this task to a CPU. */ sched_fork(p); -- 1.7.7.3
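The hit and miss counters documented above are plain per-IO tallies, so the ratio bcache displays can be recomputed for scripting. The following is an illustrative sketch only - the exact sysfs path and the rounding to a whole percentage are assumptions, not taken from the patch:

```python
def cache_hit_ratio(hits: int, misses: int) -> int:
    """Hit ratio as a whole percentage. Partial hits count as misses,
    matching how bcache tallies per-IO hits and misses above."""
    total = hits + misses
    if total == 0:
        return 0
    return round(100 * hits / total)

def read_stat(path: str) -> int:
    """Read one integer counter from sysfs, e.g. a path like
    /sys/block/sdb/bcache/cache_hits (path assumed for illustration)."""
    with open(path) as f:
        return int(f.read().strip())
```

For example, three hits against one miss gives a 75% ratio.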
http://www.redhat.com/archives/dm-devel/2012-July/msg00164.html
I am working on a Django project where I need to create a form for inputs. I tried to import reverse from django.core.urlresolvers. I got an error:

 line 2, in
 from django.core.urlresolvers import reverse
 ImportError: No module named 'django.core.urlresolvers'

I am using Python 3.5.2, Django 2.0 and MySQL.

Hello @kartik,

If you want to import reverse, import it from django.urls:

 from django.urls import reverse

Hope it works!! Thank You!!
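The answer above works on Django 2.0; code that must run on older Django versions often uses a defensive import instead. This is a common compatibility pattern, not something from the thread itself (reverse became importable from django.urls in Django 1.10, and django.core.urlresolvers was removed in 2.0):

```python
def import_reverse():
    """Return (reverse, module_path), trying the new location first.

    Falls back to the pre-2.0 location, and returns (None, None)
    when Django is not installed at all.
    """
    try:
        from django.urls import reverse  # Django >= 1.10
        return reverse, "django.urls"
    except ImportError:
        pass
    try:
        from django.core.urlresolvers import reverse  # Django <= 1.9
        return reverse, "django.core.urlresolvers"
    except ImportError:
        return None, None
```

Call it once at import time and bind the result to a module-level name.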
https://www.edureka.co/community/80052/error-no-module-named-django-core-urlresolvers
I have an issue: after I added a gadget on the dashboard, initially it displayed the whole dashboard correctly, but after I refreshed the page, the dashboard became empty. Link to the screenshot:

The gadget I added is my custom gadget, which has a controller in the same project. Here is how I defined it.

[Gadget(Title = "Sales statistics")]
public class SalesStatisticsGadgetController : Controller
{
    public ActionResult Index()
    {
        // action code here
    }
}

But there were also other gadgets. This happens on the server, but locally, where I have only this gadget, it works fine. The Episerver log is empty. No errors or warnings. What could happen with it, and how can I fix it?

Any JS error?

No, nothing in the Chrome Dev Tools console. Also, on the Network tab every request is 200.

I remember that adding a new gadget required adding this:

<module>
  <assemblies>
    <add assembly="GadgetAssembly" />
  </assemblies>
</module>

And without this, it was working sometimes.

I found out that when I locally add the Order Gadget, I get the same behavior. I see in the module.config in the root of my site that it has the assembly added like:

<module>
  <assemblies>
    <add assembly="My.Web" />
  </assemblies>
</module>

As the gadget controller is in the My.Web project (same as the whole website).

After some testing, I found out that it doesn't like my gadget together with any of the Commerce gadgets. When I add my gadget and one of the Commerce gadgets - Order Gadget or Overview - then it stops working. But it works fine when my gadget is added together with other gadgets. For example, the BVN.404 redirect gadget.

Try to enable <clientResources debug="true" /> in the <episerver.framework> section. This will load the unminified JS files and output more information in the console. Just make sure not to run this in production.

Thanks, Magnus, it did help to track the issue. I found out that MenuPin caused that behavior. When I comment it out in my module.config, then everything works.
Now I have to find a way to make it work with MenuPin enabled.

© Episerver 2017 | About Episerver World
http://world.episerver.com/forum/developer-forum/-Episerver-75-CMS/Thread-Container/2017/1/how-to-fix-empty-dashboard/
When shall VB.NET be a retail product?

When do you think VB.NET will be a retail product, and how much time should I devote to using the beta to be reasonably prepared when it comes out?

All indications are that Microsoft will release VB.NET before the end of the year. I expect to see Beta2 go public within the next couple of months, and word is that all features will be frozen at that point. The second part of your question is harder, since what "reasonably prepared" means varies greatly depending on your role. In general, I recommend experimenting with the Beta version until you feel comfortable with the new IDE, have a general overall understanding of the system namespaces, and can use VB.NET's new inheritance features. That should give you a solid foundation to build on once it is officially released.
http://searchwindevelopment.techtarget.com/answer/When-shall-VBNET-be-a-retail-product
First and foremost, I found this answer particularly helpful. However, it made me wonder how one goes about finding such information. I can't seem to figure out how to iterate all the messages in my inbox. My current solution uses Uri.parse("content://mms-sms/conversations"):

public boolean refresh()
{
    final String[] proj = new String[]{"_id","ct_t"};
    cursor = cr.query(Uri.parse("content://mms-sms/conversations"),proj,null,null,null);
    if(!(cursor.moveToFirst()))
    {
        empty = true;
        cursor.close();
        return false;
    }
    return true;
}

public boolean next()
{
    if(empty)
    {
        cursor.close();
        return false;
    }
    msgCnt = msgCnt + 1;
    Msg msg;
    String msgData = cursor.getString(cursor.getColumnIndex("ct_t"));
    if("application/vnd.wap.multipart.related".equals(msgData))
    {
        msg = ParseMMS(cursor.getString(cursor.getColumnIndex("_id")));
    }
    else
    {
        msg = ParseSMS(cursor.getString(cursor.getColumnIndex("_id")));
    }
    if(!(cursor.moveToNext()))
    {
        empty = true;
        cursor.close();
        return false;
    }
    return true;
}

Well, what I am asking doesn't really seem possible. For those just starting out on such tasks, it's advisable to learn about how content providers work in general. Each Uri value added to the query returns access to specific tables. After spending some time looking at the different Telephony.Mmssms tables one can access, it seems, from my testing, that the only table you can access this way is "content://mms-sms/conversations", as using "content://mms-sms" leads to a null cursor.

Such is life, and it doesn't really make sense to iterate the messages that way, since the content and method of extracting the data differ greatly based on whether the msg is an SMS or MMS message. It makes sense to iterate and parse SMS and MMS messages separately and store the interesting data into the same class object type for one to manipulate how they would like at a later date. Useful to such a topic would be the Telephony.Sms documentation.
This is where one can find descriptions of the column index fields. You can find the same information for Telephony.Mms, as well as the sub-table Telephony.Mms.Part, with links to each of the base columns to describe the information. With this being said, here is the solution that worked for me for the question: How can I iterate all the SMS/MMS messages in the phone?

public class Main extends AppCompatActivity {
    //Not shown, Overrides, button to call IterateAll();
    //implementations to follow
    IterateAll();
    public void ScanMMS();
    public void ScanSMS();
    public void ParseMMS(Msg msg);
    public Bitmap getMmsImg(String id);
    public String getMmsAddr(String id);
}

IterateAll() just calls the two different functions

IterateAll()
{
    ScanMMS();
    ScanSMS();
}

ScanMMS() will iterate through the content://mms table, extracting the data from each MMS.

public void ScanMMS() {
    System.out.println("==============================ScanMMS()==============================");
    //Initialize Box
    Uri uri = Uri.parse("content://mms");
    Cursor c = getContentResolver().query(uri, null, null, null, null);
    if (c != null) {
        if (c.moveToFirst()) {
            do {
                System.out.println("--------------------MMS------------------");
                Msg msg = new Msg(c.getString(c.getColumnIndex("_id")));
                msg.setThread(c.getString(c.getColumnIndex("thread_id")));
                msg.setDate(c.getString(c.getColumnIndex("date")));
                msg.setAddr(getMmsAddr(msg.getID()));
                ParseMMS(msg);
                //System.out.println(msg);
            } while (c.moveToNext());
        }
        c.close();
    }
}

As one can see, a lot of the important MMS data is in this table, such as the date of the message, the message id and the thread id. You need to use that message ID to pull more information from MMS. The MMS message is divided into smaller parts of data. Each part contains something different, like an image, or a text portion. You have to iterate each part as I do below.
public void ParseMMS(Msg msg) {
    Uri uri = Uri.parse("content://mms/part");
    String mmsId = "mid = " + msg.getID();
    Cursor c = getContentResolver().query(uri, null, mmsId, null, null);
    while(c.moveToNext()) {
        /* String[] col = c.getColumnNames();
        String str = "";
        for(int i = 0; i < col.length; i++) {
            str = str + col[i] + ": " + c.getString(i) + ", ";
        }
        System.out.println(str);*/
        String pid = c.getString(c.getColumnIndex("_id"));
        String type = c.getString(c.getColumnIndex("ct"));
        if ("text/plain".equals(type)) {
            msg.setBody(msg.getBody() + c.getString(c.getColumnIndex("text")));
        } else if (type.contains("image")) {
            msg.setImg(getMmsImg(pid));
        }
    }
    c.close();
    return;
}

Each part has a mid field which corresponds to the id of the message found earlier. We search the MMS part library only for that mms id and then iterate the different parts found. ct, or content_type as described in the documentation, describes what the part is, i.e. text, image, etc. I scan the type to see what to do with that part. If it's plain text, I add that text to the current message body (apparently there can be multiple text parts; I haven't seen it, but I believe it), and if it's an image, then I load the image into a bitmap. I imagine Bitmaps will be easy to send with java to my computer, but who knows, maybe I'll want to just load it as a byte array. Anyway, here is how one will get the image data from the MMS part.

public Bitmap getMmsImg(String id) {
    Uri uri = Uri.parse("content://mms/part/" + id);
    InputStream in = null;
    Bitmap bitmap = null;
    try {
        in = getContentResolver().openInputStream(uri);
        bitmap = BitmapFactory.decodeStream(in);
        if(in != null)
            in.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return bitmap;
}

You know, I'm not entirely sure how opening an input stream on the content resolver really works and how it is giving me just the image and not like all the other data, no clue, but it seems to work. I stole this one from some different sources while looking for solutions.
The MMS addresses aren't as straightforward to pull as they are for SMS, but here is how you can get them all. The only thing I haven't been able to do is figure out who the sender was. I'd love it if someone knew that.

public String getMmsAddr(String id) {
    String sel = new String("msg_id=" + id);
    String uriString = MessageFormat.format("content://mms/{0}/addr", id);
    Uri uri = Uri.parse(uriString);
    Cursor c = getContentResolver().query(uri, null, sel, null, null);
    String name = "";
    while (c.moveToNext()) {
        /* String[] col = c.getColumnNames();
        String str = "";
        for(int i = 0; i < col.length; i++) {
            str = str + col[i] + ": " + c.getString(i) + ", ";
        }
        System.out.println(str);*/
        String t = c.getString(c.getColumnIndex("address"));
        if(!(t.contains("insert")))
            name = name + t + " ";
    }
    c.close();
    return name;
}

This was all just for MMS. The good news is that SMS is much simpler.

public void ScanSMS() {
    System.out.println("==============================ScanSMS()==============================");
    //Initialize Box
    Uri uri = Uri.parse("content://sms");
    Cursor c = getContentResolver().query(uri, null, null, null, null);
    if (c.moveToFirst()) {
        do {
            System.out.println("--------------------SMS------------------");
            Msg msg = new Msg(c.getString(c.getColumnIndex("_id")));
            msg.setDate(c.getString(c.getColumnIndex("date")));
            msg.setAddr(c.getString(c.getColumnIndex("address")));
            msg.setBody(c.getString(c.getColumnIndex("body")));
            msg.setDirection(c.getString(c.getColumnIndex("type")));
            msg.setContact(c.getString(c.getColumnIndex("person")));
            System.out.println(msg);
        } while (c.moveToNext());
    }
    c.close();
}

Here is my simple message structure so anyone may compile the above code quickly if wanted.

import android.graphics.Bitmap;

/**
 * Created by rbenedict on 3/16/2016.
*/ //import java.util.Date; public class Msg { private String id; private String t_id; private String date; private String dispDate; private String addr; private String contact; private String direction; private String body; private Bitmap img; private boolean bData; //Date vdat; public Msg(String ID) { id = ID; body = ""; } public void setDate(String d) { date = d; dispDate = msToDate(date); } public void setThread(String d) { t_id = d; } public void setAddr(String a) { addr = a; } public void setContact(String c) { if (c==null) { contact = "Unknown"; } else { contact = c; } } public void setDirection(String d) { if ("1".equals(d)) direction = "FROM: "; else direction = "TO: "; } public void setBody(String b) { body = b; } public void setImg(Bitmap bm) { img = bm; if (bm != null) bData = true; else bData = false; } public String getDate() { return date; } public String getDispDate() { return dispDate; } public String getThread() { return t_id; } public String getID() { return id; } public String getBody() { return body; } public Bitmap getImg() { return img; } public boolean hasData() { return bData; } public String toString() { String s = id + ". 
" + dispDate + " - " + direction + " " + contact + " " + addr + ": " + body; if (bData) s = s + "\nData: " + img; return s; } public String msToDate(String mss) { long time = Long.parseLong(mss,10); long sec = ( time / 1000 ) % 60; time = time / 60000; long min = time % 60; time = time / 60; long hour = time % 24 - 5; time = time / 24; long day = time % 365; time = time / 365; long yr = time + 1970; day = day - ( time / 4 ); long mo = getMonth(day); day = getDay(day); mss = String.valueOf(yr) + "/" + String.valueOf(mo) + "/" + String.valueOf(day) + " " + String.valueOf(hour) + ":" + String.valueOf(min) + ":" + String.valueOf(sec); return mss; } public long getMonth(long day) { long[] calendar = {31,28,31,30,31,30,31,31,30,31,30,31}; for(int i = 0; i < 12; i++) { if(day < calendar[i]) { return i + 1; } else { day = day - calendar[i]; } } return 1; } public long getDay(long day) { long[] calendar = {31,28,31,30,31,30,31,31,30,31,30,31}; for(int i = 0; i < 12; i++) { if(day < calendar[i]) { return day; } else { day = day - calendar[i]; } } return day; } } Some final comments and notes on this solution. The person field seems to always be NULL and later I plan to implement a contact look up. I also haven't been able to identify who sent the MMS message. I am not super familiar with java and I am still learning it. I am positive there is a data container (ArrayList) (Vector?) that could hold a user defined object. And if sortable by a specific field in the object (date), one could iterate that list and have a chronological order of all the message: both MMS/SMS and both sent/received.
https://codedump.io/share/sTOqkngAWvZP/1/find-and-interate-all-smsmms-messages-in-android
OK I feel really stupid for this, but I can't get it to work. I thought you just needed #include<time.h> at the top to use it, but obviously not, so can anyone tell me what it should be instead of time.h please?

perhaps this?
Code:#include <ctime>

Double Helix STL

Code:void wait ( int seconds )
{
  clock_t endwait;
  endwait = clock () + seconds * CLK_TCK ;
  while (clock() < endwait) {}
}

That is a very bad wait function - consider what it is doing... you are putting it into a loop where it will be sucking up the processor hardcore. A better solution is Sleep/sleep, search it on the board to find a nice function that is crossplatform

You're right, Sleep(); is better. That's in milliseconds, right? So, Sleep(1000);

Yes, it's in milliseconds. Sleep(1000); is about 1 second. Make sure to:
Code:#include <windows.h>
However, it's platform dependent!

Videogame Memories! A site dedicated to keeping videogame memories alive! Share your experiences with us now! "We will game forever!"

Oh and remember the uppercase issue with sleep(). In C it is: sleep(1000); In C++ it is: Sleep(1000); Very easy mistake to make

Double Helix STL

You're kidding, right? C and C++ don't have sleep() as part of the standard. I believe it's Sleep() - the WinAPI, and sleep() - the windows.h function.

Originally Posted by brewbuck: Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

Sleep() - win32
sleep() - *nix

Or did our friend want this wait()...
Code:void wait(size_t ms)
{
  #ifdef WIN32
  #include <windows.h>
  Sleep(ms);
  #else
  #include <unistd.h>
  usleep(ms);
  #endif
}
Last edited by jafet; 01-19-2007 at 11:55 PM.

I ...
don't think it's a very good idea to include headers within functions. Oh no, I really don't. What happened to the xp_sleep functions I posted a few weeks back? The board can't find them, and neither can the other board at which I post regularly.
Last edited by CornedBee; 01-20-2007 at 07:56 AM.

All the buzzt!
CornedBee

"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law

To be or not to be == true
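The distinction the thread is drawing - polling the clock versus yielding to the scheduler - is language-independent. A Python sketch of the two approaches (illustrative only; the C equivalents are the clock()-polling loop and Sleep()/usleep() discussed above):

```python
import time

def busy_wait(seconds: float) -> None:
    """Spin on the clock, like the clock()-polling wait():
    the CPU stays pegged for the whole interval."""
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

def idle_wait(seconds: float) -> None:
    """Block in the OS, like Sleep()/usleep(): the thread is
    descheduled and consumes essentially no CPU while waiting."""
    time.sleep(seconds)
```

Both return after roughly the requested interval, but only the second one lets other threads and processes use the core in the meantime.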
https://cboard.cprogramming.com/cplusplus-programming/87616-wait-undeclared.html
This article shows you how easy it is to set up a web service using WCF (Windows Communication Foundation). Using the source I've provided as a template, you can quickly create your own web services. This article does not deal with a client consuming the web service (you will have to wait until part 2).

I've been playing around with WCF for a while, since the early CTP's. I found it difficult to find good samples/examples. Either they didn't work, or I was either too lazy or too stupid to get them working. All I wanted was something that worked with very little effort (or understanding) on my part. I just wanted something I could install and get running!!! Of course at some point everyone will have to understand the ABC's (but that can wait for another day). When you start to learn a new technology, especially a Beta or CTP - you just want it to work; figuring out how it works can wait for another day.....

So I put together a simple WCF web service that you can just download and get running in a few minutes (for lazy developers - like myself!)

You are going to need Visual Studio 2005 (it might work with other versions of Visual Studio, but I've not tested it and I'm not going to!!), and .NET 3.0 (I would get the entire package from here instead). Then just download the source from above.

Download the example code and open up the solution in Visual Studio 2005. There are two projects, the web service and the implementation of the class.

[Fig 1] Fig 1 - showing the 2 projects

I have chosen to use the dev web server that is built into Visual Studio; it's just easier, less setup and mucking around. But there is no reason not to use IIS (if you know how to). When your code goes into production you will be using IIS, but for now I'm going to leave it alone.

There are two parts to the web service, the .svc file and the web.config.
WCFService.svc

<% @ServiceHost Service="WCFSample.EchoImplementation"%>

Web.Config

<system.serviceModel>
 <services>
  <service name="WCFSample.EchoImplementation" behaviorConfiguration="returnFaults">
   <endpoint contract="WCFSample.IEchoContract" binding="basicHttpBinding"/>
   <endpoint contract="IMetadataExchange" binding="mexHttpBinding" address="mex">
   </endpoint>
  </service>
 </services>
</system.serviceModel>

The Service from the ServiceHost attribute in the WCFService.svc file should match one of the service names in the web.config. The service name and endpoint contract should both match the implementation and contracts from the WCF Project Template.

The WCF Project Template is made up of 3 parts: the contract [data contract or message], the implementation, and the interface [ServiceContract] (yes, the ABC's had to come in somewhere).

The WCFContract.cs contains the interface for this service.

[ServiceContract]
interface IEchoContract
{
    [OperationContract]
    EchoMessage Echo(EchoMessage Message);
}

The Message which gets sent around is contained in the WCFContract.cs

[DataContract]
public class EchoMessage
{
    private string _OutMessage;
    private string _ReturnMessage;

    [DataMember]
    public string OutMessage
    {
        get { return _OutMessage; }
        set { _OutMessage = value; }
    }

    [DataMember]
    public string ReturnMessage
    {
        get { return _ReturnMessage; }
        set { _ReturnMessage = value; }
    }
}

The implementation of the web service is in WCFImplementation.cs

class EchoImplementation : IEchoContract
{
    public EchoMessage Echo(EchoMessage Message)
    {
        EchoMessage _returningMessage = new EchoMessage();
        _returningMessage.ReturnMessage = Message.OutMessage;
        return _returningMessage;
    }
}

For this example, I used the EchoMessage class to pass the data between the client and the web service, but this could be any class that has [DataContract] as an attribute.

Now it's your turn.
The code sample is a very simple service, but the structure can be copied for other web services; the example can be scaled out so there is a one-to-many or many-to-many relation between the .svc files
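The contract/implementation split is easy to see independent of WCF. Here is a minimal Python analogue of the EchoMessage data contract and the Echo operation (illustrative only - WCF itself is .NET, and this sketch carries none of the hosting or binding machinery):

```python
from dataclasses import dataclass

@dataclass
class EchoMessage:
    """Analogue of the [DataContract] class: two string members."""
    out_message: str = ""
    return_message: str = ""

def echo(message: EchoMessage) -> EchoMessage:
    """Analogue of EchoImplementation.Echo: the reply's ReturnMessage
    carries back the request's OutMessage."""
    return EchoMessage(return_message=message.out_message)
```

The point of the pattern is that the transport (basicHttpBinding, the .svc host) can change without touching either the message shape or the operation.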
http://www.codeproject.com/Articles/16973/Simple-Web-Service-using-WCF-Windows-Communication?fid=372431&df=90&mpp=10&sort=Position&spc=None&tid=3880119
Note: Development of the AbsorptionSpectrum module has been moved to the Trident package. This version is deprecated and will be removed from yt in a future release. See the Trident package for further information.

Absorption line spectra are spectra generated using bright background sources to illuminate tenuous foreground material, and are primarily used in studies of the circumgalactic medium and intergalactic medium. These spectra can be created using the AbsorptionSpectrum and LightRay analysis modules.

The AbsorptionSpectrum class and its workhorse method make_spectrum() return two arrays, one with wavelengths, the other with the normalized flux values at each of the wavelength values. It can also output a text file listing all important lines. For example, here is an absorption spectrum for the wavelength range from 900 to 1800 Angstroms made with a light ray extending from z = 0 to z = 0.4: And a zoom-in on the 1425-1450 Angstrom window:

Once a LightRay has been created traversing a dataset using the light-ray-generator, a series of arrays store the various fields of the gas parcels (represented as cells) intersected along the ray. AbsorptionSpectrum steps through each element of the LightRay's arrays and calculates the column density for the desired ion by multiplying its number density with the path length through the cell. Using these column densities along with temperatures to calculate thermal broadening, Voigt profiles are deposited onto a featureless background spectrum. By default, the peculiar velocity of the gas is included as a Doppler redshift in addition to any cosmological redshift of the data dump itself.

For features not resolved (i.e. possessing narrower width than the spectral resolution), AbsorptionSpectrum performs subgrid deposition. The subgrid deposition algorithm creates a number of smaller virtual bins; by default, the width of the virtual bins is 1/10th the width of the spectral feature.
The Voigt profile is then deposited into these virtual bins where it is resolved, and then these virtual bins are numerically integrated back to the resolution of the original spectral bin size, yielding accurate equivalent width values. AbsorptionSpectrum informs the user how many spectral features are deposited in this fashion.

To instantiate an AbsorptionSpectrum object, the arguments required are the minimum and maximum wavelengths (assumed to be in Angstroms), and the number of wavelength bins to span this range (including the endpoints):

from yt.analysis_modules.absorption_spectrum.api import AbsorptionSpectrum

sp = AbsorptionSpectrum(900.0, 1800.0, 10001)

Absorption lines and continuum features can then be added to the spectrum. To add a line, you must know some properties of the line: the rest wavelength, f-value, gamma value, and the atomic mass in amu of the atom. That line must be tied in some way to a field in the dataset you are loading, and this field must be added to the LightRay object when it is created. Below, we will add the H Lyman-alpha line, which is tied to the neutral hydrogen field ('H_number_density').

my_label = 'HI Lya'
field = 'H_number_density'
wavelength = 1215.6700 # Angstroms
f_value = 4.164E-01
gamma = 6.265e+08
mass = 1.00794

sp.add_line(my_label, field, wavelength, f_value, gamma, mass, label_threshold=1.e10)

In the above example, the field argument tells the spectrum generator which field from the ray data to use to calculate the column density. The label_threshold keyword tells the spectrum generator to add all lines above a column density of 10^10 cm^-2 to the text line list output at the end. If None is provided, as is the default, no lines of this type will be added to the text list.

Continuum features with optical depths that follow a power law can also be added. Like adding lines, you must specify details like the wavelength and the field in the dataset and LightRay that is tied to this feature. The
The wavelength refers to the location at which the continuum begins to be applied to the dataset, and as it moves to lower wavelength values, the optical depth value decreases according to the defined power law. The normalization value is the column density of the linked field which results in an optical depth of 1 at the defined wavelength. Below, we add the hydrogen Lyman continuum. my_label = 'HI Lya' field = 'H_number_density' wavelength = 912.323660 # Angstroms normalization = 1.6e17 index = 3.0 sp.add_continuum(my_label, field, wavelength, normalization, index) Once all the lines and continuua are added, it is time to make a spectrum out of some light ray data. wavelength, flux = sp.make_spectrum('lightray.h5', output_file='spectrum.fits', line_list_file='lines.txt') A spectrum will be made using the specified ray data and the wavelength and flux arrays will also be returned. If you set the optional use_peculiar_velocity keyword to False, the lines will not incorporate doppler redshifts to shift the deposition of the line features. Three output file formats are supported for writing out the spectrum: fits, hdf5, and ascii. The file format used is based on the extension provided in the output_file keyword: .fits for a fits file, .h5 for an hdf5 file, and anything else for an ascii file. The AbsorptionSpectrum analysis module can be run in parallel simply by following the procedures laid out in Parallel Computation With yt for running yt scripts in parallel. Spectrum generation is parallelized using a multi-level strategy where each absorption line is deposited by a different processor. If the number of available processors is greater than the number of lines, then the deposition of individual lines will be divided over multiple processors. This tool can be used to fit absorption spectra, particularly those generated using the ( AbsorptionSpectrum) tool. For more details on its uses and implementation please see (Egan et al. (2013)). 
If you find this tool useful we encourage you to cite accordingly.

After loading a spectrum and specifying the properties of the species used to generate it, an appropriate fit can be generated. Saving the results of a fitted spectrum for further analysis is accomplished automatically using the h5 file format: a group is made for each species that is fit, and each species group has a group for the corresponding N, b, z, and group# values.

To generate a fit for a spectrum, generate_total_fit() is called. This function controls the identification of line complexes, the fit of a series of absorption lines for each appropriate species, checks those fits, and returns the results. The initial parameter guesses used for these fits are defined in the function get_test_lines(). Also included in these parameter guesses is an initial guess of a high-column cool line overlapping a lower-column warm line, indicative of a broad Lyman-alpha (BLA) absorber.
https://yt-project.org/doc/analyzing/analysis_modules/absorption_spectrum.html
Three years ago, I wrote a post on the Engineering Windows 7 blog about the Windows 7 development process. For this go-round, we thought we'd let you hear from some of the newer members of the team, by doing an informal Q&A session with two members of our Windows Runtime Experience team, both of whom started in Windows just before we started planning Windows 8 (so, Windows 8 is their first experience with developing Windows from start to finish).

Tell me a bit about yourself. Where do you come from and how long have you been at Microsoft?

Chris: Hi, my name is Chris Edmonds. A native Oregonian, I attended school at Oregon State University (Go, Beavers!) and have had internships at NASA and Garmin. During these experiences I worked on projects ranging from robotics to avionics and did research on high-speed routing for many-core processors. Microsoft recruited me from Oregon State, and I arrived on the Windows team roughly two and a half years ago.

Mohammad: Hello, my name is Mohammad Almalkawi. I am a software design engineer in the Windows division at Microsoft. I have also been at Microsoft for about two and a half years. I graduated from the University of Illinois at Urbana-Champaign (Go, Illini!) where I was working on fault-tolerance and real-time systems integration research.

What do you work on for Windows 8?

Chris: I started working with the Windows team a few months before Windows 7 was released to manufacturing. Shortly thereafter, I joined the newly created Windows Runtime Experience team. The Runtime Experience team builds many pieces of the Windows Runtime (WinRT) infrastructure. During Windows 8 development I had the opportunity to work across many parts of the WinRT.
In the first milestone (of three), I worked to define core patterns of the WinRT system. We break the project into three milestones and divide the architecture and implementation across these milestones to get us from a whiteboard sketch to a finished product. We have to include all the work it takes to coordinate the different technologies across Windows 8. In the first milestone (M1), we designed patterns for events, object construction, asynchronous methods, and method overloading. It was important to define strong patterns for these basic concepts in order to allow each programming language that interoperates with the WinRT to expose these concepts in a natural and familiar way for their developers.

In the second milestone, I had the opportunity to build part of our deployment story for Metro style applications. Specifically, I worked on registering Metro style applications with the WinRT so that they can be launched and can interact with contracts.

The third milestone included lots of cross-group collaboration, which I learned is crucial to a project as deep and as broad as Windows 8. I worked with a team to define and implement core pieces of the application model for Metro style applications. This work ensured that Metro style apps written in different languages and on different UI platforms behave in a consistent manner with regard to contracts and application lifetime.

Mohammad: I had the opportunity to participate in Windows 8 since the very beginning. We had three major feature milestones (M1, M2, and M3) to realize the goals of Windows 8. In the first milestone, I worked on the design and development of the discovery and activation of application extensions. This WinRT infrastructure allows applications to participate in OS-supported contracts (such as search and share) and serves as a basis for exciting Windows features, including the search and share charms.
In the second milestone, I was in charge of implementing the Windows metadata resolution feature, which is a key API that ties the Windows metadata generated by the WinRT tool chain to the JavaScript and C# language projections. And in M3, I was in charge of the design and development of the namespace enumeration API, which enabled the Chakra JavaScript engine to support reflection functionality over WinRT namespaces and types. The CLR also uses this API to implement metadata resolution, and Visual Studio uses it to support IntelliSense for WinRT types.

What's a normal work day like?

Chris:.

What has been your biggest surprise?

Chris: I think the biggest thing that surprised me about working in Windows was the size of the team and the number of activities that are going on at any point in time. In working on the few features assigned to me, I had the opportunity to interact with hundreds of other people across the team to come up with specifications and solutions. It sounds really hectic (and it was a little overwhelming at first) but it always amazes me how well teams communicate together to come up with some really cool solutions to problems. When I think of the number of people who use Windows and the number of ways Windows is used, I guess it seems incredible that we get all this done with as few people as we do.

Mohammad:. Of course you are not left alone in the dark, as there are lots of support channels, domain experts, and senior engineers who are there to provide help when you need it.

How was Windows 8 different from other projects you've worked on?

Chris: Having worked mostly on smaller projects at Oregon State and in my prior internships (most code projects are small compared to Windows), the biggest difference is how much code I read every day. I find that I spend a good portion of time reading and debugging code written by other teams before I came to Microsoft, as well as going over code I wrote myself in a previous milestone.
This has really made me appreciate well-written code.

What has been your biggest challenge that you had to solve?

Mohammad: Soon after joining the team, I had to make fixes in unfamiliar code in COM activation. This code is very infrastructural, as a lot of components in Windows are built on top of it, so it was crucial that my changes would not cause regressions. This code might have seemed straightforward to experts on my team, but certainly that was not the case for a new guy like me. I had to read a lot of code, step through the debugger, and write lots of test cases to improve my understanding and confidence in making the necessary changes without breaking anything.

Can you tell me something about what it is like to come up with the plans for Windows 8?

Chris: Planning Windows 8 takes different shapes for different people on the team. As part of the planning effort, the newly formed Runtime Experience team took a week to build apps using a variety of languages, stacks, frameworks, and technologies. That's because a design tenet of Windows 8 is that it can be programmed in multiple languages. Part of the goal of this effort was to force each of us to use a language that we were not already familiar with so that we could experience the learning curve. I worked on a 3D terrain generation program using IronPython and XNA, a photo gallery app using HTML/JavaScript, and a simple 2D physics engine in C++ using GDI for drawing. From the app-building exercises, we created presentations to give to the team on the experience of building each app, along with a list of the good, bad, and ugly of each experience.

What impressed you?

Mohammad: I was very impressed by the quality of the Windows engineering system that we have in place; it supports thousands of Windows software engineers and keeps millions of lines of code in the operating system healthy with nightly builds and quality gate runs.
The automated quality gate runs include critical end-to-end tests, performance tests, application compatibility tests, static code analysis, and a few other tests that we use to quickly discover problems and to tightly control their propagation across branches via forward and reverse integrations.

What is milestone quality (MQ)?

Chris: This milestone is all about getting the code base, engineering tools, and engineering processes ready for the next product cycle. As I learned, MQ is a time to look across the code and do some housekeeping—from just cleaning up source files, to redoing abstractions that prepared us for the work we would do in Windows 8. Code is our asset, so dedicating time to maintaining that asset is pretty important. During MQ of Windows 8 I participated in three different efforts. The first was to create a system that automatically reported back code coverage numbers via an internal dashboard for our team based on our daily test passes. This was one of the first things I worked on at Microsoft, and it gave me a great opportunity to learn about our engineering systems. The second effort that I participated in was a code sanitization practice to help standardize the way we use asserts across the code base. Finally, I worked on a prototype system that would use some pieces of our IntelliSense infrastructure to automatically catalog all parts of our SDK.

What are you focusing on now?

Mohammad: Performance, performance, and performance! The features I owned are close to the bottom of the software stack and used very frequently, so their performance is very critical. Therefore, my focus now is on analyzing performance, and prototyping and integrating various performance improvements. We built things from the start to be high performance, so now we are fine-tuning that performance, given the tons of code that has been written to the infrastructure.

How do you validate the work end-to-end?
Chris: As part of a team dedicated to improving the application developer experience, it is important that we regularly take off our operating-system-developer hats and don our application-developer hats. This is done in small ways in our everyday work, but one of the most structured forms of this is the application-building weeks. Based on the initial application-building week that took place during planning, we took the time each milestone to develop an application using the WinRT, with different teams focused on different languages and APIs. Writing apps on a platform that is still in development creates some interesting challenges, and these weeks are a fun change of pace. These app-building weeks (some of which included more teams) have resulted in numerous bugs being filed, and have caused us to rethink and change some of our API guidance in order to make each developer's experience more natural and familiar.

A "bug" can be anything from a fatal crash, memory leak, or security hole, all the way to a report that "something just doesn't seem right." We treat everything like a bug and go through a process of categorizing and prioritizing these reports. The reports come from the groups in Windows building on our APIs, other groups at Microsoft, early partners such as device and PC makers, our interns (as you saw at //build/), and from people in the forums who are building apps now on the Developer Preview.

What is the most important lesson you have learned?

Mohammad: I got to experience the idea that "anything that can go wrong will go wrong," given the size and scale of the product and the large number of users (by the way, we do dogfood our work internally from the very beginning on our primary dev machines). This taught me that paying attention to details and focusing on quality in every line of code is very important for the overall stability of the product.
Of course, that is just one of many important lessons I learned so far—I’m still working my way through my first Windows release and expect to learn a few more things during the upcoming phases of the product. I can’t wait. Chris: Me too!!
http://blogs.msdn.com/b/b8/archive/2012/03/06/going-behind-the-scenes-building-windows-8.aspx?Redirected=true
Problem: In previous versions of ASP.NET, developers imported and used both custom server controls and user controls on a page by adding <%@ Register %> directives to the top of pages. Note that the first two register directives above are for user controls (implemented in .ascx files), while the last is for a custom control compiled into an assembly .dll file. Once registered, developers could then declare these controls anywhere on the page using the tagprefix and tagnames configured. This works fine, but can be a pain to manage when you want to have controls used across lots of pages within your site (especially if you ever move your .ascx files and need to update all of the registration declarations).

Solution: ASP.NET 2.0 makes control declarations much cleaner and easier to manage. Instead of duplicating them on all your pages, just declare them once within the new pages->controls section of your application's web.config file. You can declare both user controls and compiled custom controls this way. Both are fully supported by Visual Studio when you use this technique -- and both VS 2005 Web Site Projects and VS 2005 Web Application Projects support them (and show the controls in WYSIWYG mode in the designer, as well as for field declarations in code-behind files).

One thing to note above is the use of the "~" syntax with the user controls. For those of you not familiar with this notation, the "~" keyword in ASP.NET means "resolve from the application root path", and provides a good way to avoid adding "..\" syntax all over your code. You will always want/need to use it when declaring user controls within web.config files, since pages might be using the controls in different sub-directories - and so you always need to resolve paths from the application root to find the controls consistently.
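The <%@ Register %> directives and web.config snippet that this post originally showed did not survive extraction, so here is a representative reconstruction in the style of ASP.NET 2.0 (the tag prefixes, control names, and paths are hypothetical). First, the old per-page approach, with two user controls and one compiled control:

```aspx
<%@ Register TagPrefix="scottgu" TagName="header" Src="~/Controls/Header.ascx" %>
<%@ Register TagPrefix="scottgu" TagName="footer" Src="~/Controls/Footer.ascx" %>
<%@ Register TagPrefix="ControlVendor" Namespace="ControlVendor" Assembly="ControlVendorLib" %>
```

And the equivalent one-time registration in web.config:

```xml
<configuration>
  <system.web>
    <pages>
      <controls>
        <!-- User controls (.ascx) are registered by virtual path -->
        <add tagPrefix="scottgu" tagName="header" src="~/Controls/Header.ascx" />
        <add tagPrefix="scottgu" tagName="footer" src="~/Controls/Footer.ascx" />
        <!-- Compiled custom controls are registered by namespace and assembly -->
        <add tagPrefix="ControlVendor" namespace="ControlVendor" assembly="ControlVendorLib" />
      </controls>
    </pages>
  </system.web>
</configuration>
```

Any page in the site can then declare <scottgu:header ID="MyHeader" runat="server" /> without a per-page directive.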
Once you register the controls within the web.config file, you can then just use the controls on any page, master page, or user control on your site, with no registration directives required.

Hope this helps, Scott

P.S. Special thanks to Phil Haack, who blogged about this technique as well earlier this month (for those of you who don't know Phil, he helps build the very popular SubText blog engine and has a great blog).

Question: I've been using this sparingly... I'm worried that it might impact performance. Do you think so? In other words, if I register 30 user controls and 5 custom server controls in this way, will my pages slow down? Will compilation slow down? Thanks!

Thanks for the shout out Scott! Much appreciated, especially from such a great blog and blogger. Yep, I'm a ScottGu fanboy. ;)

Hi John, The good news is that there isn't any performance difference between registering them in a web.config file vs. at the top of a page. The runtime performance is exactly the same (they get compiled down to the same instructions in both scenarios). Compilation speed should also be the same.

Scott, There's a mistake in your code. In the "Solution" in which the registrations are placed in the web.config file, this line: <scott:header should be .... <scottgu:header because you used a different tag prefix ("scottgu") in the revised version.

Thanks for the info! Hi Chris, Good catch! ;-) Hopefully you got the point anyway! :-)

Thanks, There seems to be an issue with this method and the designer-generated code.

Hi Adam, There are cases when the designer can't infer the base class of a user control with VS 2005 Web Application Projects - in which case it just generates a field declaration in the code-behind of the page as type "UserControl".
To get a strongly-typed declaration of the user control, you can manually add the declaration to your code-behind class (not the .designer class) of the type you want (which is typically the UserControl code-behind class name). Once you do this, the designer will pick it up and use it as the field declaration, and you'll get full IntelliSense.

Good trick. I have been using this trick, but only for those controls which are used throughout the site. I always thought that if a user control is used at only one or two places it should be placed in the page itself, but if it is used on many pages in the site then it should be declared in the web.config file. Thanks, Vikram

With regards to the user control being declared with the incorrect data type in the designer.cs file - does your solution (manually declaring the variable in the code-behind file) "force" the invalid declaration to be removed from the designer.cs file? Or will there be a duplicate named variable with a different data type? I am using Web Application Projects and understand this to be a limitation of that project type currently. Is this something that is going to be addressed in a service pack or hotfix to the web application project?

Hi Ryan, If you declare the control type/name in your code-behind file in a VS 2005 Web Application Project, then the designer automatically avoids adding it to the .designer.cs/.vb file. This avoids any duplicate name collisions (note: you'll need to flip once to source view or design view and make a change in order to have the designer remove it from the .designer file - from then on out it will be omitted).

The reason you sometimes need to do this is that there are cases where the control is implemented inside the same assembly as the pages. You can sometimes get into an un-buildable state where the designer can't reflect on the project to infer the type (this isn't a problem with controls implemented in other projects or separate assemblies - since they are already successfully compiled).
This situation is less of a "bug" per se - and more of a design outcome of building everything within the same assembly. We are going to continue to look for ways to make this case less common - but there will always be some times when you can't reflect and get the type details. In cases like this you can just declare the field in your code-behind file and everything will work. Scott

Quite nice! I didn't know it exists. Does this add any overhead to the pages that don't implement the controls?

Hi Arash, The good news is that there is no performance penalty to using this trick. It is just as efficient as without. I actually just chatted with someone on the VS 2005 Web Application Project feature team, and he mentioned that they've added some better detection support for this in SP1. So you should see the cases where you need to declare a field in the code-behind to get a type-specific value go down.

Hi Scott, I've used another of the <add> variants - to add an assembly reference - in the past and ran into a problem. Our site is arranged as a set of individual webs - including one at the root of the site. Like this: / /admin /access. Each of those three would be a separate ASP.NET web application. Now, the contents of web.config are inherited in these circumstances, I think, so - in the area we're concerned with - the page controls set for /admin starts from the page controls set for the root web. Now, I have custom controls (or assemblies for that matter) that only exist in the root web. I get run-time errors when I try to access the sub-webs because the assemblies or whatever don't exist. I could do a <clear/> I think ... but from memory, the last time I did that, I cleared the whole set, including everything from machine.config - which wasn't quite what I wanted! Am I missing something, or is this really a problem? BTW - I read somewhere that it was a bad idea locating a web at the root (or, I suppose, having sub-webs).
Unfortunately, it meets the use requirements ... Adrian

Hi Adrian, Correct - the collection settings within a web.config file will inherit down to sub-applications by default. What you can do in your sub-applications is to use <remove/> instead of <clear/> - and just specify the setting you wish to remove. That way you won't clear everything.

Hey Scott, I have a usercontrol that exists in my master page, and I access its properties with: CType(Master, botwmaster).myusercontrol.Visible = False (for example). Doing this requires that I add a reference to the control from the page: <%@ Reference Control="~/controls/usercontrol.ascx" %> My question is, how would I add that reference in the web.config so I don't have to do that on every page? I'd love to eventually get all control declarations and references to exist in a centralized place (for those that exist on multiple pages). More food for thought - it would be great if things declared in a master page would bubble down to your aspx page - the control is registered in the master page with @register already. Thanks for all the information, Scott.

This approach seems cleaner to me than @Register'ing the same controls all over the place. So, I embarked on a 5-minute journey through my application to take the registrations out of the .ASPX pages and put them all into web.config. I ran into a problem, though, on pages where I would instantiate one of my user controls in code-behind and add it to a table cell. I've got a UC story.ascx with a public storyId that renders the content for a story - simple enough. I use the following in code-behind on various pages to stick stories in table cells: story myStory; myStory = (story)LoadControl("userControls/story.ascx"); myStory.storyId = 1234; cell.Controls.Add(myStory); However, when I moved all the UC registrations to web.config, I ran into "The type or namespace name 'story' could not be found" - can you tell me why or how else I can accomplish this? Thanks!
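Scott's <remove/> suggestion above might look like this in a sub-application's web.config (the assembly name here is hypothetical). Unlike <clear/>, which wipes everything inherited from parent configs including machine.config, <remove> drops only the one named entry:

```xml
<configuration>
  <system.web>
    <compilation>
      <assemblies>
        <!-- Drop just the root-web assembly reference inherited from
             the parent application's web.config -->
        <remove assembly="RootOnlyControls" />
      </assemblies>
    </compilation>
  </system.web>
</configuration>
```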
Hi BT, If you are dynamically loading and then type-casting the control with a VS 2005 Web Site Project, you can add a <%@ Reference %> element to the top of your page. This will add an assembly reference for you automatically and allow you to reference it like above.

Hi Dave, One option to avoid the <%@ Reference %> directive would be to use the VS 2005 Web Application Project. This compiles the entire project into a single assembly, and so avoids the need to explicitly reference pages/controls.

Hi Scott, how do I force a page to load my usercontrol's dll file from another path, instead of the /bin folder? Reasons for my wanting to do this: 1) I want to keep my control and its dll separate. 2) I don't want the website restarting each time the admin uploads a new usercontrol. Thanks.

This is very useful, but it brings some trouble along in my project: if I use a user control in another user control, and both controls are in the same directory in the project (in your example, ~/Controls/), this compile error occurs: The page '/Controls/Outer.ascx' cannot use the user control '/Controls/Inner.ascx', because it is registered in web.config and lives in the same directory as the page. I'd rather not move nested controls to a different directory. Is there a solution for this codewise, or will there be a fix in SP1? Cheers

Nice tip. I changed my pages to use it. One issue: I have a user control which uses another user control. They both reside in a folder named \Controls. I got this error: The page '/MyApp/Controls/CtlA.ascx' cannot use the user control '/MyApp/Controls/CtlB.ascx', because it is registered in web.config and lives in the same directory as the page. I just left it the old way. BTW - that was a very good error message - it allowed me to pinpoint the problem right away.

Scott - In a blog post from Aug 28, 2005 you discussed building a library of user controls and the ability to then consume them in other web applications.
Twice in the blog it was asked how to reference the user control in a page of the consuming application. For example, one entry said

Hi Scott. I am having a similar problem to a previous comment. This all works fine if I include the separate .ascx file. However, as soon as I set the build flag to not allow updating (which as a consequence does not create a separate .ascx file), I cannot get it to work. Your previous response was that ASP.NET can resolve the .ascx references at runtime. How do you register the control? Without registering it, compilation fails. Thanks in advance, Curtis. I had been trying to do this for the better part of two days, and when I saw the blog entry I thought, "Finally, I'll get the secret sauce". But your response was completely disappointing.

Hi Curtis, What I'd recommend is for you to have the user-control project be marked as updatable, and copy the .ascx file into the consuming project. You could then compile the consuming project as non-updatable. This will then remove the HTML for both the pages on the site, as well as the user control you are using.

How can one develop a distributable library of user controls and protect the source, as you claim is possible with the plethora of build options in VS2005, if you can't do this? I just want to know if it's possible or not and what one has to do.

Hi Greg, Here is a blog post that discusses how to convert a user control into a compiled control:

To those of you receiving this error: (JP) One solution I have read is to move the control out of the directory it's currently sharing with outer.ascx, but this isn't feasible in my project. I found that you can simply re-register the control inside of the outer.ascx: <%@ Register TagPrefix="Brian" TagName="Inner" Src="Controls/Inner.ascx" %> and the problem goes away.
This "trick" didn't work for me when I put the register stuff in the web.config: <add assembly="App_Web_mytest2uc.ascx.cdcab7d2" namespace="XY" tagPrefix="xy" /> error: 'The type or namespace name 'XY' could not be found'. I've packed my ascx in an assembly as described here (), but I can only instantiate it within my aspx code-behind when I register the assembly declaratively in the aspx page: <%@ Register TagPrefix="xy" Namespace="XY" Assembly="App_Web_mytest2uc.ascx.cdcab7d2" %> Any ideas?

How about Web Services? I have this Web User Control with a static method that uses LoadControl and executes some methods that use the controls inside the WUC. I am using it to render HTML which I then embed into an email. I want to do this inside a Web Service, and nothing seems to be working. The error, of course, is that the UcUserControl type is not valid in the current context, but I can't add a register directive to the asmx, I can't use an assembly directive (I get an ASP.NET runtime error: The file 'ucusercontrol.ascx.cs' cannot be processed because the code directory has not yet been built.), and declaring the user control in the web.config as in this article doesn't work. Any idea how I can use, inside a web service, the type declared in the code-behind of a web user control? Thanks.

Hi Siderite, Have you seen this article of mine: It enables you to use .ascx templates within web services to dynamically generate HTML.

Thanks, Scott! It's a nice implementation, but it uses reflection. I was looking for an ASP.NET 1.1-like use of the web user control object in other objects, but I guess that's out of the question now. It was particularly interesting to see that your code to do the same thing as mine used a completely different method. You used LoadControl, I used Controls.Add (and had a lot of problems with it, of course) and then ServerExecute instead of RenderControl... Very useful stuff. Thanks!
Hi, as you said, instead of registering a user control once per web page that we are going to use it within, it's better to register it in the web.config file. I did that and gave the tagprefix, tagname, and src attributes proper values. But the problem is: do I need to drag the user control onto my web page(s), or would I have to manually write the tagprefix and tagname etc. in the xyz.aspx page to set the value of the ID etc.? If I do it the latter way, then the values of tagprefix and tagname I specified are not picked up in the xyz.aspx page (they are not shown in the pop-up list).

This works great, but when I view the page (in Design mode) that contains the controls, the controls show up in red. An error shows for each control that states: This control cannot be displayed because its TagPrefix is not registered in this Web Form. Also, the same error shows up in the Error List. The odd thing is that I can build the solution, run the project and view the page/controls perfectly. Is there anything that can be done to get rid of this annoying error message?

Hi Richard, Can you try closing the project and re-opening it to see if it makes any difference? Are you opening things the same way in VS as they get invoked via IIS? Meaning, is the web.config file in the root of the project?

Superb tip, works a treat and really helps to make the aspx pages look much neater.

Hi Scott!!! Thanks for sharing this useful information with us.... I also want to know how I can refer to a User Control which resides in a different directory than my project root directory.

Hi Shekhar, You can do this by using the "~" syntax when referencing the controls from the root: ~/subdir/mycontrol.ascx

Thanks a lot for your reply..... The solution you have given works perfectly when controls are present in an inner directory of the root directory. What about if the user controls are present outside of the root directory? Is it possible to refer to those controls?
Thanks again for your kind help. Shekhar

Unfortunately I don't think it is possible to register user controls outside of the application root directory. Sorry!

Hi Scott, I had previously used your tips above and added all of my usercontrols into the web.config file. Great! It's a really useful feature and it worked perfectly. Now it doesn't. For some reason all of my code has stopped working, and when trying to debug, the build fails (along with IntelliSense etc.). I was adding controls dynamically, e.g. Dim News As Controls_News = LoadControl("~/Controls/News.ascx") where Controls_News is the partial class name. Adding a register directive to the page solves this issue; I am just wondering if you or any of your readers here would know why it worked for a while and then stopped? I created a new project, cleaned the ASP.NET temp files directory and so on, and the problem persists.
http://weblogs.asp.net/scottgu/archive/2006/11/26/tip-trick-how-to-register-user-controls-and-custom-controls-in-web-config.aspx
On Thu, Jul 5, 2018 at 9:39 AM Dave Hansen <dave.hansen@intel.com> wrote:
>
> On 07/02/2018 01:29 PM, Pavel Tatashin wrote:
> > On Mon, Jul 2, 2018 at 4:00 PM Dave Hansen <dave.hansen@intel.com> wrote:
> >>> +	unsigned long size = sizeof(struct page) * PAGES_PER_SECTION;
> >>> +	unsigned long pnum, map_index = 0;
> >>> +	void *vmemmap_buf_start;
> >>> +
> >>> +	size = ALIGN(size, PMD_SIZE) * map_count;
> >>> +	vmemmap_buf_start = __earlyonly_bootmem_alloc(nid, size,
> >>> +						      PMD_SIZE,
> >>> +						      __pa(MAX_DMA_ADDRESS));
> >>
> >> Let's not repeat the mistakes of the previous version of the code.
> >> Please explain why we are aligning this. Also,
> >> __earlyonly_bootmem_alloc()->memblock_virt_alloc_try_nid_raw() claims to
> >> be aligning the size. Do we also need to do it here?
> >>
> >> Yes, I know the old code did this, but this is the cost of doing a
> >> rewrite. :)
> >
> > Actually, I was thinking about this particular case when I was
> > rewriting this code. Here we align size before multiplying by
> > map_count; memblock_virt_alloc_try_nid_raw() aligns after. So, we must
> > have both as they are different.
>
> That's a good point that they do different things.
>
> But, which behavior of the two different things is the one we _want_?

We definitely want the first one:

	size = ALIGN(size, PMD_SIZE) * map_count;

The alignment in memblock is not strictly needed for this case, but it
already comes with the memblock allocator.

> >>> +	if (vmemmap_buf_start) {
> >>> +		vmemmap_buf = vmemmap_buf_start;
> >>> +		vmemmap_buf_end = vmemmap_buf_start + size;
> >>> +	}
> >>
> >> It would be nice to call out that these are globals that other code
> >> picks up.
> >
> > I do not like these globals, they should have specific functions that
> > access them only, something like:
> >
> > static struct {
> >	buffer;
> >	buffer_end;
> > } vmemmap_buffer;
> >
> > vmemmap_buffer_init()  allocate buffer
> > vmemmap_buffer_alloc() return NULL if buffer is empty
> > vmemmap_buffer_fini()
> >
> > Call vmemmap_buffer_init() and vmemmap_buffer_fini() from
> > sparse_populate_node() and vmemmap_buffer_alloc() from
> > vmemmap_alloc_block_buf().
> >
> > But, it should be a separate patch. If you would like I can add it to
> > this series, or submit separately.
>
> Seems like a nice cleanup, but I don't think it needs to be done here.
>
> >>> + * Return map for pnum section. sparse_populate_node() has populated memory map
> >>> + * in this node, we simply do pnum to struct page conversion.
> >>> + */
> >>> +struct page * __init sparse_populate_node_section(struct page *map_base,
> >>> +						  unsigned long map_index,
> >>> +						  unsigned long pnum,
> >>> +						  int nid)
> >>> +{
> >>> +	return pfn_to_page(section_nr_to_pfn(pnum));
> >>> +}
> >>
> >> What is up with all of the unused arguments to this function?
> >
> > Because the same function is called from non-vmemmap sparse code.
>
> That's probably good to call out in the patch description if not there
> already.
>
> >>> diff --git a/mm/sparse.c b/mm/sparse.c
> >>> index d18e2697a781..c18d92b8ab9b 100644
> >>> --- a/mm/sparse.c
> >>> +++ b/mm/sparse.c
> >>> @@ -456,6 +456,43 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
> >>>				__func__);
> >>>		}
> >>>	}
> >>> +
> >>> +static unsigned long section_map_size(void)
> >>> +{
> >>> +	return PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION);
> >>> +}
> >>
> >> Seems like if we have this, we should use it wherever possible, like
> >> sparse_populate_node().
> >
> > It is used in sparse_populate_node():
> >
> > 401 struct page * __init sparse_populate_node(unsigned long pnum_begin,
> > 406         return memblock_virt_alloc_try_nid_raw(section_map_size() * map_count,
> > 407                                                PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
> > 408                                                BOOTMEM_ALLOC_ACCESSIBLE, nid);
>
> I missed the PAGE_ALIGN() until now. That really needs a comment
> calling out how it's not really the map size but the *allocation* size
> of a single section's map.
>
> It probably also needs a name like section_memmap_allocation_size() or
> something to differentiate it from the *used* size.
>
> >>> +/*
> >>> + * Try to allocate all struct pages for this node, if this fails, we will
> >>> + * be allocating one section at a time in sparse_populate_node_section().
> >>> + */
> >>> +struct page * __init sparse_populate_node(unsigned long pnum_begin,
> >>> +					  unsigned long pnum_end,
> >>> +					  unsigned long map_count,
> >>> +					  int nid)
> >>> +{
> >>> +	return memblock_virt_alloc_try_nid_raw(section_map_size() * map_count,
> >>> +					       PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
> >>> +					       BOOTMEM_ALLOC_ACCESSIBLE, nid);
> >>> +}
> >>> +
> >>> +/*
> >>> + * Return map for pnum section. map_base is not NULL if we could allocate map
> >>> + * for this node together. Otherwise we allocate one section at a time.
> >>> + * map_index is the index of pnum in this node counting only present sections.
> >>> + */
> >>> +struct page * __init sparse_populate_node_section(struct page *map_base,
> >>> +						  unsigned long map_index,
> >>> +						  unsigned long pnum,
> >>> +						  int nid)
> >>> +{
> >>> +	if (map_base) {
> >>> +		unsigned long offset = section_map_size() * map_index;
> >>> +
> >>> +		return (struct page *)((char *)map_base + offset);
> >>> +	}
> >>> +	return sparse_mem_map_populate(pnum, nid, NULL);
> >>
> >> Oh, you have a vmemmap and non-vmemmap version.
> >>
> >> BTW, can't the whole map base calculation just be replaced with:
> >>
> >>	return &map_base[PAGES_PER_SECTION * map_index];
> >
> > Unfortunately no. Because map_base might be allocated in chunks
> > larger than PAGES_PER_SECTION * sizeof(struct page). See: PAGE_ALIGN()
> > in section_map_size()
>
> Good point.
>
> Oh, well, can you at least get rid of the superfluous "(char *)" cast?
> That should make the whole thing a bit less onerous.

I will see what can be done; if it is not going to be cleaner, I will
keep the cast.

Thank you,
Pavel
http://lkml.org/lkml/2018/7/9/572
Plotter Control (obsolete)

lw()

- Name:
    lw - laser writer graphical output (or HP pen plotter)

- Syntax:
    h.lw(file)
    h.lw(file, device)
    h.lw()

- Description:
    h.lw(file, device) opens a file to keep a copy of subsequent plots (file is a string variable or a name enclosed in double quotes). All graphs which are generated on the screen are saved in this file in a format given by the integer value of the device argument.

    - device = 1
        Hewlett Packard pen plotter style.
    - device = 2
        Fig style (Fig is a public domain graphics program available on the SUN computer). The filter f2ps translates Fig to PostScript.
    - device = 3
        Codraw style. Files in this style can be read into the PC program, CODRAW. The file should be opened with the extension .DRA.

    lw keeps copying every plot to the file until the file is closed with the command h.lw(). Note that erasing the screen with h.plt(-3) or a Control-e will throw away whatever is in the file and restart the file at the beginning. Therefore, lw keeps an accurate representation of the current graphic status of the screen. After setting the device once, it remains the same unless changed again by another call with two arguments. The default device is 2.

- Example:
    Suppose an HP plotter is connected to serial port COM1:. Then the following procedure will plot whatever graphics information happens to be on the screen (not normal text).

    from neuron import h, gui
    import os

    # function for hp style plotter
    def hp():
        h.plt(-1)
        h.lw()
        os.system("cp temp com1:")
        h.lw("temp")

    h.lw("temp", 1)

    Notice that the above procedure closes a file, prints it, and then re-opens temp. The initial direct command makes sure the file is open the first time hp is called.

Warning

It is often necessary to end all the plotting with a h.plt(-1) command before closing the file to ensure that the last line drawing is properly terminated.
In our hands the HP plotter works well at 9600 baud and with the line MODE COM1:9600,,,,P in the autoexec.bat file.
https://www.neuron.yale.edu/neuron/static/py_doc/programming/io/plotters.html
Whether it’s building a 3D scanning system with a Kinect, or using a USB TV tuner dongle for software defined radio, there are a lot of interesting off-schedule uses for commodity hardware. The latest comes from the fruitful mind of [sjMoquin] and a Lexmark N2050 WiFi card that runs Linux.

This build started off with a Lexmark X6570 all-in-one printer available for about $100 USD on eBay. This printer comes packaged with a Lexmark N2050 WiFi card running BusyBox. After soldering a few wires to the USB/UART pins on the N2050, [sjMoquin] had a very cheap but highly useful single board computer running Linux.

There is still a little more work to be done – the WiFi and USB on the N2050 aren’t currently supported. [sjMoquin] and [Julia Longtin] are working on that, so a fully functional embedded Linux board based on a printer’s WiFi card should be available soon. It might be time to hit up eBay for a few of these cards, you know.

Neat hack but I can think of many off the shelf devices that will give you linux and wifi in a <$100 package (or even <$50).

True the DD-WRT website has a long list of candidate hardware

Man, you trolls just never get it! So what if there is a better or cheaper way? This is a DIFFERENT way to futz with hardware on hand — where’s the fun in making the same build twice?

Nail squarely on the head. This is an awesome build; Hack-points for resourcefulness and re-purposing.

Look up Sheevaplug. and then realize there are gobs of <$100 linux computers available all over the place.

[Brian], I guess it WAS you that started the whole $100 bit… the authors of this hack never suggested that you BUY a printer to do this! But in any case… that’s NOT THE POINT. This hack underscores the presence of perfectly respectable embedded linux systems in many pieces of everyday hardware. In this case, a piece of hardware that is sold below cost as a way of making money from ink cartridges. More of these, please! – and to [sjMoquin]: “We salute you”.
Captain Obvious to the rescue again.

Except missing the point of HaD once again. Thanks, we really appreciate your efforts here fartface.

Yes, but how many will get you a Linux computer AND laser printer for a hundred dollars?

Crap, I don’t know why I thought that was a laser printer. Inkjet is less compelling.

The link won’t load at the moment, but sounds a great hack. I will have to find one of those boards.

yah but who doesnt have an old wifi printer laying around

Or you can get ~20USD router (TP-Link TL-WR703 for example) and get linux with wifi, ethernet, usb, serial and with usb peripherals also audio, video and many more.

if the printer is free… (curbside or donation and yes i do find em curbside, and working) …then this hack shows true hardcore hardware hackers/modifiers/reuse-recycle-eers at thier best! keep up the good work, and in case people dont get it,… the true dream of us kind of people is to be able to pick up any random curbside device with a uC in it and be able to remove it and use it inside some project instead of the 5$. anyone can place an order for 5$, but it takes “true grit” for someone to rework an idea to make use of a random curbside sub-module (sub-circuitboard) THIS PROJECT IS AN EXAMPLE OF WHY I LOVE Hack.A.Day.!

Sometimes it seems that there are two kinds of hackers: Those just scraping by, and those who can afford to blow $3000 on materials for a project and then forget to build it. The ones without the money who have to scrounge are typically a bit more capable and driven. I say this as one of the latter. There are thousands and thousands of old guys out there who have fully equipped machine shops with CNC mills and all the tools needed to build anything from birdhouses to a full sized airplane, minisub or giant robot… but it all sits there gathering dust because they’re no longer motivated to invent and tinker. And I say this as a guy who has easily wasted 30% or more of my lifetime income on stupid tech projects and ridiculous tools that I later sold for pennies on the dollar to kids that took the time to tell me their dreams and could convince me that they had the flame. The guys who ask why you’d bother aren’t all bad. It’s just that every dollar you earn eats away a little at your dreams of building a spaceship; every year you’ve been able to afford to empty a radio shack of small parts makes you forget the joy of looking at an old PCB and thinking “God, look at all these parts!” There are exceptions, thank god. Go Space-X!

Yea when the apocalypse comes (or your pay cheque hasn’t been deposited yet) and you can’t order the exact parts and tools off the internet people like sjMoquin will still be hacking.

because a oversized linux sbc computer will really come in handy during the apocalypse…

Matt, to be fair… if novels and movies have taught us anything about our survival, it’s that large single board computers will be venerated after the apocalypse. Also, Apple computer products can be used by everyone from Will Smith to a rag-tag band of Nazi Space Zombies to control spacecraft of both domestic and imported models, as well as make contact with alien civilizations. Scrounging is a necessary and important strain of human behavior. If I’m ever stuck rebuilding civilization from scratch, I’d want to pick those guys for the lifeboats.

PS to charlie: It already ran linux. I can’t think of any modern printer line that isn’t running hacked 2.2 or newer kernels.
(Just made that up) It seems to have something to do with proving prior patents don’t apply to your product. Nice link about turning Linux routers into usful computing devices This is pretty cool. I bought one of these printers at my local goodwill a few years ago for10$ thinking I could do the same thing. Fizzled out, but has been a great printer since then. And now, when it dies, someone else will have done the legwork for me :-) And this is why it is a cool hack. Sure you can buy one on ebay but you might run into one at a goodwill, on the street, or at a garage sale. murphy’s law says… lol hint: the day after your used pruchase arrives,,, free one on the curb;) nice. I am always saying people on my country waste lots of good stuff throwing it away, instead of re use/ recycle. Such behavior do not match with our economic situation (3rd world) this hack is a good example how to re use stuff that people usually throw away. def i will follow this project, thanks for sharing Now, that’s the kind of hack I want to see on HaD! It’s not really right to compare getting something out of old hardware with products you can buy if they do the same thing. It’s a totally different kind of usage and posts such as these can give others opportunity to hack their own. On the other hand, it is fair to compare a new product with another in terms of cost and features. Isn’t it ironic how a printer company who refuses to provide Linux drivers, has had their equipment hacked to run Linux?? I LOVE IT!!!! Wasn’t hacked, the card was already running embedded Linux. But you are right on about the driver issue. If it costs $100 it’s not really free, now is it? The card only is 10$ on eBay (about 1,5h remaining). I really like this kind of presentation. What intrigues me is the fact the board in question is also host to a LAN port{unpopulated}that very well may be a easy to find generic like that found in embedded boards similar to this one. 
And also I had noticed that in the boot up attempt that the board was trying to load info from the printer in terms of hosts and device details ; makes for an interesting idea on using the card as an add on for a weather station etc. the external parts being sensors and what not tied to a peripheral board or w/e. i have been thinking of doing this type of thing with screen and board from a printer i had seen by a dumpster. found some doc’s for the screen but not the board it attaches to. the screen is touch and the board it goes to seems to be a type of motherboard (imbedded Linux OS ?). haven’t been able to figure out how to power it up plus i basically ripped it out of the printer lol. i can post some pics of it if anyone is interested and has time to give some pointers . If the link to our wiki doesn’t work for you, just try again in a few minutes. I’ve not noticed any downtime, and the box that is hosting the wiki seems to be at it’s normal CPU/Mem/Swap load. It is a small VPS, so the number of simultaneous Apache clients is limited, but it will automatically recover as soon as it isn’t hammered into the ground. Don’t off and brake my mind. I just stumbled upon this page and Im so glad to see it. I am working on a similar project. I am using the control panel from a lexmark s605. after taking this printer apart I found not only does it have a nice wireless card but the card just uses a mini B connector to plug into the control panel. Also the controll panel is using an arm9 series cpu and decent ram and a decent rom. Ive been trying to flash an image of android or some other form of linux on it and turn it into a little tablet. when I finish I am definitely posting it here! Oh I also found out you dont need the main board to supply power to the control panel there is a main power input you can use instead you dont even need the printers main board. All of the mundane “Appliance Operators” are in a world of not even suspecting what is “Inside” things. 
We live “Inside” all things. From which- perhaps the largest divergence between Hackers and Mundanes is in our cherishing knowing not only what IS inside, It’s the being able to DO THINGS with that knowledge. Armchair Hackers have a place by intelligent Commentary but the actual Do-Ocracy is where many of us live. I have a stack of Wireless Printers awaiting the day when they get replaced by CIS boxes- and then they may be hacked into controllers. One potential fate being a brainboard for my pool cleaner. has anyone else done this or had the link come back up or alternative i cant seem to fined anything on it
http://hackaday.com/2012/06/09/free-linux-computer-from-a-printers-wifi-card/?like=1&source=post_flair&_wpnonce=ca8cc7cdd9
NAME
     CTASSERT - compile time assertion macro

SYNOPSIS
     #include <sys/param.h>
     #include <sys/systm.h>

     CTASSERT(expression);

DESCRIPTION
     The CTASSERT() macro evaluates expression at compile time and causes a
     compiler error if it is false. The CTASSERT() macro is useful for
     asserting the size or alignment of important data structures and
     variables during compilation, which would otherwise cause the code to
     fail at run time.

IMPLEMENTATION NOTES
     The CTASSERT() macro should not be used in a header file. It is
     implemented using a dummy typedef, with a name (based on line number)
     that may conflict with a CTASSERT() in a source file including that
     header.

EXAMPLES
     Assert that the size of the uuid structure is 16 bytes.

     CTASSERT(sizeof(struct uuid) == 16);

SEE ALSO
     KASSERT(9)

AUTHORS
     This manual page was written by Hiten M. Pandya <hmp@FreeBSD.org>.
http://manpages.ubuntu.com/manpages/maverick/man9/CTASSERT.9freebsd.html
Practical Object Oriented Design for Ruby

In the past week at Launch Academy we dived into object oriented programming. Here is a definition by tutorialspoint:

Ruby is pure object-oriented language and everything appears to Ruby as an object. Every value in Ruby is an object, even the most primitive things: strings, numbers and even true and false. Even a class itself is an object that is an instance of the Class class.

My definition is that classes are there to help organize and structure the program so that it may be easily consumed by others. Today's post is about POODR, which adds on top of my definition. It is a way to organize and structure a program so that it may be easily consumed AND changed as new features are added. I love that! As programmers, not only should we be thinking of making a working app, but also an app that has low replacement cost in the future.

The idea is to write code that is SOLID (Single responsibility, Open-closed, Liskov substitution, Interface segregation, and Dependency inversion). I'm currently focusing on the single responsibility section of SOLID. I find it difficult as a programming beginner to write classes that are as loosely coupled as I want them to be. I started with classes that were family and knew everything about each other. However, with some easy to implement tricks and tips from the book I am learning to write more loosely coupled programs.

For instance, I wrote a program about a company with many employees, which wanted to know if each employee owed taxes or was due a refund. My first instinct was to create a company class and an employee class. I was having difficulty instantiating a single employee. The problem was that I was iterating and saving the employees into a single hash, then passing the hash to the employee class. This caused 2 problems. First, all my employees were a single instance.
Second, because they are a single instance, when I passed the instance to the Employee class, the employee class was not reading in one employee at a time, but the whole blob.

class Company
  def load_data(file)
    employees = []
    CSV.foreach(file, :headers => true) do |row|
      employee = row.to_hash
      employees << employee
    end
    employees
  end
end

Employee.new(employees)

For an experienced developer the solution was right in front of them. The fix was quite simple. I just needed to change where I was instantiating the object. The previous code changed to the following.

class Company
  def self.load_data(file)
    employees = []
    CSV.foreach(file, :headers => true) do |row|
      employee = Employee.new(row.to_hash)
      employees << employee
    end
    employees
  end
end

This instantiates each employee separately as I pull data from the CSV file. Now when I pass my employee to the Employee class, it will be one instance of an employee. The benefit is that I won't have to do an iterator in my employee class; instead I can iterate inside my company class.

Another trick I found very useful was passing in hashes as arguments instead of multiple variables. The benefit is that you don't have to worry about the order in which you pass in the arguments. If you don't pass in a value for a specific instance variable, you can simply give it a default value.

class Employee
  def initialize(arguments)
    @employee = arguments['employee'] || 'missing name'
    @tax_owed = arguments['tax_owed'] || '0'
    @tax_paid = arguments['tax_paid'] || '0'
  end
end

Hope you learned something from that. If you have any cool tricks feel free to share in the comments. Happy coding!

John
http://jmoon90.github.io/blog/2013/12/08/practical-object-oriented-design-for-ruby/
Homework 4

Due by 11:59pm on Thursday, 7/7

Instructions. See Lab 0 for instructions on submitting assignments.

Using OK: If you have any questions about using OK, please refer to this guide.

Readings: You might find the following references useful:

Required Questions

Acknowledgements. This interval arithmetic example is based on a classic problem.

def str_interval(x):
    """Return a string representation of interval x."""
    return '{0} to {1}'.format(lower_bound(x), upper_bound(x))

def add_interval(x, y):
    """Return an interval that contains the sum of any value in interval x and
    any value in interval y."""
    lower = lower_bound(x) + lower_bound(y)
    upper = upper_bound(x) + upper_bound(y)
    return interval(lower, upper)

Question 1

Alyssa's program is incomplete because she has not specified the implementation of the interval abstraction. She has implemented the constructor for you; fill in the implementation of the selectors.

def interval(a, b):
    """Construct an interval from a to b."""
    return [a, b]

def lower_bound(x):
    """Return the lower bound of interval x."""
    "*** YOUR CODE HERE ***"

def upper_bound(x):
    """Return the upper bound of interval x."""
    "*** YOUR CODE HERE ***"

Use OK to unlock and test your code:

python3 ok -q interval -u
python3 ok -q interval

Louis Reasoner has also provided an implementation of interval multiplication. Beware: there are some data abstraction violations, so help him fix his code before someone sets it on fire.
def mul_interval(x, y):
    """Return the interval that contains the product of any value in x and any
    value in y."""
    p1 = x[0] * y[0]
    p2 = x[0] * y[1]
    p3 = x[1] * y[0]
    p4 = x[1] * y[1]
    return [min(p1, p2, p3, p4), max(p1, p2, p3, p4)]

Use OK to unlock and test your code:

python3 ok -q mul_interval -u
python3 ok -q mul_interval

Question 2

3."""
    "*** YOUR CODE HERE ***"
    reciprocal_y = interval(1/upper_bound(y), 1/lower_bound(y))
    return mul_interval(x, reciprocal_y)

Use OK to unlock and test your code:

python3 ok -q div_interval -u
python3 ok -q div_interval

Question 4

r1 and r2, and show that par1 and par2 can give different results.

def check_par():
    """Return two intervals that give different results for parallel resistors.

    >>> r1, r2 = check_par()
    >>> x = par1(r1, r2)
    >>> y = par2(r1, r2)
    >>> lower_bound(x) != lower_bound(y) or upper_bound(x) != upper_bound(y)
    True
    """
    r1 = interval(1, 1)  # Replace this line!
    r2 = interval(1, 1)  # Replace this line!
    return r1, r2

Use OK to test your code:

python3 ok -q check_par

Question 5

multiple reference problem..."""

Question 6

***"

Use OK to test your code:

python3 ok -q quadratic

Extra Questions

Extra questions are not worth extra credit and are entirely optional. They are designed to challenge you to think creatively!

Question 7

***"

Use OK to test your code:

python3 ok -q polynomial
https://inst.eecs.berkeley.edu/~cs61a/su16/hw/hw04/
hgdistver 0.25

obsoleted by setuptools_scm

Warning: this module is superseded by setuptools_scm

This module is a simple drop-in to support setup.py in mercurial and git based projects. Alternatively it can be a setup time requirement.

It extracts the last tag as well as the distance to it in commits from the scm, and uses these to calculate a version number. By default, it will increment the last component of the version by one and append .dev{distance}; in case the last component is .dev, the version will be unchanged.

This requires always using all components in tags (i.e. 2.0.0 instead of 2.0) to avoid mistakenly releasing a higher version (i.e. 2.1.devX instead of 2.0.1.devX).

It uses 4 strategies to achieve its task:

- try to directly ask hg for the tag/distance
- try to infer it from the .hg_archival.txt file
- try to read the exact version from the cache file if it exists
- try to read the exact version from the 'PKG-INFO' file as generated by setup.py sdists (this is a nasty abuse)

The most simple usage is:

from setuptools import setup
from hgdistver import get_version

setup(
    ...,
    version=get_version(),
    ...,
)

get_version takes the optional argument cachefile, which causes it to store the version info in a python script instead of abusing PKG-INFO from a sdist.

The setup requirement usage is:

from setuptools import setup

setup(
    ...,
    get_version_from_hg=True,
    setup_requires=['hgdistver'],
    ...,
)

The requirement uses the setup argument cache_hg_version_to instead of cachefile.

- Author: Ronny Pfannschmidt
- License: MIT
- Categories
- Package Index Owner: ronny
- DOAP record: hgdistver-0.25.xml
https://pypi.python.org/pypi/hgdistver/
October 10, 2018

Single Round Match 739 Editorials

SRM 739 was held on October 10th. Thanks to Blue.Mary for the problems and majk for testing the round and writing the editorials.

HungryCowsEasy – Div. 2 Easy

The limits are small, so we can afford iterating over all cow-barn pairs. For each such pair, we calculate the cow's distance from the barn. For a fixed cow, we select the barn with the smallest distance. There can be at most two such barns, and if there are two, we pick the one with the smaller x-coordinate.

class HungryCowsEasy {
public:
    vector<int> findFood(vector<int> cowPositions, vector<int> barnPositions) {
        vector<int> ans;
        for (int cow: cowPositions) {
            int best = 0;
            for (int i = 0; i < barnPositions.size(); ++i) {
                int curPos = barnPositions[i];
                int bestPos = barnPositions[best];
                if (abs(curPos-cow) < abs(bestPos-cow)) best = i;
                if (abs(curPos-cow) == abs(bestPos-cow) && curPos < bestPos) best = i;
            }
            ans.push_back(best);
        }
        return ans;
    }
};

The complexity of this approach is O(n^2). It can also be solved in O(n log n), or even in O(n) if both arrays were initially sorted.

ForumPostMedium – Div. 2 Medium

We start by converting the timestamps into a format that is easier to work with: the number of seconds from midnight. Consider the difference between the two timestamps. We can observe that each potential label is used on a single interval. One can thus use a few simple if statements. One slight complication is that the post might be from a previous day. In that case, the number of seconds between the (yesterday's) midnight and the posting time is larger than the number of seconds between the (today's) midnight and the current time, and we need to add 86400 (the number of seconds in a day) to the difference.
class ForumPostMedium {
public:
    string getShownPostTime(string currentTime, string exactPostTime) {
        stringstream ct(currentTime), ept(exactPostTime);
        int h, m, s;
        char c, d;
        ct >> h >> c >> m >> d >> s;
        int t1 = 3600*h + 60*m + s;
        ept >> h >> c >> m >> d >> s;
        int t2 = 3600*h + 60*m + s;
        int diff = t1 - t2;
        if (diff < 0) diff += 24 * 60 * 60;
        if (diff < 60) {
            return "few seconds ago";
        } else if (diff < 60 * 60) {
            stringstream ans;
            ans << diff / 60 << " minutes ago";
            return ans.str();
        } else {
            stringstream ans;
            ans << diff / 60 / 60 << " hours ago";
            return ans.str();
        }
    }
};

The complexity of this approach is O(1).

CheckPolygon – Div. 2 Hard

In this task, one needs to carefully translate the requirements from words into code. To prevent any nasty surprises, we should perform all computations on integers, and avoid floats.

First, a few definitions:

typedef pair<ll, ll> Point;
#define x first
#define y second

We use a simple orientation test that returns a positive value for a left hand turn, a negative value for a right hand turn, and zero for collinear points.

int orientation(Point a, Point b, Point c) {
    auto o = (b.x-a.x)*(c.y-a.y) - (b.y-a.y)*(c.x-a.x);
    return (o>0) - (o<0);
}

bool collinear(Point a, Point b, Point c) {
    return orientation(a, b, c) == 0;
}

Next we need a test to check whether a point lies on a segment. To check that, all three points must be collinear, and both the x and y-coordinates of the point have to be within the bounding box of the segment:

bool onSegment(Point a, Point b, Point c) {
    return orientation(a, c, b) == 0 &&
           ((c.x <= max(a.x, b.x) && c.x >= min(a.x, b.x)) &&
            (c.y <= max(a.y, b.y) && c.y >= min(a.y, b.y)));
}

To determine whether two segments intersect, we first check whether any of the four points lies on the other segment.
Then, we can use the fact that the segments intersect if and only if the points a and b are in different half-planes determined by the line cd, and the points c and d are in different half-planes determined by the line ab. This condition is easily checked using orientation tests:

bool segmentsIntersect(Point a, Point b, Point c, Point d) {
    return onSegment(a, b, c) || onSegment(a, b, d) ||
           onSegment(c, d, a) || onSegment(c, d, b) ||
           (orientation(a, b, c) != orientation(a, b, d) &&
            orientation(c, d, a) != orientation(c, d, b));
}

Next we need a method to calculate the area. Since all the points are at integer coordinates, the area is an integer or a half of an integer, so we can again perform all calculations in integers.

ll doubleSignedArea(int N, vector<Point> P) {
    ll area = 0;
    for (int i = 0; i < N; ++i)
        area += P[i].x * P[i+1].y - P[i+1].x * P[i].y;
    return area;
}

And now we put everything together. We verify two things:

- No three consecutive points are collinear.
- No two non-consecutive line segments intersect.

If both conditions are satisfied, we report the area.

class CheckPolygon {
public:
    string check(vector<int> X, vector<int> Y) {
        int N = X.size();
        vector<Point> P(N+2);
        for (int i = 0; i < N; ++i) P[i] = {X[i], Y[i]};
        P[N] = P[0];
        P[N+1] = P[1];
        for (int i = 0; i < N; ++i) {
            if (collinear(P[i], P[i+1], P[i+2])) return "Not simple";
            for (int j = i+2; j < N; ++j) {
                if (i == 0 && j == N-1) continue;
                if (segmentsIntersect(P[i], P[i+1], P[j], P[j+1])) return "Not simple";
            }
        }
        ll area = abs(doubleSignedArea(N, P));
        stringstream s;
        s << area/2 << '.' << "05"[area%2];
        return s.str();
    }
};

The complexity of this approach is clearly O(n^2).

ForumPostEasy – Div. 1 Easy

The obvious thing to do in this task is to try every possible starting time and see whether it matches the observations. Being the lexicographically smallest time is the same as being the closest time to midnight.
Note that we should parse the input just once to avoid having too large a constant in the time complexity.

class ForumPostEasy {
public:
    string getCurrentTime(vector<string> exactPostTime, vector<string> showPostTime) {
        vector<int> Times, Result;
        for (int i = 0; i < exactPostTime.size(); ++i) {
            stringstream ept(exactPostTime[i]);
            int h, m, s;
            char c, d;
            ept >> h >> c >> m >> d >> s;
            int t = 3600*h + 60*m + s;
            Times.push_back(t);
            char ch = showPostTime[i][showPostTime[i].size()-6];
            if (ch == 'd') {
                Result.push_back(0);
            } else {
                stringstream spt(showPostTime[i]);
                int d;
                spt >> d;
                if (ch == 'e') Result.push_back(d);
                else Result.push_back(-d);
            }
        }
        for (int t = 0; t < 86400; ++t) {
            bool ok = true;
            for (int i = 0; i < exactPostTime.size() && ok; ++i) {
                int exp = 0, diff = t - Times[i];
                if (diff < 0) diff += 86400;
                if (diff >= 3600) exp = -(diff / 3600);
                else if (diff >= 60) exp = diff / 60;
                ok &= exp == Result[i];
            }
            if (ok) {
                stringstream ans;
                ans << t / 36000 << (t / 3600)%10 << ':' << (t / 600)%6 << (t / 60)%10 << ':' << (t / 10)%6 << t%10;
                return ans.str();
            }
        }
        return "impossible";
    }
};

The complexity of this approach is O(t n), where t is the number of seconds in a day.

HungryCowsMedium – Div. 1 Medium

We can binary search on the answer. How do we check whether the cows can make it in time? First note that if a cow has appetite a_i and a barn has coordinate x_j, then the cow cannot possibly eat at this barn if a_i + x_j > t. For this reason, we sort all the barns by increasing coordinate and all cows by decreasing appetite, and use the following greedy algorithm.

We pick k such that sum_{i=1}^{k} a_i <= t - x_1. Then k of the cows with the most appetite will eat their full portion in the first barn. If there is time left in this barn, we take the (k+1)-th cow and use the remaining capacity. To have the maximum flexibility, it is clear that this is the first cow that should eat. In the next barn, this half-fed cow needs to eat last, and reduces the capacity accordingly.
To show that this strategy is indeed optimal, one needs to use the exchange argument to show that we can avoid the following configurations in an optimal solution:

- a cow that eats at both the i-th and the j-th barn, where j - i >= 2
- two cows that each eat at both the i-th and the (i+1)-th barn
- cows with a_i > a_j eating (at least partially) at barns x_k > x_l, respectively

    class HungryCowsMedium {
    public:
        ll getWellFedTime(vector<ll> C, vector<ll> B) {
            sort(C.begin(), C.end());
            sort(B.begin(), B.end());
            return binary_search_smallest(0LL, ll(4e11), [&](ll T) {
                auto cow = C.rbegin();
                ll time = 0;
                for (auto barn = B.begin(); barn != B.end() && cow != C.rend(); ++barn) {
                    if (*cow > T - *barn) return false;
                    time += T - *barn - *cow;
                    while (++cow != C.rend() && *cow <= time) time -= *cow;
                }
                return cow == C.rend();
            });
        }
    };

MakePolygon – Div. 1 Hard

A bit of trial and error suggests that on an infinite grid, the optimal solution has area (N-2)/2. The question is how to put it into a relatively small grid, and how to avoid neighbouring segments being collinear. One approach would be to make an inward spiral, such as this one with 572 points, built from blocks of 8 points in a 2×5 rectangle. If fewer than 500 points are requested, we may omit some of the points from either side. Keep in mind that this may introduce two collinear consecutive segments, i.e. when we remove the four points at the inner end of a spiral. To combat this, we may either remove some points from both ends of a spiral and check whether the conditions hold, or we may be more careful in the spiral construction. The approach used in the solution below is to not generate all the six points in a rectangle if we are approaching the correct number of points. To do that in a manageable fashion, we grow two sides of the polygon, called left and right, in alternating fashion, and stop when the needed number of points is reached. Afterwards, the left and right sides of the polygon are merged together.
    class MakePolygon {
    public:
        int N;
        vector<int> L, R;

        bool space(int n = 0) { return (L.size() + R.size() + n < N); }

        void left(int x, int y) {
            if (space() && (L.empty() || L.back() != 100*x+y)
                    && x >= 1 && x <= 25 && y >= 1 && y <= 25)
                L.push_back(100*x+y);
        }

        void right(int x, int y) {
            if (space() && (R.empty() || R.back() != 100*x+y)
                    && x >= 1 && x <= 25 && y >= 1 && y <= 25)
                R.push_back(100*x+y);
        }

        void yway(int x, int y, int d, int end) {
            bool beg = true;
            while (y != end) {
                left(x, y);
                right(x+d, y);
                if (!beg && space(4)) left(x-d, y);
                if (!beg && space(3)) left(x, y+d);
                if (!beg && space(5)) right(x+d+d, y);
                if (!beg && space(5) && y+d+d != end) right(x+d+d+d, y+d);
                left(x+d, y+d);
                right(x+d+d, y+d);
                y += d+d;
                beg = false;
            }
        }

        void xway(int x, int y, int d, int end) {
            bool beg = true;
            while (x != end) {
                if (!beg && space(4)) right(x, y-d-d);
                if (!beg && space(4)) left(x, y+d);
                left(x, y);
                right(x, y-d);
                right(x+d, y-d-d);
                if (!beg && space(5) && x+d+d != end) right(x+d, y-d-d-d);
                left(x+d, y-d);
                if (space(4)) left(x+d, y);
                x += d+d;
                beg = false;
            }
        }

        vector<int> make(int N) {
            this->N = N;
            for (int j = 0; j <= 8; j += 4) {
                yway(1+j, 1+max(j-2,0), 1, 25-j);
                left(2+j, 25-j);
                xway(3+j, 25-j, 1, 25-j);
                left(25-j, 24-j);
                yway(25-j, 23-j, -1, 1+j);
                left(24-j, 1+j);
                xway(23-j, 1+j, -1, 5+j);
                left(5+j, 2+j);
            }
            reverse(L.begin(), L.end());
            L.insert(L.end(), R.begin(), R.end());
            return L;
        }
    };

majk
https://www.topcoder.com/blog/single-round-match-739-editorials/
Creating TreeTables in Swing

Just Use a JTree to Render JTable Cells

Note: please also see part 2 and part 3 for further updates of TreeTable

By Philip Milne

A TreeTable is a combination of a Tree and a Table -- a component capable of both expanding and contracting rows, as well as showing multiple columns of data. The Swing package does not contain a JTreeTable component, but it is fairly easy to create one by installing a JTree as a renderer for the cells in a JTable. This article explains how to use this technique to create a TreeTable. It concludes with an example application, named TreeTableExample0, which displays a working TreeTable browser that you can use to browse a local file system (see illustration).

In Swing, the JTree, JTable, JList, and JComboBox components use a single delegate object called a cell renderer to draw their contents. A cell renderer is a component whose paint() method is used to draw each item in a list, each node in a tree, or each cell in a table. A cell renderer component can be viewed as a "rubber stamp": it's moved into each cell location using setBounds(), and is then drawn with the component's paint() method. By using a component to render cells, you can achieve the effect of displaying a large number of components for the cost of creating just one.

By default, the Swing components that employ cell renderers simply use a JLabel, which supports the drawing of simple combinations of text and an icon. To use any Swing component as a cell renderer, all you have to do is create a subclass that implements the appropriate cell renderer interface: TableCellRenderer for JTable, ListCellRenderer for JList, and so on.
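The "rubber stamp" idea is language-independent: one renderer object is reconfigured and repainted for every cell. This Python analogy (not Swing code; the names are mine) shows a single renderer instance serving many cells:

```python
class StampRenderer:
    """One instance is reused for every cell, like a Swing cell renderer."""

    def configure(self, value):
        # Analogous to getTableCellRendererComponent(): set state for one cell.
        self.value = value
        return self

    def paint(self):
        # Analogous to paint(): draw using the current state.
        return "[%s]" % self.value

renderer = StampRenderer()
# Many cells, one component object: the "stamp" is reconfigured per cell.
painted = [renderer.configure(v).paint() for v in ("alpha", "beta", "gamma")]
```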
Here's an example of how you can extend a JCheckBox to act as a renderer in a JTable:

    public class CheckBoxRenderer extends JCheckBox implements TableCellRenderer {
        public Component getTableCellRendererComponent(JTable table, Object value,
                boolean isSelected, boolean hasFocus, int row, int column) {
            setSelected(((Boolean)value).booleanValue());
            return this;
        }
    }

The code snippet shown below -- part of a sample program presented in full later in this article -- shows how to use a JTree as a renderer inside a JTable. This is a slightly unusual case because it uses the JTree to paint a single node in each cell of the table rather than painting a complete copy of the tree in each of the cells. We start in the usual way: turning the JTree into a cell renderer by extending it to implement the TableCellRenderer interface. To implement the required behavior of a cell renderer, we must arrange for our renderer to paint just the node of the tree that is visible in a particular cell. One simple way to achieve this is to override the setBounds() and paint() methods, as follows:

    public class TreeTableCellRenderer extends JTree implements TableCellRenderer {
        protected int visibleRow;

        public void setBounds(int x, int y, int w, int h) {
            // 'table' refers to the enclosing JTable; it is a field in the
            // full example program.
            super.setBounds(x, 0, w, table.getHeight());
        }

        public void paint(Graphics g) {
            g.translate(0, -visibleRow * getRowHeight());
            super.paint(g);
        }

        public Component getTableCellRendererComponent(JTable table, Object value,
                boolean isSelected, boolean hasFocus, int row, int column) {
            visibleRow = row;
            return this;
        }
    }

As each cell is painted, the JTable goes through the usual process of getting the renderer, setting its bounds, and asking it to paint. In this case, though, we record the row number of the cell being painted in an instance variable named visibleRow. We also override setBounds(), so that the JTree remains the same height as the JTable, despite the JTable's attempts to set its bounds to fit the dimensions of the cell being painted.
To complete this technique we override paint(), making use of the stored variable visibleRow, an operation that effectively moves the clipping rectangle over the appropriate part of the tree. The result is that the JTree draws just one of its nodes each time the table requests it to paint.

In addition to installing the JTree as a renderer for the cells in the first column, we install the JTree as the editor for these cells also. The effect of this strategy is that the JTable then passes all mouse and keyboard events to this "editor" -- thus allowing the tree to expand and contract its nodes as a result of user input.

The example program presented with this article creates and implements a browser for a file system. Each directory can be expanded and collapsed. Other columns in the table display important properties of files and directories, such as file sizes and dates.

Note: Correct Swing Version Required
To compile and run the example program provided with this article, you must use Swing 1.1 Beta 2 or a compatible Swing release.

Here is the full list of classes used in the example program, along with a brief description of what each class does:
http://java.sun.com/products/jfc/tsc/articles/treetable1/index.html
The Polyglot Programmer - ACID Transactions with STM.NET By Ted Neward | January 2010 While this column has focused specifically on programming languages, it’s interesting to note how language ideas can sometimes bleed over into other languages without directly modifying them. One such example is the Microsoft Research language C-Omega (sometimes written Cw, since the Greek omega symbol looks a lot like a lower-case w on the US keyboard layout). In addition to introducing a number of data- and code-unifying concepts that would eventually make their way into the C# and Visual Basic languages as LINQ, C-Omega also offered up a new means of concurrency called chords that later made it into a library known as Joins. While Joins hasn’t, as of this writing, made it into a product (yet), the fact that the whole chords concept of concurrency could be provided via a library means that any run-of-the-mill C# or Visual Basic (or other .NET language) program could make use of it. Another such effort is the Code Contracts facility, available from the Microsoft DevLabs Web site and discussed in the August 2009 issue of MSDN Magazine. Design-by-contract is a language feature that was prominent in languages like Eiffel, and originally came to .NET through the Microsoft Research language Spec#. Similar kinds of contractual guarantee systems have come through Microsoft Research as well, including one of my favorites, Fugue, which made use of custom attributes and static analysis to provide correctness-checking of client code. Once again, although Code Contracts hasn’t shipped as a formal product or with a license that permits its use in production software, the fact that it exists as a library rather than as a standalone language implies two things. First, that it could (in theory) be written as a library by any .NET developer sufficiently determined to have similar kinds of functionality. 
And second, that (assuming it does ship) said functionality could be available across a variety of languages, including C# and Visual Basic.

If you're sensing a theme, you're right. This month I want to focus on yet another recently announced library that comes from the polyglot language world: software transactional memory, or STM. The STM.NET library is available for download via the DevLabs Web site, but in stark contrast to some of the other implementations I've mentioned, it's not a standalone library that gets linked into your program or that runs as a static analysis tool—it's a replacement and supplement to the .NET Base Class Library as a whole, among other things.

Note, however, that the current implementation of STM.NET is not very compatible with current Visual Studio 2010 betas, so the usual disclaimers about installing unfinished/beta/CTP software on machines you care about apply doubly so in this case. It should install side-by-side with Visual Studio 2008, but I still wouldn't put it on your work machine. Here's another case where Virtual PC is your very good friend.

Beginnings

The linguistic background of STM.NET comes from a number of different places, but the conceptual idea of STM is remarkably straightforward and familiar: rather than forcing developers to focus on the means of making things concurrent (focusing on locks and such), allow them to mark which parts of the code should execute under certain concurrency-friendly characteristics, and let the language tool (compiler or interpreter) manage the locks as necessary. In other words, just as database admins and users do, let the programmer mark the code with ACID-style transactional semantics, and leave the grunt work of managing locks to the underlying environment.
While the STM.NET bits may appear to be just another attempt at managing concurrency, the STM effort represents something deeper than that—it seeks to bring all four qualities of the database ACID transaction to the in-memory programming model. In addition to managing the locks on the programmer’s behalf, the STM model also provides atomicity, consistency, isolation and durability, which of themselves can make programming much simpler, regardless of the presence of multiple threads of execution. As an example, consider this (admittedly wildly overused) pseudocode example: What happens if the Credit fails and throws an exception? Clearly the user will not be happy if the debit to the from account still remains on the record when the credit to the account isn’t there, which means now the developer has some additional work to do: This would seem, at first blush, to be overkill. Remember, however, that depending on the exact implementation of the Debit and Credit methods, exceptions can be thrown before the Debit operation completes or after the Credit operation completes (but doesn’t finish). That means the BankTransfer method must ensure that all data referenced and used in this operation goes back to exactly the state it was in when the operation began. If this BankTransfer gets at all more complicated—operating on three or four data items at once, for example—the recovery code in the catch block is going to get really ugly, really quickly. And this pattern shows up far more often than I’d like to admit. Another point worth noting is isolation. In the original code, another thread could read an incorrect balance while it was executing and corrupt at least one of the accounts. Further, if you simply slapped a lock around it, you could deadlock if the from/to pairs were not always ordered. STM just takes care of that for you without using locks. 
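The bookkeeping the article describes (undoing a completed debit when the credit fails) amounts to snapshot-and-restore. This Python sketch, which is an illustration of the idea and not STM.NET, shows how a snapshot makes the recovery code disappear from the transfer logic itself:

```python
def atomic(accounts, action):
    # Snapshot-and-restore: a crude stand-in for transactional rollback.
    snapshot = dict(accounts)
    try:
        action(accounts)
    except Exception:
        # Any failure puts every balance back exactly as it was.
        accounts.clear()
        accounts.update(snapshot)
        raise

def transfer(accounts, src, dst, amount):
    def action(a):
        a[src] -= amount   # Debit
        a[dst] += amount   # Credit; a failure here must undo the debit
    atomic(accounts, action)
```

If the credit raises (here, a KeyError for an unknown account), the debit is rolled back automatically rather than by hand-written catch-block code.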
If, instead, the language offered some kind of transactional operation, such as an atomic keyword that handled the locking and failure/rollback logic under the hood, just as BEGIN TRANSACTION/COMMIT does for a database, coding the BankTransfer example becomes as simple as this:

You have to admit, this is a lot less to worry about. The STM.NET approach, however, being library based, isn't going to get quite this far since the C# language doesn't allow quite that degree of syntactic flexibility. Instead, you're going to be working with something along the lines of:

The syntax isn't quite as elegant as an atomic keyword would be, but C# has the power of anonymous methods to capture the block of code that would make up the body of the desired atomic block, and it can thus be executed under similar kinds of semantics. (Sorry, but as of this writing, the STM.NET incubation effort only supports C#. There is no technical reason why it couldn't be extended to all languages, but the STM.NET team only focused on C# for the first release.)

Getting Started with STM.NET

The first thing you'll need to do is download the Microsoft .NET Framework 4 beta 1 Enabled to use Software Transactional Memory V1.0 bits (a long-winded name, which I'll shorten to STM.NET BCL, or just STM.NET) from the DevLabs Web site. While you're there, download the STM.NET Documentation and Samples as well. The former is the actual BCL and STM.NET tools and supplemental assemblies, and the latter contains, among the documentation and sample projects, a Visual Studio 2008 template for building STM.NET applications.

Creating a new STM.NET-enabled application begins like any other app, in the New Project dialog (see Figure 1). Selecting the TMConsoleApplication template does a couple of things, some of which aren't entirely intuitive.
For example, as of this writing, to execute against the STM.NET libraries, the .NET application's app.config requires this little bit of versioning legerdemain:

Figure 1 Starting a New Project with the TMConsoleApplication Template

Other settings will be present, but the requiredRuntime value is necessary to tell the CLR launcher shim to bind against the STM.NET version of the runtime. In addition, the TMConsoleApplication template binds the assembly against versions of the mscorlib and System.Transactions assemblies installed in the directory where STM.NET is installed, rather than the versions that come with the stock .NET Framework 3.0 or 3.5 CLR. This is necessary, when you think about it, because if STM.NET is going to provide transactional access for anything beyond just the code that you write, it's going to need to use its own copy of mscorlib. Plus, if it's going to interact correctly with other forms of transactions—such as the lightweight transactions provided by the Lightweight Transaction Manager (LTM)—it needs to have its own version of System.Transactions as well.

Other than that, an STM.NET application will be a traditional .NET application, written in C# and compiled to IL, linked against the rest of the unmodified .NET assemblies, and so on. STM.NET assemblies, like the COM+ and EnterpriseServices components of the last decade, will have a few more extensions in them describing transactional behaviors for the methods that interact with the STM.NET transactional behavior, but I'll get to that in time.

Hello, STM.NET

As with the Axum example in the September 2009 issue of MSDN Magazine, writing a traditional Hello World application as the starting point for STM.NET is actually harder than you might think at first, largely because if you write it without concern for transactions, it's exactly the same as the traditional C# Hello World.
If you write it to take advantage of the STM.NET transactional behavior, you have to consider the fact that writing text to the console is, in fact, an un-undoable method (at least as far as STM.NET is concerned), which means that trying to roll back a Console.WriteLine statement is difficult. So, instead, let's take a simple example from the STM.NET User Guide as a quick demonstration of the STM.NET bits.

An object (called MyObject) has two private strings on it and a method to set those two strings to some pair of values:

Because the assignment of the parameter to the field is itself an atomic operation, there's no concern around concurrency there. But just as with the BankAccount example shown earlier, you want either both to be set or neither, and you don't want to see partial updates—one string being set, but not the other—during the set operation. You'll spawn two threads to blindly set the strings over and over again, and a third thread to validate the contents of the MyObject instance, reporting a violation in the event Validate returns false (see Figure 2).

    [AtomicNotSupported]
    static void Main(string[] args) {
        MyObject obj = new MyObject();
        int completionCounter = 0;
        int iterations = 1000;
        bool violations = false;

        Thread t1 = new Thread(new ThreadStart(delegate {
            for (int i = 0; i < iterations; i++)
                obj.SetStrings("Hello", "World");
            completionCounter++;
        }));

        Thread t2 = new Thread(new ThreadStart(delegate {
            for (int i = 0; i < iterations; i++)
                obj.SetStrings("World", "Hello");
            completionCounter++;
        }));

        Thread t3 = new Thread(new ThreadStart(delegate {
            while (completionCounter < 2) {
                if (!obj.Validate()) {
                    Console.WriteLine("Violation!");
                    violations = true;
                }
            }
        }));

        t1.Start();
        t2.Start();
        t3.Start();

        while (completionCounter < 2)
            Thread.Sleep(1000);

        Console.WriteLine("Violations: " + violations);
        ...
Note that the way this example is constructed, validation fails if the two strings in obj are set to the same thing, indicating that Thread t1's SetStrings("Hello", "World") is partially updated (leaving the first "Hello" to match the second "Hello" set by t2). A cursory glance at the SetStrings implementation shows that this code is hardly thread-safe. If a thread switch occurs in the middle (which is likely given the Thread.Sleep call, which will cause the currently-executing thread to give up its time slice), another thread could easily jump into the middle of SetStrings again, putting the MyObject instance into an invalid state. Run it, and with enough iterations, violations will start to appear. (On my laptop, I had to run it twice before I got the violations, proving that just because it runs without an error once doesn't mean the code doesn't have a concurrency bug.)

Modifying this to use STM.NET requires only a small change to the MyObject class, as shown in Figure 3.

    class MyObject {
        private string m_string1 = "1";
        private string m_string2 = "2";

        public bool Validate() {
            bool result = false;
            Atomic.Do(() => {
                result = (m_string1.Equals(m_string2) == false);
            });
            return result;
        }

        public void SetStrings(string s1, string s2) {
            Atomic.Do(() => {
                m_string1 = s1;
                Thread.Sleep(1); // simulates some work
                m_string2 = s2;
            });
        }
    }

As you can see, the only modification required was to wrap the bodies of Validate and SetStrings into atomic methods using the Atomic.Do operation. Now, when run, no violations appear.

Transactional Affinity

Observant readers will have noticed the [AtomicNotSupported] attribute at the top of the Main method in Figure 2, and perhaps wondered at its purpose, or even wondered if it served the same purpose as those attributes from the COM+ days.
As it turns out, that’s entirely correct: the STM.NET environment needs some assistance in understanding whether methods called during an Atomic block are transaction-friendly so that it can provide the necessary and desirable support for those methods. Three such attributes are available in the current STM.NET release: - AtomicSupported—the assembly, method, field or delegate supports transactional behavior and can be used inside or outside of atomic blocks successfully. - AtomicNotSupported—the assembly, method, field or delegate doesn’t support transactional behavior and thus shouldn’t be used inside of atomic blocks. - AtomicRequired—the assembly, method, field or delegate not only supports transactional behavior, it should only be used inside of atomic blocks (thus guaranteeing that using this item will always be done under transactional semantics). Technically there is a fourth, AtomicUnchecked, which signals to STM.NET that this item shouldn’t be checked, period. It’s intended as an escape hatch to avoid checking the code altogether. The presence of the AtomicNotSupported attribute is what leads the STM.NET system to throw an AtomicContractViolationException when the following (naïve) code is attempted: Because the System.Console.WriteLine method is not marked with AtomicSupported, the Atomic.Do method throws the exception when it sees the call in the atomic block. This bit of security ensures that only transaction-friendly methods are executed inside of the atomic block, and provides that additional bit of safety and security to the code. Hello, STM.NET (Part Two) What if you really, really want to write the traditional Hello World? What if you really want to print a line to the console (or write to a file, or perform some other non-transactional behavior) alongside two other transactional operations, but only print it out if both of those other operations succeed? STM.NET offers three ways to handle this situation. 
First, you can perform the non-transactional operation outside the transaction (and only after the transaction commits) by putting the code inside of a block passed to Atomic.DoAfterCommit. Because the code inside that block will typically want to use data generated or modified from inside the transaction, DoAfterCommit takes a context parameter that is passed from inside the transaction to the code block as its only parameter.

Second, you can create a compensating action that will be executed in the event that the transaction ultimately fails, by calling Atomic.DoWithCompensation, which (again) takes a context parameter to marshal data from inside the transaction to the committing or compensating block of code (as appropriate).

Third, you can go all the way and create a Transactional Resource Manager (RM) that understands how to participate with the STM.NET transactional system. This is actually less difficult than it might seem—just inherit from the STM.NET class TransactionalOperation, which has OnCommit and OnAbort methods that you override to provide the appropriate behavior in either case. When using this new RM type, call OnOperation at the start of your work with it (effectively enlisting the resource into the STM.NET transaction). Then call FailOperation on it in the event that the surrounding operations fail. Thus, if you want to transactionally write to some text-based stream, you can write a text-appending resource manager like the one shown in Figure 4. This then allows you—in fact, by virtue of the [AtomicRequired] attribute, requires you—to write to some text stream via the TxAppender while inside an atomic block (see Figure 5).
    public class TxAppender : TransactionalOperation {
        private TextWriter m_tw;
        private List<string> m_lines;

        public TxAppender(TextWriter tw) : base() {
            m_tw = tw;
            m_lines = new List<string>();
        }

        // This is the only supported public method
        [AtomicRequired]
        public void Append(string line) {
            OnOperation();
            try {
                m_lines.Add(line);
            }
            catch (Exception e) {
                FailOperation();
                throw e;
            }
        }

        protected override void OnCommit() {
            foreach (string line in m_lines) {
                m_tw.WriteLine(line);
            }
            m_lines = new List<string>();
        }

        protected override void OnAbort() {
            m_lines.Clear();
        }
    }

    public static void Test13() {
        TxAppender tracer = new TxAppender(Console.Out);
        Console.WriteLine("Before transactions. m_balance= " + m_balance);

        Atomic.Do(delegate() {
            tracer.Append("Append 1: " + m_balance);
            m_balance = m_balance + 1;
            tracer.Append("Append 2: " + m_balance);
        });
        Console.WriteLine("After transactions. m_balance= " + m_balance);

        Atomic.Do(delegate() {
            tracer.Append("Append 1: " + m_balance);
            m_balance = m_balance + 1;
            tracer.Append("Append 2: " + m_balance);
        });
        Console.WriteLine("After transactions. m_balance= " + m_balance);
    }

This is obviously the longer route and will be suitable only in certain scenarios. It could fail for some kinds of media types, but for the most part, if all the actual irreversible behavior is deferred to the OnCommit method, this will suffice for most of your in-process transactional needs.

Putting STM.NET to Work

Working with an STM system takes a little getting used to, but once you're acclimated, working without it can feel crippling. Consider some of the potential places where using STM.NET can simplify coding. When working with other transacted resources, STM.NET plugs in to existing transacted systems quickly and easily, making Atomic.Do the sole source of transacted code in your system.
The STM.NET examples demonstrate this in the TraditionalTransactions sample, posting messages to an MSMQ private queue and making it obvious that, when the Atomic block fails, no message is posted to the queue. This is probably the most obvious usage. In dialog boxes—particularly for multi-step wizard processes or settings dialogs—the ability to roll back changes to the settings or dialog data members when the user hits the Cancel button is priceless. Unit tests such as NUnit, MSTest, and other systems exert great effort to ensure that, when written correctly, tests cannot leak results from one test to the next. If STM.NET reaches production status, NUnit and MSTest can refactor their test case execution code to use STM transactions to isolate test results from each other, generating a rollback at the end of each test method, and thus eliminating any changes that might have been generated by the test. Even more, any test that calls out to an AtomicUnsupported method will be flagged at test execution time as an error, rather than silently leaking the test results to some medium outside the test environment (such as to disk or database). STM.NET can also be used in domain object property implementation. Although most domain objects have fairly simple properties, either assigning to a field or returning that field’s value, more complex properties that have multiple-step algorithms run the risk of multiple threads seeing partial updates (if another thread calls the property during its set) or phantom updates (in the event another thread calls the property during its set, and the original update is eventually thrown away due to a validation error of some form). 
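As a footnote to the resource-manager discussion above, the commit/abort buffering pattern used by TxAppender (buffer side effects, flush on commit, discard on abort) is easy to mimic in any language. A Python sketch with my own names, not the STM.NET TransactionalOperation API:

```python
class DeferredAppender:
    """Buffers lines; writes them out only on commit, discards on abort."""

    def __init__(self, sink):
        self.sink = sink      # any list-like object with append()
        self.pending = []

    def append(self, line):
        # Nothing irreversible happens here; the write is merely recorded.
        self.pending.append(line)

    def on_commit(self):
        # Only now do the buffered writes actually reach the sink.
        for line in self.pending:
            self.sink.append(line)
        self.pending = []

    def on_abort(self):
        # A rolled-back transaction leaves no trace in the sink.
        self.pending = []
```

The irreversible work lives entirely in on_commit, which is the same property the article says makes TxAppender safe to use inside an atomic block.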
Even more interesting, researchers outside of Microsoft are looking into extending transactions into hardware, such that someday, updating an object's field or a local variable could be a transaction guarded at the hardware level by the memory chip itself, making the transaction blindingly fast in comparison to today's methods. However, as with Axum, Microsoft depends on your feedback to determine if this technology is worth pursuing and productizing, so if you find this idea exciting or interesting, or that it's missing something important to your coding practice, don't hesitate to let them know.

Ted Neward is a Principal with Neward and Associates, an independent firm specializing in .NET and Java enterprise systems. He has written numerous books, is a Microsoft MVP Architect, INETA speaker, and PluralSight instructor. Reach Ted at ted@tedneward.com, or read his blog at blogs.tedneward.com.

Thanks to the following technical experts for reviewing this article: Dave Detlefs and Dana Groff
https://msdn.microsoft.com/en-us/magazine/ee291549.aspx
The problem is that you are calling. There is a minimal example of this which works on my end:

    package org.broadinstitute.sting.queue.qscripts

    import org.broadinstitute.sting.queue.QScript
    import org.broadinstitute.sting.queue.util.QScriptUtils

    class HaplotypeCallerStep extends QScript {
      // Create an alias 'qscript' to be able to access variables
      // in the HaplotypeCallerStep.
      // 'qscript' is now the same as 'HaplotypeCallerStep.this'
      qscript =>

      // Required arguments. All initialized to empty values.
      @Input(doc="input BAM file - or list of BAM files", fullName="input", shortName="I", required=true)
      var bamFile: File = _

      /***************************************************
       * main script
       ***************************************************/
      def script() {
        val bamFilesList = QScriptUtils.createSeqFromFile(bamFile)
        val sampleNo = bamFilesList.size
        System.err.println("------ samples ------")
        System.err.println(sampleNo)
        System.err.println("======================")
      }
    }

Hope this solution works for you.

Answers

Hi Francesco,
Unfortunately, we just don't have the resources to help people with programming questions. Perhaps you'll have better luck posting this in the Ask the Community section. I will say that it looks like your problem is unrelated and has something to do with the Haplotype Caller. Perhaps you are using GATK lite? Good luck!
Eric Banks, PhD -- Director, Data Sciences and Data Engineering, Broad Institute of Harvard and MIT

Hi Eric,
I assumed QScriptUtils was part of Queue and createSeqFromFile not to be a general Scala method, and that's why I posted the question here. I am using the GATK2-2.2 full version, not the lite one.

QScriptUtils and createSeqFromFile are part of Queue. But I'm 99% sure that isn't what is causing your problem. As Eric points out, it is more likely something wrong in the haplotype caller part.
As stated in the error message: "Could not create module HaplotypeCallerStep because Cannot instantiate class (Invocation failure) caused by exception null"

If you can post your entire script here, or somewhere else, it would be easier to give an answer as to what is causing the problem. My guess is that it is caused by a problem in your extension class for the Haplotype caller.

Thanks Johan, that's really very kind of you. Apart from the few lines I wrote above, I didn't modify the extension of the class in other parts, and without the reference to import org.broadinstitute.sting.queue.util.QScriptUtils it works (if I don't add the method, of course). This is my script, it might be useful to other people as well. Thanks

Hi Francesco - Is it failing when you specify a "list of BAM files"? Because I would expect that to cause problems right here: You should be giving it bamFilesList (the actual bams), not bamFile (the file containing a list of filenames). The same problem is in AnnotationArguments. I don't know if that would lead to the error you're seeing, though...

Thanks, but the script works perfectly that way if I don't add the lines I reported initially. The syntax you refer to is present in ExampleUnifiedGenotyper.scala provided with the Queue examples. The bamFilesList is something I create here for the sole purpose of calculating the size, i.e. the number of samples.

Just to clarify, if I comment out these three lines the script I posted works just fine.

FYI, I have moved this thread to "Ask the community". Carry on, folks; and thanks for jumping in to help.
Geraldine Van der Auwera, PhD

Hmm, didn't realize the input would work that way. That's pretty cool. In my own script, I walk through the BAMs and check the RGs to get a count of samples.
Here's my code; I'm pretty sure I adapted countSamples from DataProcessingPipeline:

Indeed, that's precisely where I found the method createSeqFromFile. In the count function, you pass a countSamples(bamFiles: Seq[File]), which is defined as a Seq. It's the conversion done by that method that I'm interested in. You can see it is a method of QScriptUtils. In order to be able to use it, you need to import it with

So I'm doing precisely the same as in DataProcessingPipeline, except for the .size method, which is anyway a method of Seq. However, as soon as I put that import into the HaplotypeCaller class, it seems incapable of instantiating it anymore. GATK team members say it has nothing to do with their code... but then I don't understand why the very same three lines of code conflict with a script that works otherwise.

Can you recreate the problem in a minimal script? Perhaps your copy of Queue is corrupt.

The problem is that you are calling. There is a minimal example of this which works on my end. Hope this solution works for you.

Thanks Johan, that's great! Which confirms my knowledge of Scala is proximal to zero :-) I've simply moved the code into the class Target now, and it works. This way I can add a conditional on the number of samples anywhere I want in the other classes, outside the main script. Thanks a lot!! Francesco
http://gatkforums.broadinstitute.org/gatk/discussion/1779/counting-bams-in-a-bam-list-with-queue
In the previous tutorial, we discussed multiplexing seven-segment displays (SSDs). Continuing with display devices, in this tutorial we will cover how to interface a character LCD with Arduino. Character LCDs are the most common display devices used in embedded systems. These low-cost LCDs are widely used in industrial and consumer applications.

Display devices in embedded systems

Most devices require some sort of display for various reasons. For example, an air-conditioner requires a display that indicates the temperature and AC settings. A microwave oven requires a display to present the selected timer, temperature, and cooking options. A car dashboard uses a display to track distance, fuel indication, mileage, and fuel efficiency. Even a digital watch requires a display to show the time, date, alarm, and modes. There are also several reasons industrial machinery and electrical or electronic devices require a display. The display devices used in embedded circuits — whether they are industrial devices, consumer electronic products, or fancy gadgets — are used either to indicate some information or to facilitate a machine-human interface. For instance, LEDs are used as indicators of mutually exclusive conditions. SSDs are used to display numeric information. Liquid Crystal Displays (LCDs), TFTs, and OLED displays are used to present more complicated information in embedded applications. Often, this complication arises due to the text or graphical nature of the information or the interface. LCDs are the most common display devices used in all sorts of embedded applications. There are two types of LCD displays available: 1. Character LCDs 2. Graphical LCDs. Character LCDs are used where the information or interface is of a textual nature. Graphical LCDs are used where the information or interface is of a graphical nature. The graphical LCDs that are used to design machine-human interfaces may also have touchscreens.
Character LCDs

Character LCDs are useful for showing textual information or providing a text-based, machine-human interface. It's even possible to display some minimal graphics on these LCDs. These are low-cost LCD displays that fit in a wide range of embedded applications. Generally, character LCDs do not have touchscreens. And unlike graphical LCDs, these LCDs do not have continuous pixels. Instead, the pixels on character LCDs are arranged as groups of pixels, or dot-matrices of pixels, of fixed dimensions. Each dot-matrix of pixels is intended to display one text character. This group of pixels is usually of 5×7, 5×8, or 5×10 dimensions — where the first digit indicates the number of columns of pixels and the second digit indicates the number of rows of pixels. For example, if each character has 5×8 dimensions, then the character is displayed by illuminating 5 columns and 8 rows of pixels/dots. This may include pixels used to show the cursor. Character LCDs are classified by their size, which is expressed as the number of characters that can be displayed. The number of characters that can be displayed at a time on the LCD is indicated as the number of columns of characters and the number of rows of characters. Common sizes of character LCDs are 8×1, 8×2, 10×2, 16×1, 16×2, 16×4, 20×2, 20×4, 24×2, 30×2, 32×2, 40×2, etc. For example, a 16×2 character LCD can display 32 characters at a time in 16 columns and 2 rows. Generally, characters are displayed as a matrix of black dots, while the backlight of the LCD may be a monochromatic color like blue, white, amber, or yellow-green. LCDs are available as one of three types: 1. Twisted Nematic (TN) 2. Super Twisted Nematic (STN) 3. Film-compensated Super Twisted Nematic (FSTN). A character LCD may use any one of these types. The TN types are low-cost but have a narrow viewing angle and low contrast. The FSTN types offer the best contrast and widest viewing angle, but they are more costly.
Even character LCDs that use the FSTN display are still cheaper in comparison to graphical LCDs, TFTs, and OLEDs. Most character LCDs use an LED backlight, and the backlight color can be white, blue, amber, or yellow-green. The other types of backlight in character LCDs include EL, CCFL, internal power, external power, and 3.3 and 5V backlights. EL and LED backlights are the most common. The LCD may have a reflective, trans-reflective, or transmissive rear polarizer. The quality of the display depends on the LCD type, the backlight, and the nature of the rear polarizer used in the LCD panel. When selecting an LCD panel for an embedded application, it's important to decide on the quality of the LCD display according to the requirements. These include the application, the class of the device, the nature of use (such as indoor or outdoor), the target users of the device, the intended user experience, operating conditions (such as temperature and operating voltage), and cost limitations. For example, a character LCD that has to be used for a machine-human interface must have better contrast, a wide viewing angle, and a good backlight. The following table summarizes the important characteristics of any character LCD. Even on a character LCD, a large number of pixels have to be controlled to display the text. A 16×2 character LCD in which each character is 5×8 pixels means that a total of 1280 pixels (16×2 characters x 5×8 pixels) have to be controlled. This requires interfacing the pixels across 16 rows (2 rows of characters x 8 rows in each character) and 80 columns (16 columns of characters x 5 columns in each character) of connections. This is when the pixels are plain black dots that merely require switching either ON or OFF by the controller to display text characters. On a typical microcontroller, there are not that many I/O pins that can be dedicated to controlling the pixels of an LCD panel. That is why LCD modules have integrated controllers that control the pixels of the LCD.
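The pixel counts above can be sanity-checked with a few lines of arithmetic. This is only an illustration of the bookkeeping, written in Python; the function name is mine:

```python
# Pixel bookkeeping for a character LCD: a grid of characters, each rendered
# on a fixed dot matrix.
def lcd_pixel_counts(char_cols, char_rows, dot_cols, dot_rows):
    pixels = char_cols * char_rows * dot_cols * dot_rows
    wire_rows = char_rows * dot_rows  # row lines if the pixels were driven directly
    wire_cols = char_cols * dot_cols  # column lines if the pixels were driven directly
    return pixels, wire_rows, wire_cols

# 16x2 characters, 5x8 dots per character
print(lcd_pixel_counts(16, 2, 5, 8))  # -> (1280, 16, 80)
```

The same arithmetic shows why larger panels are even less practical to drive directly: a 20×4 module would already need 32 row and 100 column connections.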
The integrated controller can interface with a microcontroller or a processor via an 8-bit/4-bit parallel port or a serial interface (like I2C). The integrated controller receives data and commands from the microcontroller/processor via the 4-bit/8-bit parallel or serial interface to display text on the LCD panel. In fact, the LCD module is a complete embedded system comprising an LCD panel, LCD driver, LCD controller, LED backlight, internal flags, Address Counter, Display Data RAM (DDRAM), Character Generator ROM (CGROM), Character Generator RAM (CGRAM), Data Register (DR), Instruction Register (IR), and Cursor Control Circuit.

Functional blocks of the LCD module

A character LCD module has these functional blocks:

1. LCD Panel. Character LCDs have a dot-matrix LCD panel. The text characters are displayed on the panel according to the commands and data received by the integrated controller.

2. System Interface. This module has a 4-bit and an 8-bit interface to connect with microcontrollers/processors. Some LCD modules also have a built-in serial interface (I2C) for communication with a controller. The selection of interface (4-bit or 8-bit) is determined by the DL bit of the Instruction Register (IR).

3. Data Register (DR). The Data Register is an internal register that stores data received from the microcontroller via the system interface. The value populated in the data register is compared with the character patterns in the Character Generator ROM (CGROM) to generate the different standard characters.

4. Instruction Register (IR). The Instruction Register is an internal register that stores instructions received from the microcontroller via the system interface.

5. Character Generator ROM (CGROM). This is an internal Read-Only Memory (ROM) on the LCD module where the patterns for the standard characters are stored. For example, in a 16×2 LCD module, the CGROM stores 204 character patterns of 5×8 dots and 32 character patterns of 5×10 dots.
So, the patterns for the 204 characters are permanently stored in the CGROM.

6. Character Generator RAM (CGRAM). User-defined characters can also be displayed on a character LCD. The patterns for custom characters are stored in CGRAM. On the 16×2 LCD, 5 characters of 5×8 pixels can be defined by a user program. The user needs to write the font data (which is the character pattern defining which pixels/dots must be ON and which must be OFF to properly display the character) to generate these characters.

7. Display Data RAM (DDRAM). The data sent to the LCD module by the microcontroller remains stored in DDRAM. In a 16×2 character LCD, the DDRAM can store a maximum of 80 8-bit characters, with a maximum of 40 characters for each row.

8. Address Counter (AC). The Address Counter is an internal register that stores the DDRAM/CGRAM addresses that are transferred by the Instruction Register. The AC reads the DDRAM/CGRAM addresses from bits DB0-DB6 of the instruction register. After writing to or reading from the DDRAM/CGRAM, the AC is automatically incremented or decremented by one, according to the entry mode that has been set.

9. Busy Flag (BF). The bit DB7 of the instruction register is the busy flag of the LCD module. When the LCD is performing some internal operation, this flag is set (HIGH). During this time, the instruction register does not accept any new instruction via the system interface from the microcontroller. New instructions can be written to the IR only when the busy flag is clear (LOW).

10. Cursor/Blink Control Circuit. This controls the ON/OFF status of the cursor/blink at the cursor position. The cursor appears at the DDRAM address currently set in the AC. For example, if the AC is set to 07H, then the cursor is displayed at the DDRAM address 07H.

11. LCD Driver. It controls the LCD panel and the display. In the 16×2 character LCD, the LCD driver circuit consists of 16 common signal drivers and 40 segment signal drivers.

12.
Timing Generation Circuit. It generates the timing signals for the operation of the internal circuits, such as the DDRAM, CGRAM, and CGROM. The timing signals for reading the RAM (DDRAM/CGRAM) to display characters are generated separately from the timing signals for the internal operations of the LCD's integrated controller/processor. This is so that the display does not interfere with the internal operations of the integrated controller of the LCD module.

Interfacing character LCDs

Most character LCDs have a 14-pin or 16-pin system interface for communication with a microcontroller/processor. The 16-pin system interface is the most common. It has this pin configuration: The pin descriptions of the LCD module's system interface are summarized in this table: To interface the LCD module with a microcontroller or Arduino, the digital I/O pins of the microcontroller must be connected with the RS, RW, EN, and data pins DB0 to DB7. Typically, Arduino (or any microcontroller) does not need to read data from the LCD module, so the RW pin can be hard-wired to ground.

- If the LCD is interfaced with Arduino in 8-bit mode, the RS, EN, and all of the data pins must be connected to the digital I/O pins of Arduino.
- If the LCD is interfaced with Arduino in 4-bit mode, the RS, EN, and the data bits DB7 to DB4 must be connected to Arduino's GPIO.

In 4-bit mode, two pulses are required at the EN pin to write data/instructions to the LCD. First, the higher nibble of the data or instruction is latched. Then, in the second pulse, the lower nibble of the data/instruction is transferred. In 8-bit mode, the entire 8-bit data/instruction is written to the LCD in a single pulse at the EN pin. So, the 4-bit mode saves microcontroller pins but has a slight latency in comparison to the 8-bit mode of operation. The 8-bit mode suffers less from latency but engages 4 extra pins of the microcontroller.
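The two-pulse transfer described above is just byte-splitting: the same 8-bit value is sent as two 4-bit halves, high nibble first. A small Python sketch of that logic (illustrative only — the function names are mine, and this is not driver code):

```python
def to_nibbles(byte):
    """Split an 8-bit value into (high, low) nibbles, the order used in 4-bit mode."""
    return (byte >> 4) & 0x0F, byte & 0x0F

def en_pulses(mode_bits):
    """EN pulses needed per byte: two in 4-bit mode, one in 8-bit mode."""
    return 2 if mode_bits == 4 else 1

# 'A' is 0x41: transferred as high nibble 0x4 (first EN pulse),
# then low nibble 0x1 (second EN pulse).
print(to_nibbles(0x41))            # -> (4, 1)
print(en_pulses(4), en_pulses(8))  # -> 2 1
```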
It’s also possible to interface the LCD module with Arduino using a serial-to-parallel converter. Then, only two pins of Arduino are required to interface with the LCD module. The ground pin of the LCD module (pin 1) must be connected to the ground while the VCC pin (pin 2) must be connected to the supply voltage. The 3.3 or 5V pin of Arduino can be used to supply voltage to the LCD module. The VEE pin must be connected to the variable terminal of a variable resistor, and the fixed terminals of the variable resistor must be connected to the VCC and ground. The LED+ (pin 15) must be connected to the VCC via a current-limiting resistor and the LED (pin 16) must be connected to the ground. How character LCD works It is possible to read/write data with the LCD module. To write data/instructions to the LCD module, the RW pin must be clear. Then, if the RS is set, an 8-bit data sent by the microcontroller stores in the data register (DR) of the LCD module. This 8-bit data sent by the microcontroller will store in the instruction register (IR) of the LCD module. The data is transferred to the LCD from the microcontroller when a HIGH to LOW pulse is applied at the EN pin of the module. When data is sent to the LCD module (RW=0, RS=1, EN=1->0), it is written in the DDRAM and the Address Counter of the LCD is increased by one. The LCD controller compares the 8-bit data with the CGROM addresses and displays the appropriate character on the LCD at the associated DDRAM address. This serves as the instruction to show that the display has been received. When the instruction is sent to the LCD module (RW=0, RS=0, EN=1->0), it is stored in the instruction register and according to the pre-defined instruction set of the LCD controller, the appropriate operation is executed on the display (to set display ON, set display OFF, set cursor ON, set cursor OFF, clear DDRAM, etc.). Sometimes, the microcontroller may need to read data from the LCD. 
A microcontroller can read content from the instruction register, DDRAM, and CGRAM of the LCD. To read data from the LCD, the RW pin must be set. When the RW is set and the RS is clear, the microcontroller reads the content of the Instruction Register (IR) — including the busy flag (DB7 of IR) and address counter (DB6 to DB0 of IR) — when applying a HIGH to LOW pulse at the EN pin. When the RW is set and the RS is set, the microcontroller reads the content of the DDRAM or CGRAM according to the current value of the address counter when applying a HIGH to LOW pulse at the EN pin. So, the microcontroller reads the content of the instruction register when: RW=1, RS=0, and EN=1->0. It reads the content of the DDRAM/CGRAM at the current address counter when: RW=1, RS=1, and EN=1->0.

LCD character set

The following characters, with the given patterns and data register values, are supported on a 16×2 LCD.

LCD commands

A 16×2 LCD module supports the following 8-bit commands:

LCD functions using Arduino

If the LCD module is interfaced to typical microcontrollers (8051, PIC, AVR, etc.), the RS, RW, EN, and the data bits need to be set individually to perform the read/write operations. Arduino has a Liquid Crystal library (LiquidCrystal.h) available that makes programming an LCD with Arduino extremely easy. This library can be imported by the following statement:

#include <LiquidCrystal.h>

The library uses these methods to control a character LCD:

1. LiquidCrystal()
2. lcd.begin()
3. lcd.clear()
4. lcd.home()
5. lcd.setCursor(col, row)
6. lcd.write(data)
7. lcd.print(data)/lcd.print(data, BASE)
8. lcd.cursor()
9. lcd.noCursor()
10. lcd.blink()
11. lcd.noBlink()
12. lcd.display()
13. lcd.noDisplay()
14. lcd.scrollDisplayLeft()
15. lcd.scrollDisplayRight()
16. lcd.autoscroll()
17. lcd.noAutoscroll()
18. lcd.leftToRight()
19. lcd.rightToLeft()
20. lcd.createChar(num, data)

LiquidCrystal() method

This method is used to create a Liquid Crystal object.
The object must be created according to the circuit connections of the LCD module when using Arduino. The object takes the pin numbers of the Arduino as arguments: the pin numbers where the RS, RW, EN, and the data pins (DB7-DB0 for 8-bit mode and DB7-DB4 for 4-bit mode) of the LCD are connected have to be passed as arguments in the object definition. This method has this syntax:

If the LCD is connected in 4-bit mode and the R/W pin is grounded:
LiquidCrystal(rs, enable, d4, d5, d6, d7);
or
LiquidCrystal lcd(rs, enable, d4, d5, d6, d7);

If the LCD is connected in 4-bit mode and the R/W pin is also connected to Arduino:
LiquidCrystal(rs, rw, enable, d4, d5, d6, d7);
or
LiquidCrystal lcd(rs, rw, enable, d4, d5, d6, d7);

If the LCD is connected in 8-bit mode and the R/W pin is grounded:
LiquidCrystal(rs, enable, d0, d1, d2, d3, d4, d5, d6, d7);
or
LiquidCrystal lcd(rs, enable, d0, d1, d2, d3, d4, d5, d6, d7);

If the LCD is connected in 8-bit mode and the R/W pin is connected to Arduino:
LiquidCrystal(rs, rw, enable, d0, d1, d2, d3, d4, d5, d6, d7);
or
LiquidCrystal lcd(rs, rw, enable, d0, d1, d2, d3, d4, d5, d6, d7);

The LiquidCrystal class has this source code: The LiquidCrystal method has this definition in the source code:

lcd.begin() method

This method is used to initialize the LCD module. The function takes the size of the LCD (expressed by the number of columns and rows in the LCD) as the arguments. It has this syntax:

lcd.begin(cols, rows)

This function has the following source code:

lcd.clear() method

This method clears the LCD display and positions the cursor in the top-left corner. It has this syntax:

lcd.clear()

This function has the following source code:

lcd.setCursor() method

This method positions the cursor at the given location on the LCD panel. It takes the column and row as the arguments where the cursor has to be placed and a subsequent character has to be displayed.
It has this syntax:

lcd.setCursor(col, row)

This method has the following source code:

lcd.print() method

This method is used to print text to the LCD. It takes a string argument, which has to be displayed at the current cursor position on the LCD. It can take the base of the value passed as an optional argument — if only printing numbers. It has this syntax:

lcd.print(data)
lcd.print(data, BASE)

This method comes from the Print.h library that is included in the LiquidCrystal.h library. This method has the following source code:

How to check the LCD

A common concern when interfacing the LCD module is identifying whether or not the LCD module is, indeed, working. When connecting the LCD with Arduino (or any other MCU), if only the lower line of the LCD brightens, then the LCD module is working. When connecting the LCD with Arduino, if both lines of the LCD (16×2 LCD) brighten, then the LCD is not working properly. Sometimes when you try to print on the LCD, nothing occurs except the lower line of the LCD illuminating. In this case, the possible reasons can be one of the following:

1. There may be loose connections between Arduino (MCU) and the LCD module.
2. The LCD module might have been interfaced in the reverse pin order (i.e. instead of pins 1 to 16, circuit connections might have been made from pins 16 to 1 of the LCD module).
3. There may be shorting between LCD terminals due to faulty soldering.
4. The contrast of the LCD at the VEE pin might not have been adjusted properly. If adjusting the contrast does not work, try connecting the VEE pin directly to the ground, so that the LCD module is set to maximum contrast.
5. If, after checking all the circuit connections, the LCD panel still does not display text, check whether the code uploaded to Arduino is correct. For example, it is possible that if the LCD display is not cleared after initialization, garbage values may display on the LCD instead of the intended text.
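Under the hood, a setCursor-style call maps a (column, row) pair onto a DDRAM address and issues a "set DDRAM address" command. The sketch below illustrates that mapping in Python, assuming the row offsets of a typical HD44780-style 16×2 module (row 0 at 0x00, row 1 at 0x40) — check your own module's datasheet, as this is an assumption, not something stated above:

```python
# Row start addresses assumed for a typical HD44780-style 16x2 module.
ROW_OFFSETS = (0x00, 0x40)

def ddram_address(col, row):
    """DDRAM address that a setCursor(col, row)-style call would target."""
    return ROW_OFFSETS[row] + col

def set_ddram_command(col, row):
    """'Set DDRAM address' command byte: high bit set, address in DB6-DB0."""
    return 0x80 | ddram_address(col, row)

print(hex(ddram_address(0, 1)))      # -> 0x40
print(hex(set_ddram_command(1, 0)))  # -> 0x81
```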
Recipe: Printing text on the 16×2 character LCD

In this tutorial, we will print simple text on the 16×2 LCD panel from an Arduino UNO.

Components required

1. Arduino UNO x1
2. 16×2 character LCD x1
3. 10K Pot x1

The LCD module is interfaced in 4-bit mode. Pin 1 (GND) and pin 16 (LED−) of the LCD module are connected to ground, while pin 2 (VCC) is connected to the VCC. Pin 15 (LED+) of the LCD module is, once again, connected to the VCC via a small-value resistor. Pin 3 (VEE) is connected to the variable terminal of a pot, while the fixed terminals of the pot are connected to the ground and VCC. The R/W pin is connected to the ground, as Arduino will only write data to the LCD module. The RS, EN, DB4, DB5, DB6, and DB7 pins of the LCD are connected to pins 13, 11, 7, 6, 5, and 4 of the Arduino UNO, respectively. The breadboard supplies the common ground and 5V rails, fed from one of the ground pins and the 5V pin of the Arduino UNO, respectively.

Circuit diagram

Arduino sketch

How the project works

The LCD module is connected with Arduino in 4-bit mode. First, the LCD is initialized and the display is cleared to get rid of any garbage values in the DDRAM. The cursor is set to column 1 of line 0, and the text "EEWORLDONLINE" is printed on the LCD. Next, the cursor is moved to column 0 of line 1 and the text "EngineersGarage" is printed on the LCD. A delay of 750 milliseconds is given and the LCD is cleared again. The cursor is moved to column 0 of line 0 and the text "EngineersGarage" is printed on the LCD. The cursor is then moved to column 1 of line 1 and the text "EEWORLDONLINE" is printed on the LCD. The Arduino UNO will keep repeating the code, alternately printing the two text strings on lines 0 and 1.

Programming guide

The LiquidCrystal.h library is imported in the code. Then, an object referred to by the variable "lcd" is defined for the LiquidCrystal class.
#include <LiquidCrystal.h>

// LiquidCrystal lcd(RS, E, D4, D5, D6, D7);
LiquidCrystal lcd(13, 11, 7, 6, 5, 4);

In the setup() function, the LCD is initialized to the 16×2 size using the begin() method like this:

void setup() {
  lcd.begin(16, 2);
}

In the loop() function, the LCD display is cleared using the clear() method and the cursor is set at column 1 of line 0 by using the setCursor() method. The text "EEWORLDONLINE" is printed using the print() method on the "lcd" object. Similarly, the text "EngineersGarage" is printed at column 0 of line 1. A delay of 750 milliseconds is given by using the delay() function.

void loop() {
  lcd.clear();
  lcd.setCursor(1, 0);
  lcd.print("EEWORLDONLINE");
  lcd.setCursor(0, 1);
  lcd.print("EngineersGarage");
  delay(750);

Next, the LCD display is cleared again and the position of the two texts is inverted.

  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print("EngineersGarage");
  lcd.setCursor(1, 1);
  lcd.print("EEWORLDONLINE");
  delay(750);
}

The body of the loop() function will keep repeating itself until the Arduino is shut down. Therefore, both texts keep displaying on the LCD module, alternating their position between lines 0 and 1 of the panel. In the next tutorial, we will discuss how to scroll text on the LCD module.
https://www.engineersgarage.com/microcontroller-projects/articles-arduino-16x2-character-lcd-interfacing-driver/
Hi guys, I'm new to Python and I'd like some answers to get a better understanding of how Python works. In the Reverse exercise (see link above) you have to create a function that takes a string and prints it backwards. My code does so, but a "None" gets printed after the last letter of the reversed string. My output for the code below is: !nohtyPNone Why does it do that and how do I solve it? Thanks in advance

from __future__ import print_function

def reverse(text):
    for y in range(-1, (-1 * len(text)) - 1, -1):
        print(text[y], end='')

reverse("Python!")
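No reply is preserved in this thread, but the cause is a standard Python gotcha: the function prints characters yet has no return statement, so it implicitly returns None — and whatever runs the code (the Codecademy checker, or a print() wrapped around the call) then displays that return value. A sketch of both the failing pattern and a fix that builds and returns the reversed string (the helper names here are mine, not from the exercise):

```python
def reverse_print(text):
    # Prints characters but implicitly returns None, so "None" appears
    # whenever the *return value* is printed, e.g. print(reverse_print(...)).
    for y in range(-1, -len(text) - 1, -1):
        print(text[y], end='')

def reverse(text):
    # Build and return the reversed string instead of printing it.
    result = ''
    for y in range(-1, -len(text) - 1, -1):
        result += text[y]
    return result

print(reverse("Python!"))  # -> !nohtyP  (and no trailing "None")
```

Returning a value also makes the function testable and reusable; printing inside it couples it to the console.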
https://discuss.codecademy.com/t/reverse-works-but-adds-none-after-my-revsered-string-why/63306
This tutorial shows how to implement drift detection, and a limited characterization, on time-stamped data. This data can be from almost any quantum circuit based experiment on one or more qubits, as long as the data is taken using a suitable time-ordering of the experiments, and the data is recorded as a time series. For example, possible experiments include suitably time-ordered GST, RPE, Ramsey or RB experiments. This notebook is an introduction to these tools, and it will be either augmented with further notebooks, or updated to be more comprehensive, at a later date.

from __future__ import print_function
# Importing the drift module is essential
from pygsti.extras import drift
# Importing all of pyGSTi is optional, but often useful.
import pygsti

We now give a quick overview of the drift detection and characterization methods in the drift module. Further details are given later in this tutorial. As we demonstrate below, the analysis can be implemented with only two steps: loading the time-stamped data into a pyGSTi dataset, and passing it to drift.do_basic_drift_characterization(). Here we demonstrate this with time series GST data, on the $G_i$, $G_x$, $G_y$ gateset, generated from a simulation (the code required to run these simulations is not currently available in pyGSTi). In this simulation the $G_i$ gate has low-frequency drift, the $G_x$ gate has high-frequency drift, and the $G_y$ gate is drift-free (where "low" and "high" frequency are with respect to the sample rate). More details on the input data format are given later.

ds = pygsti.io.load_tddataset("tutorial_files/timeseries_data.txt")
# This takes 5 - 10 minutes, but can be sped up a lot with more user input (see below)
results_gst = drift.do_basic_drift_characterization(ds)

That's it! Everything has been calculated, and we can now look at the results. Is there any detectable drift? One useful result is printed below: a yes/no outcome for whether or not drift is detected.
This is calculated using multiple statistical tests on the data at a specified global confidence level (which defaults to 0.95 when no user-specified value is passed to drift.do_basic_drift_characterization). That is, here there is a probability of at most 0.05 that this function will report drift when there is none.

results_gst.any_drift_detect()

Statistical tests set at a global confidence level of: 0.95
Result: The 'no drift' hypothesis *is* rejected.

We can plot power spectra

These spectra should be flat - up to statistical fluctuations, due to finite-sampling noise, around the mean noise level - if there is no drift. There are a range of power spectra that we can plot, but the most useful for an overview of the data is the "global power spectrum", obtained from averaging the power spectra calculated from the individual data for each of the different gate sequences (again, details on exactly what this is are given later). This is plotted below. If there are peaks above the significance threshold, this power spectrum provides statistically significant evidence of drift.

results_gst.plot_power_spectrum()

We can extract the drift frequencies

If we have detected drift, we would probably like to know the frequencies of the drift. This information can be extracted from the results object as shown below. All frequencies will be in Hz if the timestamps have been provided in seconds (again, details later). Note that these are the frequencies in the drifting outcome probabilities -- they are not directly the frequencies of drift in, say, a Hamiltonian parameter. However, they are closely related to those frequencies.

print(results_gst.global_drift_frequencies)

[0.001 0.003 0.005 0.008 0.011 0.016 0.2  ]

Is there drift for a particular sequence?

There are individual power spectra for all of the sequences.
E.g., if we are interested in whether the $G_xG_i^{128}G_y$ sequence shows signs of drift, we can plot its power spectrum:

# The gatestring we are interested in
gstr = pygsti.objects.GateString(None,'Gx(Gi)^128Gy')
# We hand the gatestring to the plotting function
results_gst.plot_power_spectrum(sequence=gstr,loc='upper right')

Box-plots for GST data

If the data is from GST experiments, or anything with a GST-like structure of germs and fiducials, we can create a box-plot which shows the maximum power in the spectrum for each sequence. This maximum power is a reasonable proxy for comparing how "drifty" the data from the different sequences appears to be. But note that the maximum power should not be used to directly compare the level of drift in two different datasets with different parameters, particularly if the number of timestamps is different - because this maximum power will increase with more data, for a fixed level of drift. More on this at a later date. In the plot below we see that the amount of drift appears to be increasing with sequence length, as would be expected with gate drift. Without performing a detailed analysis, by eye it is clear that the $G_i$ gate is the most drifty, that the $G_x$ gate has some drift, and that the data looks consistent with a drift-free $G_y$ gate.

# This box constructs some GST objects, needed to create any sort of boxplot with GST data
from pygsti.construction import std1Q_XYI # The gateset used with the GST data we imported

# This manually specifies the germ and fiducial structure for the imported data.
fiducial_strs = ['{}','Gx','Gy','GxGx','GxGxGx','GyGyGy']
germ_strs = ['Gi','Gx','Gy','GxGy','GxGyGi','GxGiGy','GxGiGi','GyGiGi','GxGxGiGy','GxGyGyGi','GxGxGyGxGyGy']
log2maxL = 9 # log2 of the maximum germ power

# Below we use the maxlength, germ and fiducial lists to create the GST structures needed for box plots.
fiducials = [pygsti.objects.GateString(None,fs) for fs in fiducial_strs]
germs = [pygsti.objects.GateString(None,gs) for gs in germ_strs]
max_lengths = [2**i for i in range(0,log2maxL)]
gssList = pygsti.construction.make_lsgst_structs(std1Q_XYI.gates, fiducials, fiducials, germs, max_lengths)

# Create a workspace to show the boxplot
w = pygsti.report.Workspace()
w.init_notebook_mode(connected=False, autodisplay=True)
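The machinery in this notebook rests on two ideas: per-sequence power spectra, and a significance threshold corrected for the many tests being run at once. As a standalone illustration of both — this is a sketch under my own choices of normalization and a Bonferroni-style correction, not pyGSTi's actual implementation — a pure-Python DFT picks an injected drift frequency out of a probability trace:

```python
import math

def power_spectrum(xs):
    """Naive DFT power spectrum of a real time series (O(N^2); fine for a demo)."""
    n = len(xs)
    mean = sum(xs) / n
    centered = [x - mean for x in xs]  # remove the DC component
    powers = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(centered))
        powers.append((re * re + im * im) / n)
    return powers

def per_test_alpha(global_alpha, n_tests):
    """Bonferroni split: run each of n_tests at alpha/n so the family-wise
    false-positive rate stays at or below global_alpha."""
    return global_alpha / n_tests

# A probability trace oscillating 5 times over the record: the peak lands at k = 5.
n = 64
trace = [0.5 + 0.1 * math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
spec = power_spectrum(trace)
print(spec.index(max(spec)))  # -> 5
```

A drift-free trace gives a flat (here, identically zero after mean-removal) spectrum, which is why peaks above a corrected threshold are evidence of drift.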
https://nbviewer.jupyter.org/github/pyGSTio/pyGSTi/blob/v0.9.4/jupyter_notebooks/Tutorials/19%20Basic%20drift%20characterization.ipynb
Understanding JWTs

In this article I take a look at JSON Web Tokens (JWTs). JWTs seem to cause a lot of confusion for our customers at Nexmo. I was new to them when I started at Nexmo, so I decided to write up some notes that will hopefully clarify what they are, what they are used for, and how to create them. I will also present some Python code that creates a JWT and uses it in an API call.

What is a JWT used for?

A JWT is a special token that can be used in a variety of applications. The use case I look at here is using a JWT to authenticate a REST API call. I will use the Nexmo REST API to demonstrate this.

What is a JWT?

As the name implies, a JWT is essentially a piece of JSON code. It has three main parts:

- Header
- Payload
- Signature

These are joined by the '.' character in this format: xxxxx.yyyyy.zzzzz

Header

The header typically looks like this:

{
  "alg": "RS256",
  "typ": "JWT"
}

The header is Base64Url encoded to form the first part of the JWT. What exactly is Base64Url encoding? I will cover that in another article as it's quite interesting, but for now you don't need to worry about it, as the JWT library we will use does this for you. The main feature of the header is that it specifies the algorithm that is going to be used to sign the JWT. Nexmo requires the JWT to be signed using RS256, so that's what I specified here.

Payload

The payload contains what are called claims. The three types of claims are:

- Registered - these are predefined.
- Public - these can be user-defined.
- Private - these are used to share information between the parties concerned.

The payload is essentially a piece of JSON into which you put certain things. Those things are sometimes mandatory, and sometimes optional. For example, when using JWTs to authenticate a Nexmo API call, the application_id is a required (public) claim. It also requires the private claims iat and jti in the payload. Don't worry about those for now, I will get to them shortly.
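Since both the header above and the payload get the same Base64Url treatment, here is what that step looks like concretely. This is a stdlib sketch for illustration — the jwt library normally does this for you, and the helper name is mine:

```python
import base64
import json

def b64url(data: bytes) -> bytes:
    """Base64Url-encode without the trailing '=' padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b'=')

header = {"alg": "RS256", "typ": "JWT"}
segment = b64url(json.dumps(header, separators=(',', ':')).encode())
print(segment.decode())  # -> eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9
```

That output is the first of the three dot-separated segments of every RS256 token.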
Once your payload is built it is Base64Url encoded to form the second part of the JWT. The claims required for Nexmo API calls are as follows:

Claim          | Description                                                                 | Mandatory
application_id | The unique ID allocated to your application by Nexmo.                       | Yes
iat            | The Unix timestamp at UTC+0 indicating the moment the JWT was requested.    | Yes
jti            | The unique ID of the JWT.                                                   | Yes
nbf            | The Unix timestamp at UTC+0 indicating the moment the JWT became valid.     | No
exp            | The Unix timestamp at UTC+0 indicating the moment the JWT is no longer valid. Minimum value 30 seconds, maximum value 24 hours, default value 15 minutes from the time the JWT is generated. | No

The observant among you will now understand why I looked at Unix timestamps in the previous article! In our code we will generate the JWT dynamically each time we make a call, so we are fine with the default expiry time of 15 minutes and can ignore the exp claim (although I show how to set it in the complete example code). We can ignore the nbf claim as well, since we want our JWT to be valid immediately.

As iat is mandatory we will need to create it. It is essentially a Unix timestamp for "now", which can be created in Python using time.time(). You'll see how it's generated later.

The jti is a nonce or UUID to uniquely identify the JWT. There are no particular requirements on this as long as the ID is unique. In PAW, for example, a nonce is created of the form vvXjP5vxCgRliyo8ApQOyKqcotfQdaB5: a 32-character string of lower case letters, upper case letters and digits. I really should talk about PAW in another article. I've been using it for about eight months now and it's very good. Anyway, I digress. In this article I will generate the jti using Python's uuid4() function. You know all about UUIDs as I use them a lot on this site.

Signature

The third part is the trickiest.
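The iat and jti claims described above take only a couple of lines of standard-library Python. A sketch (the application ID is a placeholder, and the exp line just makes the default 15-minute lifetime explicit):

```python
import time
from uuid import uuid4

application_id = "your_application_id"  # placeholder, not a real Nexmo application ID
now = int(time.time())

payload = {
    'application_id': application_id,
    'iat': now,               # issued-at: a Unix timestamp for "now"
    'jti': str(uuid4()),      # a unique ID for this JWT
    'exp': now + 15 * 60,     # optional: makes the default 15-minute expiry explicit
}

print(payload)
```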
The signature is basically the first two parts concatenated with '.' and then signed using a secret and the algorithm specified in the header. In the case of the Nexmo API, the secret used is the private key for the application you are invoking API calls against. Authentication in Nexmo is done on a per-application basis, which is also why the application ID is contained in the payload. For the Nexmo API the JWT algorithm must be RS256.

Creating a JWT

Without further ado, here's the code snippet (I will list the complete code later) for creating a JWT that is suitable for authenticating the Nexmo API.

NOTE: You don't normally need to do this if you are using one of the Nexmo client libraries, as the library generates the JWTs for you automatically. I am using Nexmo simply as a way to verify that my JWT generation worked correctly.

So, I'm using the standard JWT library imported with import jwt. From this you can see it's fairly easy to build the payload:

application_id = "your_application_id"

payload = {
    'application_id': application_id,
    'iat': int(time.time()),
    'jti': str(uuid4()),
}

# Read in private key from store
filename = "path/to/private.key"
f = open(filename, 'r')
private_key = f.read()
f.close()

jwt = jwt.encode(payload, private_key, algorithm='RS256')

The private key generated for the Nexmo application is used to sign the JWT. Also, I have created the payload in the simplest possible way here, but in future I will show you how to write a proper Python method to create the payload more flexibly and robustly.

Using the JWT

Now that the JWT has been generated, we need to test it out. I do this by making a Nexmo API call that is authenticated via JWT. I used the requests library to do this. Instead of Python code you could also have used the JWT in a PAW request, or via a Curl request on the API.
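Nexmo requires RS256, which needs the application's private key and a crypto-capable library, but the signing mechanics themselves can be illustrated with standard-library-only HS256: the structure is identical, with a shared secret in place of the key pair. This is purely an illustration, not a substitute for the jwt.encode() call above; the helper names and the secret here are made up:

```python
import base64
import hashlib
import hmac
import json
import time
from uuid import uuid4

def b64url(data: bytes) -> str:
    # Base64Url with padding stripped, as used in JWTs
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_hs256_jwt(payload: dict, secret: bytes) -> str:
    # header.payload, then an HMAC-SHA256 signature over that string
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = (header + "." + body).encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return header + "." + body + "." + sig

token = make_hs256_jwt({"iat": int(time.time()), "jti": str(uuid4())},
                       b"not-a-real-secret")
print(token.count("."))  # 2: the xxxxx.yyyyy.zzzzz shape
```

With RS256 the only difference is that the HMAC step is replaced by an RSA signature made with the private key, which the server verifies with the matching public key.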
Here's the API call snippet:

auth = b'Bearer ' + jwt   # jwt.encode() returns bytes in PyJWT 1.x, hence the bytes literal
headers = {'Authorization': auth, 'Content-Type': 'application/json'}
r = requests.get('', headers=headers)
print(r)

You'll notice the JWT is contained in the header. This particular API call returns all the Nexmo calls I've made. I won't go into the Nexmo response in any detail here as I just needed to verify a 200 response back (200 is the "all good" status code).

Complete example code

For the sake of completeness (and possible future reference on my part) I list the complete test code here:

import jwt
import time
import json
import requests
from uuid import uuid4
from pprint import pprint

application_id = "APP_ID"
filename = "private.key"
expiry = 1*60*60  # JWT expires after one hour (default is 15 minutes)

payload = {
    'application_id': application_id,
    'iat': int(time.time()),
    'jti': str(uuid4()),
    'exp': int(time.time()) + expiry,
}

# Read in private key from store
f = open(filename, 'r')
private_key = f.read()
f.close()

jwt = jwt.encode(payload, private_key, algorithm='RS256')

# Then make a call - retrieve info for all calls
auth = b'Bearer ' + jwt
headers = {'Authorization': auth, 'Content-Type': 'application/json'}
r = requests.get('', headers=headers)

j = r.json()
calls = j['_embedded']['calls']
for call in calls:
    pprint(call)

I used the Requests library to make the HTTP call. I really love the simplicity of Requests. That is another library I will need to talk about more in a future article. In this case the JWT is passed in the header of the call.
If you were doing this API call using Curl it would look like:

curl '' -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpYXQiOiIxNTQxMDYwMjMwIiwiZXhwIjoxNTQxMDYwMjkwLCJqdGkiOiJZNThTRFJ7aTBvZUVwUm5MSlVFaDNnQ09qS292MVl5bSIsImFwcGxpY2F0aW9uX2lkIjoiOWVhZDIwYTktZjdkOC00YzRmLWExMTEtZTkwMmI1ODUyZmQ0In0.fOFocUiugpE-tzUBSFOQcIYqPiEnK8MBhOTn1QczylIm56ObVcrbLX-7xiHmz5lMXTkHL3Vf12Iq8NGY9RgNxLwBQ63ZwGk_UxKiYt7RfbJajPfram29ofByznGyGeaT960rqCbVu-wOLtQO-rAarpr2w_mAuqQulaQNNrXEq5xG6DD5_LQA_4R1s7haXwvtZt7QOSIiJ2RiCDl1a4qlvc_EkjHHeE7FRIQL1CnbYvnbpIjABCpKzcQCCKNBQT8NtBEDeDzkZ_Uea2GeDSwJ2GR_eSGR394vVIl93WXFZROBmz_UJWBJQ8GC5WQCyuo6EclsbTfpTVFYK2jh7yhBjw' -H 'Content-Type: application/json'

If you look carefully at the JWT you will see the xxx.yyy.zzz structure. If you look at the documentation for this particular API you'll see it needs the JSON application type, so that is added to the header too.

Summary

I hope this article has demystified JWTs a little for you. There's a lot we didn't cover, but hopefully you now have a good starting point.

Resources

- a great site for testing out your generated JWTs for validity.
https://tonys-notebook.com/articles/understanding-jwts.html
When exporting a large amount of frames is there a possibility to track the duration of this process?

- MishaHeesakkers last edited by gferreira

I'm currently setting up a Sublime Text 3 workflow and I was wondering if I could print out some feedback of when the build is started and when it is ended? The code below gets called when saveImage is done building. Is there a way to bind another callback function to saveImage?

os.system("open --background -a Preview " + EXPORT_PATH)

Thanks in advance!

this should give you the perceived execution time of a script:

import time

start = time.time()
# do something
end = time.time()
print(end - start)

there are plenty of options to have some sort of progress bar that gets updated for each frame: see

If this doesn't make sense I can narrow it down to a simpler example! good luck!

- MishaHeesakkers last edited by

@frederik makes sense! I can easily print out the progress of executing frames but I can't figure out how I can get some kind of progress feedback when the saveImage() function is called when exporting a larger .mp4 file. Any ideas?

oh, there is indeed no progress callback on saveImage(..)

Doing some googling on 'progress' and 'ffmpeg' does not result in a clear solution. DrawBot has a small wrapper around ffmpeg to generate movies: see
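For per-frame feedback before saveImage() runs, a plain counter print inside the frame loop is enough. This sketch fakes the frame loop with a range (no DrawBot calls), just to show the pattern:

```python
frames = range(1, 6)   # stand-in for the real frame loop
total = len(frames)
progress_log = []

for i, frame in enumerate(frames, start=1):
    # ... newPage() and drawing calls would go here ...
    msg = "frame %d/%d" % (i, total)
    progress_log.append(msg)
    print(msg)

print("done drawing")
```

This only covers the drawing phase; once saveImage() starts encoding the .mp4 it reports nothing back, as noted above.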
https://forum.drawbot.com/topic/134/when-exporting-a-large-amount-of-frames-is-there-a-possibility-to-track-the-duration-of-this-process
06 February 2009 16:09 [Source: ICIS news]

WASHINGTON (ICIS news)--Nearly 600,000 US workers lost their jobs in January, the Labor Department said on Friday, sending unemployment to 7.6% with 11.6m American workers jobless, including many in the plastics and chemicals industries.

The department said that a total of 598,000 jobs were lost in January, kicking the unemployment rate to 7.6% from the 7.2% rate seen in December's numbers.

The department said that job losses in January were large and widespread across nearly all major industries, but construction and manufacturing were particularly hard-hit among non-farm payroll sectors.

In construction, which has been in decline due to the nation's long-running housing crisis, unemployment was at 18.2% in January, the department said, up sharply from the 11% rate seen in that industry in January 2008.

Job losses in manufacturing have accelerated, with unemployment in that broad sector at 10.9% in January compared with 5.1% in January a year ago. The rate of job losses in manufacturing is even more severe than in construction, according to the department's figures. From January 2008 to January 2009, layoffs in the construction sector increased by 65% while job cuts in manufacturing increased by nearly 114%.

In January 2008 the plastics industry employed 750,000 workers, but that figure has fallen to slightly more than 680,000 jobs in January this year, a decline of 9.3%.

In chemicals, 2,400 jobs were lost in January, leaving the industry's workforce at 835,300, down 2.5% from the January 2008 figure of 857,200 workers.

Christina Romer, chairwoman of the White House Council of Economic Advisers, noted on Friday that the 3.6m jobs lost since the start of the recession in December 2007 constitute "the largest 13-month job loss since payroll employment records began in 1939". She said January's job losses are "the latest evidence
http://www.icis.com/Articles/2009/02/06/9190984/us-chemical-plastics-job-losses-among-600000-in-january.html
Red Hat Bugzilla – Full Text Bug Listing

python-daemon-1367 [details] root.log (root.log for i386)
Created attachment 509368 [details] build.log (build.log for i386)
Created attachment 509369 [details] mock.log (mock.log for i386)
Created attachment 509370 [details] root.log (root.log for x86_64)
Created attachment 509371 [details] build.log (build.log for x86_64)
Created attachment 509372 [details] mock.log (mock.log for x86_64)

Tests in minimock are failing:

ERROR: DaemonContext component should have specified stderr file.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/python-daemon-1.5.2/test/test_runner.py", line 207, in setUp
    set_runner_scenario(self, 'simple')
  File "/builddir/build/BUILD/python-daemon-1.5.2/test/test_runner.py", line 97, in set_runner_scenario
    testcase, testcase.scenario['pidlockfile_scenario_name'])
  File "/builddir/build/BUILD/python-daemon-1.5.2/test/test_runner.py", line 108, in set_pidlockfile_scenario
    testcase.lockfile_class_name)
  File "/builddir/build/BUILD/python-daemon-1.5.2/test/test_pidlockfile.py", line 296, in setup_lockfile_method_mocks
    tracker=testcase.mock_tracker)
  File "/usr/lib/python2.7/site-packages/minimock.py", line 200, in mock
    original = tmp.__dict__[attrs[-1]]
KeyError: 'read_pid'

======================================================================
ERROR: DaemonContext component should have specified stdin file.

Updated rawhide with 1.6 version

python-daemon-1.6-1.fc15 has been submitted as an update for Fedora 15.

(In reply to comment #9)
> python-daemon-1.6-1.fc15 has been submitted as an update for Fedora 15.

This is broken in Fedora 15:

$ python
Python 2.7.1 (r271:86832, Apr 12 2011, 16:16:18)
[GCC 4.6.0 20110331 (Red Hat 4.6.0-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from daemon import pidfile
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/daemon/pidfile.py", line 18, in <module>
    from lockfile.pidlockfile import PIDLockFile
ImportError: No module named pidlockfile

If you plan to bump python-lockfile to 0.9 in order to fix this, please review bug #612403.

I did a local build of python-lockfile 0.9.1. The import test as mentioned above worked well. Can we get python-lockfile updated now?

When the updated lockfile still works with minimock, then it's OK. lockfile 0.9 had a huge ABI break, and minimock was unusable. If that has changed in 0.9.1 (and an update of minimock), it's OK (but I can't check that right now).

All tests of python-daemon 1.6 pass again with lockfile-0.9.1, so I think it's OK to update it. Can the test suite of python-daemon be run again when building, please?

python-daemon-1.6-1.fc15.noarch.rpm from the FC15 testing repo breaks Orbited (orbited-0.7.10-7.fc15.noarch) due to the bug described above (error when importing pidlockfile), which then crashes when started as a system service. Rolling back to python-daemon-1.5.2-3.fc15.noarch.rpm and python-lockfile-0.8-2.fc15.noarch.rpm fixes it.

This package appears to be building successfully for all current branches:
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=716183
The time complexity of this algorithm is O(n^2). Here is the source code of the C program to sort integers using the Selection Sort technique. The C program is successfully compiled and run on a Linux system. The program output is also shown below.

/*
 * C Program to Implement Selection Sort
 */
#include <stdio.h>

// function to swap two variables
// (defined before selectionSort so the call below is properly declared)
void swap(int *a, int *b)
{
    int temp;

    temp = *a;
    *a = *b;
    *b = temp;
}

void selectionSort(int arr[], int size)
{
    int i, j;

    for (i = 0; i < size; i++) {
        for (j = i; j < size; j++) {
            if (arr[i] > arr[j])
                swap(&arr[i], &arr[j]);
        }
    }
}

int main()
{
    int array[10], i, size;

    printf("How many numbers you want to sort: ");
    scanf("%d", &size);
    printf("\nEnter %d numbers : ", size);
    for (i = 0; i < size; i++)
        scanf("%d", &array[i]);

    selectionSort(array, size);

    printf("\nSorted array is :");
    for (i = 0; i < size; i++)
        printf(" %d ", array[i]);

    return 0;
}

$ gcc selectionsort.c -o selectionsort
$ ./selectionsort
How many numbers you want to sort: 5

Enter 5 numbers : 34 13 204 355 333

Sorted array is : 13  34  204  333  355

Sanfoundry Global Education & Learning Series – 1000 C Programs. Here's the list of Best Reference Books in C Programming, Data Structures and Algorithms.
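The C version above swaps eagerly inside the inner loop. The textbook selection sort instead remembers the index of the minimum and swaps once per outer pass, sketched here in Python for brevity:

```python
def selection_sort(arr):
    # Classic variant: find the minimum of the unsorted tail,
    # then do a single swap per outer pass.
    n = len(arr)
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([34, 13, 204, 355, 333]))  # [13, 34, 204, 333, 355]
```

Both variants make O(n^2) comparisons, but this one performs at most n - 1 swaps, which matters when element moves are expensive.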
http://www.sanfoundry.com/c-program-implement-selection-sort/
Problem downloading DSON importer (solved)

edited October 2012 in Poser Discussion

I can not download the DSON Importer for Poser. It says "The maximum quantity allowed for purchase is 1." I added it to the cart BEFORE I logged in, then I logged in, but it removed the DSON Importer from the shopping cart. When I attempt to add the DSON Importer to the cart again, it gives me the above error. Yet I have not downloaded it at all! Does someone know how to fix this? I've submitted a support request as well, but I don't like my chances of getting a quick response. Thanks.

Make sure you're logged in, then add another item to the cart. Then they should both show up, and you can remove the second item.

I got it downloaded and installed it, but when I try to open (any) of the included free Genesis figures and props in Poser 9 I get this:

Traceback (most recent call last):
  File "C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\libraries\Character\figurendaz\DAZ People\Genesis.py", line 1, in <module>
    import dson.dzdsonimporter
ImportError: No module named dson.dzdsonimporter

Installing content to Program Files or Program Files (x86) will give you problems in Win7 or Vista. Also you should be installing to the folder containing the Runtime folder, not to a folder inside Runtime.

yes that worked thanks

Thanks for the answer, but which of the downloadable products should go into the Poser 9 folder outside of the Runtime folder?

So it did not work like that. ZigZag321 (Renderosity user) wrote me how to do it, and now it works. BUT, next issue: one has it all working in Poser 9, but now we have a figure, Genesis, and we have downloaded the starter package. We change the length of legs and size of body but the clothes and the hair do not follow, even when conformed. How to make it all follow the Genesis figure?

Hi, I am having the same problem.

1.- Poser 9 installed.
2.- Dson importer installed in Poser 9 directory
3.- Both Genesis starter Ess & EssPoserCF installed in "My Library"
4.- Added "Genesis starter Ess" Runtime directory
4.- When I try to load any Genesis figure it keeps on telling me:

Traceback (most recent call last):
  File "C:\Users\Rafa\Documents\Genesis Starter Essentials\Runtime\libraries\Character\DAZ People\Genesis.py", line 1, in <module>
    import dson.dzdsonimporter
ImportError: No module named dson.dzdsonimporter

I followed the installation instructions of the product as written in the manual, even with the same names and places!!!

Efron_24, can you tell me how you solved it! Thanks in advance, Rafa

Poser 9 is 32-bit only. So you did install the 32-bit version of the DSON Importer, correct?

Installing content to Program Files or Program Files (x86) will give you problems in Win7 or Vista. Also you should be installing to the folder containing the Runtime folder, not to a folder inside Runtime.

I'm having the same problem. But, at this point, I've had so much trouble getting DSON to work at all that I've installed and uninstalled it in several places, and I have no idea where it's looking for the files, though I told it to look for them in My Documents. *sigh*

TheWheelMan - Yes, I was aware of that, I did install the 32-bit version for Poser 9 but it's not working :(

Hi all, Could ZigZag321 from Renderosity let us know how to solve the problem listed above? It would be greatly appreciated and might save some mental health. :-) Thanks for any help.
http://www.daz3d.com/forums/viewreply/156225/
Note: This document is work in progress.

The coding rules aim to guide Qt Creator developers, to help them write understandable and maintainable code, and to minimize confusion and surprises. As usual, rules are not set in stone. If you have a good reason to break one, do so. But first make sure that at least some other developers agree with you.

To contribute to the main Qt Creator source, you should comply with the following rules:

To submit code to Qt Creator, you must understand the tools and mechanics as well as the philosophy behind Qt development. For more information about how to set up the development environment for working on Qt Creator and how to submit code and documentation for inclusion, see Guidelines for Contributions to the Qt Project.

The following list describes how the releases are numbered and defines binary compatibility and source code compatibility between releases:

We do not currently guarantee API nor ABI (application binary interface) compatibility between major releases and minor releases. However, we try to preserve backward and forward binary compatibility and forward and backward source code compatibility in patch releases, so:

Note: This is not yet mandatory. For more information on binary compatibility, see Binary Compatibility Issues With C++.

Follow the guidelines for code constructs to make the code faster and clearer. In addition, the guidelines allow you to take advantage of the strong type checking in C++.

++T;
--U;
-NOT-
T++;
U--;

Container::iterator end = large.end();
for (Container::iterator it = large.begin(); it != end; ++it) {
    ...;
}
-NOT-
for (Container::iterator it = large.begin(); it != large.end(); ++it) {
    ...;
}

foreach (QWidget *widget, container)
    doSomething(widget);
-NOT-
Container::iterator end = container.end();
for (Container::iterator it = container.begin(); it != end; ++it)
    doSomething(*it);

Make the loop variable const, if possible.
This might prevent unnecessary detaching of shared data:

foreach (const QString &name, someListOfNames)
    doSomething(name);
-NOT-
foreach (QString name, someListOfNames)
    doSomething(name);

Use camel case in identifiers. Capitalize the first word in an identifier as follows:

For pointers or references, always use a single space before an asterisk (*) or an ampersand (&), but never after.

Avoid C-style casts when possible:

char *blockOfMemory = (char *)malloc(data.size());
char *blockOfMemory = reinterpret_cast<char *>(malloc(data.size()));
-NOT-
char* blockOfMemory = (char* ) malloc(data.size());

Of course, in this particular case, using new might be an even better option.

Do not use spaces between operator names and function names. The equation marks (==) are a part of the function name, and therefore, spaces make the declaration look like an expression:

operator==(type)
-NOT-
operator == (type)

Do not use spaces between function names and parentheses:

void mangle()
-NOT-
void mangle ()

Always use a single space after a keyword, and before a curly brace:

if (foo) {
}
-NOT-
if(foo){
}

As a base rule, place the left curly brace on the same line as the start of the statement:

if (codec) {
}
-NOT-
if (codec)
{
}

Exception: Function implementations and class declarations always have the left brace in the beginning of a line:

Note: This could be re-written as:

if (address.isEmpty())
    return false;

if (!isValid())
    return false;

Use parentheses to group expressions:

if ((a && b) || c)
-NOT-
if (a && b || c)

(a + b) & c
-NOT-
a + b & c

if (longExpression
    || otherLongExpression
    || otherOtherLongExpression) {
}
-NOT-
if (longExpression || otherLongExpression || otherOtherLongExpression) {
}

const char aString[] = "Hello";
QString a = "Joe";
QString b = "Foo";
-NOT-
QString a = "Joe", b = "Foo";

Note: QString a = "Joe" formally calls a copy constructor on a temporary that is constructed from a string literal.
Therefore, it is potentially more expensive than direct construction by QString a("Joe"). However, the compiler is allowed to elide the copy (even if this has side effects), and modern compilers typically do so. Given these equal costs, Qt Creator code favours the '=' idiom, as it is in line with traditional C-style initialization, it cannot be mistaken for a function declaration, and it reduces the level of nested parentheses in more complex initializations.

int height;
int width;
char *nameOfThis;
char *nameOfThat;
-NOT-
int a, b;
char *c, *d;

namespace MyPlugin {

void someFunction()
{
    ...
}

} // namespace MyPlugin

namespace MyPlugin { class MyClass; }

Read Qt In Namespace and keep in mind that all of Qt Creator is namespace-aware code. The namespacing policy within Qt Creator is as follows:

Qt Creator API expects file names in portable format, that is, with slashes (/) instead of backslashes (\) even on Windows. To pass a file name from the user to the API, convert it with QDir::fromNativeSeparators first. To present a file name to the user, convert it back to native format with QDir::toNativeSeparators. When comparing file names, consider using FileManager::fixFileName, which makes sure that paths are clean and absolute, and also takes Windows case-insensitivity into account (even though it is an expensive operation).

A plugin extension point is an interface that is provided by one plugin to be implemented by others. The plugin then retrieves all implementations of the interface and uses them. That is, they extend the functionality of the plugin. Typically, the implementations of the interface are put into the global object pool during plugin initialization, and the plugin retrieves them from the object pool at the end of plugin initialization.

For example, the Find plugin provides the FindFilter interface for other plugins to implement. With the FindFilter interface, additional search scopes can be added, that appear in the Advanced Search dialog.
The Find plugin retrieves all FindFilter implementations from the global object pool and presents them in the dialog. The plugin forwards the actual search request to the correct FindFilter implementation, which then performs the search.

You can add objects to the global object pool via ExtensionSystem::PluginManager::addObject(), and retrieve objects of a specific type again via ExtensionSystem::PluginManager::getObjects(). This should mostly be used for implementations of Plugin Extension Points.

Note: Do not put a singleton into the pool, and do not retrieve it from there. Use the singleton pattern instead.

Hint: Use the compile autotest to see whether a C++ feature is supported by all compilers in the test farm.

\xnn (where nn is hexadecimal). For example:

QString s = QString::fromUtf8("\213\005");

Using a plain zero (0) for null pointer constants is always correct and least effort to type:

void *p = 0;
-NOT-
void *p = NULL;
-NOT-
void *p = '\0';
-NOT-
void *p = 42 - 7 * 6;

Note: As an exception, imported third party code as well as code interfacing the native APIs (src/support/os_*) can use NULL.

If you create a new file, the top of the file should include a header comment equal to the one found in other source files of Qt Creator.

QString s;
// crash at runtime - QString vs. const char *
return condition ? s : "nothing";

Whenever a pointer is cast such that the required alignment of the target is increased, the resulting code might crash at runtime on some architectures.
For example, casting a char pointer to an int pointer can crash if the data is not aligned at integer boundaries. Use a union to force alignment:

union AlignHelper {
    char c;
    int i;
};

Even if the execution time of the initializer is defined for shared libraries, you will get into trouble when moving that code into a plugin or if the library is compiled statically:

// global scope
-NOT-
// Default constructor needs to be run to initialize x:
static const QString x;
-NOT-
// Constructor that takes a const char * has to be run:
static const QString y = "Hello";
-NOT-
QString z;
-NOT-
// Call time of foo() undefined, might not be called at all:
static const int i = foo();

Things you can do:

// global scope
// No constructor must be run, x set at compile time:
static const char x[] = "someText";
// y will be set at compile time:
static int y = 7;
// Will be initialized statically, no code being run.
static MyStruct s = {1, 2, 3};
// Pointers to objects are OK, no code needed to be run to
// initialize ptr:
static QString *ptr = 0;
// Use Q_GLOBAL_STATIC to create static global objects instead:
Q_GLOBAL_STATIC(QString, s)

void foo()
{
    s()->append("moo");
}

Note: Static objects in function scope are no problem. The constructor will be run the first time the function is entered. The code is not reentrant, though.

// Condition is always true on platforms where the
// default is unsigned:
if (c >= 0) { ... }

for (Container::const_iterator it = c.constBegin(); it != c.constEnd(); ++it)
-NOT-
for (Container::const_iterator it = c.begin(); it != c.end(); ++it)

Inheriting from template or tool classes has the following potential pitfalls:

For example, library A has

class Q_EXPORT X: public QList<QVariant> {};

and library B has

class Q_EXPORT Y: public QList<QVariant> {};

Suddenly, QList symbols are exported from two libraries, which results in a clash.

Our public header files have to survive the strict settings of some of our users.
All installed headers have to follow these rules:

class B: public A
{
#ifdef Q_NO_USING_KEYWORD
    inline int val() { return A::val(); }
#else
    using A::val;
#endif
};

#if defined(Foo) && Foo == 0
-NOT-
#if Foo == 0
-NOT-
#if Foo - 0 == 0

#if defined(Foo)
-NOT-
#if defined Foo

We use the "m_" prefix convention, except for public struct members (typically in *Private classes and the very rare cases of really public structures). The d and q pointers are exempt from the "m_" rule.

The d pointers ("Pimpls") are named "d", not "m_d". The type of the d pointer in class Foo is FooPrivate *, where FooPrivate is declared in the same namespace as Foo, or if Foo is exported, in the corresponding {Internal} namespace. If needed (for example, when the private object needs to emit signals of the proper class), FooPrivate can be a friend of Foo.

If the private class needs a backreference to the real class, the pointer is named q, and its type is Foo *. (Same convention as in Qt: "q" looks like an inverted "d".)
Do not use smart pointers to guard the d pointer, as it imposes a compile and link time overhead and creates fatter object code with more symbols, leading, for instance, to slowed down debugger startup:

############### bar.h

#include <QScopedPointer>
//#include <memory>

struct BarPrivate;

struct Bar
{
    Bar();
    ~Bar();
    int value() const;
    QScopedPointer<BarPrivate> d;
    //std::auto_ptr<BarPrivate> d;
};

############### bar.cpp

#include "bar.h"

struct BarPrivate
{
    BarPrivate() : i(23) {}
    int i;
};

Bar::Bar() : d(new BarPrivate) {}

Bar::~Bar() {}

int Bar::value() const { return d->i; }

############### baruser.cpp

#include "bar.h"

int barUser()
{
    Bar b;
    return b.value();
}

############### baz.h

struct BazPrivate;

struct Baz
{
    Baz();
    ~Baz();
    int value() const;
    BazPrivate *d;
};

############### baz.cpp

#include "baz.h"

struct BazPrivate
{
    BazPrivate() : i(23) {}
    int i;
};

Baz::Baz() : d(new BazPrivate) {}

Baz::~Baz() { delete d; }

int Baz::value() const { return d->i; }

############### bazuser.cpp

#include "baz.h"

int bazUser()
{
    Baz b;
    return b.value();
}

############### main.cpp

int barUser();
int bazUser();

int main()
{
    return barUser() + bazUser();
}

Results:

Object file size:
14428 bar.o
 4744 baz.o
 8508 baruser.o
 2952 bazuser.o

Symbols in bar.o:
00000000 W _ZN3Foo10BarPrivateC1Ev
00000036 T _ZN3Foo3BarC1Ev
00000000 T _ZN3Foo3BarC2Ev
00000080 T _ZN3Foo3BarD1Ev
0000006c T _ZN3Foo3BarD2Ev
00000000 W _ZN14QScopedPointerIN3Foo10BarPrivateENS_21QScopedPointerDeleterIS2_EEEC1EPS2_
00000000 W _ZN14QScopedPointerIN3Foo10BarPrivateENS_21QScopedPointerDeleterIS2_EEED1Ev
00000000 W _ZN21QScopedPointerDeleterIN3Foo10BarPrivateEE7cleanupEPS2_
00000000 W _ZN7qt_noopEv
         U _ZN9qt_assertEPKcS1_i
00000094 T _ZNK3Foo3Bar5valueEv
00000000 W _ZNK14QScopedPointerIN3Foo10BarPrivateENS_21QScopedPointerDeleterIS2_EEEptEv
         U _ZdlPv
         U _Znwj
         U __gxx_personality_v0

Symbols in baz.o:
00000000 W _ZN3Foo10BazPrivateC1Ev
0000002c T _ZN3Foo3BazC1Ev
00000000 T _ZN3Foo3BazC2Ev
0000006e T _ZN3Foo3BazD1Ev
00000058 T _ZN3Foo3BazD2Ev
00000084 T _ZNK3Foo3Baz5valueEv
         U _ZdlPv
         U _Znwj
         U __gxx_personality_v0

The documentation is generated from source and header files. You document for the other developers, not for yourself. In the header files, document interfaces: that is, what the function does, not the implementation. In the .cpp files, you can document the implementation if the implementation is not obvious.
http://doc.qt.digia.com/qtcreator-extending/coding-style.html
Java's image I/O API (javax.imageio) helps programmers write programs that handle different types of images. In this article we will see how to read a JPEG image file. We will use the javax.imageio.ImageIO class for reading the image file data into a java.awt.Image object. The ImageIO class provides the method ImageIO.read(file), which reads the image data into an Image object. It also provides many static methods that can be used for locating ImageReaders and ImageWriters, and for performing simple encoding and decoding.

Here is the video tutorial of "Image processing in Java Example program":

Here are the important methods of the javax.imageio.ImageIO class used for reading images:

static BufferedImage read(File input)
    Reads the data into a BufferedImage and returns it. It takes a File as input and automatically chooses one of the registered ImageReaders.

static BufferedImage read(ImageInputStream stream)
    Takes an ImageInputStream as parameter and reads the data using one of the registered ImageReaders. It returns a BufferedImage.

static BufferedImage read(InputStream input)
    Reads the image data from an InputStream.

static BufferedImage read(URL input)
    If you want to read the image from a URL, use this method.
Here is the example code for reading image data from a file:

import java.awt.Image;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ReadImageExampleFromFile {
    public static void main(String[] args) throws IOException {
        // Get the file to read
        File file = new File("DeepakKumar.jpg");
        // Read it using the read method of the ImageIO class
        Image image = ImageIO.read(file);
        // Get width
        int width = image.getWidth(null);
        // Get height
        int height = image.getHeight(null);
        // Print the data
        System.out.println("Width:" + width + " Height:" + height);
    }
}

In the above example you can see that the program loads the image and then prints its width and height. After loading the image into the Image object you can perform operations on the image.

Here is an example of reading the image file from a URL:

import java.awt.Image;
import java.io.IOException;
import java.net.URL;
import javax.imageio.ImageIO;

public class ReadImageExampleFromURL {
    public static void main(String[] args) throws IOException {
        // Get the url of the image to read
        URL url = new URL("");
        // Read it using the read method of the ImageIO class
        Image image = ImageIO.read(url);
        // Get width
        int width = image.getWidth(null);
        // Get height
        int height = image.getHeight(null);
        // Print the data
        System.out.println("Width:" + width + " Height:" + height);
    }
}

We should construct the URL of the image and then pass it to the ImageIO.read(url) method. Check more tutorials at Image Processing Tutorials in Java Programming Language.
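To round out the list of overloads above, here is a sketch of the ImageIO.read(InputStream) variant that needs no file on disk: it encodes a generated image into a byte buffer with ImageIO.write() and reads it back through an InputStream. The class and method names are my own, not from the article:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ReadImageFromStream {

    // Encodes a blank RGB image as PNG, then decodes it again via the
    // InputStream overload of ImageIO.read() and returns {width, height}.
    static int[] roundTripDimensions(int width, int height) throws IOException {
        BufferedImage source = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        ImageIO.write(source, "png", buffer);

        BufferedImage decoded = ImageIO.read(new ByteArrayInputStream(buffer.toByteArray()));
        return new int[] { decoded.getWidth(), decoded.getHeight() };
    }

    public static void main(String[] args) throws IOException {
        int[] dims = roundTripDimensions(20, 10);
        System.out.println("Width:" + dims[0] + " Height:" + dims[1]);
    }
}
```

The same pattern works for any stream source, for example a socket or a servlet request body.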
http://www.roseindia.net/java/imageprocessing/How-to-read-content-of-JPEG-file-in-Java.shtml
I wrote this to get a feel for how values are stored in the array. When I run the program I get this:

1 is 1075160376
2 is 1
3 is 2
4 is 3
5 is 4

instead of:

1 is 1
2 is 2
3 is 3
4 is 4
5 is 5

How do I solve this problem?

Code:

#include <stdio.h>

int main(void)
{
    int array[5];
    int a, num;

    for(a = 1; a <= 5; a++)
    {
        printf("Enter a number: ");
        scanf("%d", &num);
        array[num] = num;
    }

    printf("\n1 is %d", array[0]);
    printf("\n2 is %d", array[1]);
    printf("\n3 is %d", array[2]);
    printf("\n4 is %d", array[3]);
    printf("\n5 is %d", array[4]);

    return 0;
}
https://cboard.cprogramming.com/c-programming/59573-stored-values-elements-array-help-printable-thread.html
Thanks for the reply. I am thinking of some "similar" variable that exists on all Unix-like systems (or most of them).

#ifndef PATH_MAX
# ifdef _POSIX_PATH_MAX
#  define PATH_MAX _POSIX_PATH_MAX
# endif
#endif

Try removing the echo flag:

#include <termios.h>
...
struct termios t;
tcgetattr(mfd, &t);
t.c_lflag &= ~ECHO;
tcsetattr(mfd, TCSANOW, &t);

That should remove echo when writing to the master device.

Even over a secure channel. But as you noted, if you send other, more sensitive info it's not worth the trouble. Basically, the less serious and sensitive the server, the more important it gets to protect the user's password.

Oh ok, I did not know you could do that! I thought I could only use sed. It works. Thanks!

Oh, and for the best network efficiency, you should leave Nagle enabled and/or set TCP_CORK during the fread()+sendall() loop, as well... That will enable packing all the file data into as few network packets as possible...

Thanks for the help, but the screen appears before the login option, and it is appearing in one of the terminal clients. I will try to attach a screenshot of the screen. I have not done any kind of BIOS update or firmware update or software update; I am still using the NCT 2000-xp software only. So please help me with how I can eliminate this error.

Thanks a lot. getdirentries() seems better than syscall(..).

A wonderful article… In my life, I have never seen a man be so selfless in helping others around him to get along and get working.

When you need to deal with lines of arbitrary length, may I suggest using getline(), if you're using glibc... That will dynamically allocate and resize your input buffer as large as it needs to be to hold each line...
http://developerweb.net/extern.php?action=feed&fid=63&type=atom
Hello y'all! It's fantastic to be part of the JavaHeads world. I believe I'm gonna learn a lot from y'all. Thanks for having me. I'm out... Later!!!
from SYBA_THOT

Welcome to Java Programming Forums, you look like quite an expressive person.
Chris

Hello and welcome!
// Json

Hello SYBA_THOT, welcome to the forums!! I hope you're wearing old clothes because you're about to get covered in Java.
Please use [highlight=Java] code [/highlight] tags when posting your code.
Forum Tip: Add to people's reputation by clicking the button on their useful posts.
Looking for a Java job? Visit - Java Programming Careers

Hi JavaPF! From head to toe I'm all old. Awaiting Java baptism. By the way, what is the program structure of Java like? I.e. the basic way to write Java source code. Word!
SYBA_THOT

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello world");
    }
}

Chris
http://www.javaprogrammingforums.com/member-introductions/1081-java-me-up-ladies-gentlemen.html
skel - skeleton man page for section 9 entries

#include <linux/linux.h>

Describe the function(s) and its parameters. This section should not be considered an introduction to kernel programming, just an English text description of the function at hand. It is OK to presume some basic knowledge of driver programming.

Describe the return values. Enumerate all the distinct values and all the ranges.

List kernel versions, and if restricted to certain architectures, say so.

man(1), man(7), intro(9)

Also list some source files for the kernel that implement the functions of the page.

Who are you?

Describe any misfeatures or surprises that the use of these functions may lead to. They may not be errors, just unfortunate side effects.
http://www.linuxsavvy.com/resources/linux/man/man9/skel.9.html
A Fix for CToolBar with IE4

The new version of COMCTL32.DLL that is shipped with the release version of IE4 (4.71.1712.3) has a problem when combined with the MFC CToolBar. When an MFC application is run under IE4, and a toolbar with separators (groups) is docked vertically, or floated with more than one row, the bottom button/row is clipped.

The reason for this is that CToolBar was written with the assumption that horizontal separators are 8 pixels wide and vertical separators are (8*2/3) = 5 pixels high. This was the case until IE4 came along. Now the vertical separators are 8 pixels high instead of 5, so CToolBar does not allocate enough room for the toolbar buttons.

The code below assumes that you have the code I published earlier that detects the version of COMCTL32 currently loaded. It then makes the fixes required based on which version is running. If it is the IE4 version, it uses 8 pixels for the height; otherwise it uses 5 pixels as before (8*2/3).

Step 1: To make the fix, copy the following functions verbatim from "BARTOOL.CPP" in your MFC source tree, and change all occurrences of "CToolBar" to "CMyToolBar":

void CMyToolBar::_GetButton(int nIndex, TBBUTTON* pButton) const { ... }
void CMyToolBar::_SetButton(int nIndex, TBBUTTON* pButton) { ... }

#ifdef _MAC
#define CX_OVERLAP 1
#else
#define CX_OVERLAP 0
#endif

void CMyToolBar::SizeToolBar(TBBUTTON* pData, int nCount, int nLength, BOOL bVert) { ... }

struct _AFX_CONTROLPOS {
    int nIndex, nID;
    CRect rectOldPos;
};

CSize CMyToolBar::CalcLayout(DWORD dwMode, int nLength) { ... }
CSize CMyToolBar::CalcFixedLayout(BOOL bStretch, BOOL bHorz) { ... }
CSize CMyToolBar::CalcDynamicLayout(int nLength, DWORD dwMode) { ... }

Step 2: Copy and MODIFY the CalcSize routine from "BARTOOL.CPP" as shown.
My changes and additions are commented with // RO

CSize CMyToolBar::CalcSize(TBBUTTON* pData, int nCount)
{
    // Check for COMCTL32 version number                        // RO
    bool isfullheightsep = false;                               // RO
    if (QWinApp::ComCtl32Version() >= COMCTL32_471) {           // RO
        isfullheightsep = true;                                 // RO
    }                                                           // RO

    ASSERT(pData != NULL && nCount > 0);
    CPoint cur(0,0);
    CSize sizeResult(0,0);
    for (int i = 0; i < nCount; i++)
    {
        if (pData[i].fsState & TBSTATE_HIDDEN)
            continue;
        int iBitmapx = pData[i].iBitmap;                        // RO
        int iBitmapy = iBitmapx;                                // RO
        if (!isfullheightsep) iBitmapy = iBitmapy * 2 / 3;      // RO
        if (pData[i].fsStyle & TBSTYLE_SEP)
        {
            // A separator represents either a height or width
            if (pData[i].fsState & TBSTATE_WRAP)
                sizeResult.cy = max(cur.y + m_sizeButton.cy + iBitmapy, sizeResult.cy); // RO
                // RO was: sizeResult.cy = max(cur.y + m_sizeButton.cy + pData[i].iBitmap * 2 / 3, sizeResult.cy);
            else
                sizeResult.cx = max(cur.x + iBitmapx, sizeResult.cx); // RO
                // RO was: sizeResult.cx = max(cur.x + pData[i].iBitmap, sizeResult.cx);
        }
        else
        {
            sizeResult.cx = max(cur.x + m_sizeButton.cx, sizeResult.cx);
            sizeResult.cy = max(cur.y + m_sizeButton.cy, sizeResult.cy);
        }
        if (pData[i].fsStyle & TBSTYLE_SEP)
            cur.x += pData[i].iBitmap;
        else
            cur.x += m_sizeButton.cx - CX_OVERLAP;
        if (pData[i].fsState & TBSTATE_WRAP)
        {
            cur.x = 0;
            cur.y += m_sizeButton.cy;
            if (pData[i].fsStyle & TBSTYLE_SEP)
                cur.y += iBitmapy;                              // RO
                // RO was: cur.y += pData[i].iBitmap * 2 / 3;
        }
    }
    return sizeResult;
}

In summary, I have added 5 lines at the top to detect the version, added 3 lines before "if (pData[i].fsStyle & TBSTYLE_SEP)", replaced two lines in the following "if", and replaced one line near the end.

Step 3: You need to define a CToolBar-derived CMyToolBar (or CFlatToolBar-derived, or make the changes to CFlatToolBar anyway, which is what I did). In that class, you will need to copy the declarations for the functions we have copied - don't forget to make CalcDynamicLayout and CalcFixedLayout VIRTUAL functions!!

That should do it.
Now your toolbars will work under all versions of COMCTL32.DLL - well, at least until MS releases a new version :-)

Does anyone know how to get rid of the default spacing between buttons?
Posted by Legacy on 07/25/2001. Originally posted by: Tony Karam

Already fixed by MFC but still buggy
Posted by Legacy on 06/09/1999. Originally posted by: Angela Rösch
I spent a lot of time with the pixel errors around my toolbars I created with VC++ 6.0 and IE4. First I was lucky to find this page, which seemed to be the solution for my pixel problem. But when I started to implement the bugfix, I saw that the code had already been extended in that way! But what about my still existing error?!? So I visited CodeGuru again and found a second entry which *really* fixed my problem: on the "Toolbar Open FAQ" list, the entry named "Fixing Painting Problem With Flat Toolbar". I hope this tip protects some other users from wasting time.
http://www.codeguru.com/cpp/controls/toolbar/flattoolbar/article.php/c2525/A-Fix-for-CToolBar-with-IE4.htm
As is known, XML can be used as the middle layer of data transformation. But I have a question: how do you keep the information secret and how do you guarantee its security? Thanks

"maggie" <wxpinetree@21cn.com> wrote:
>As it is knowed that xml can be used as the middle layer of data transform.
>But i have a question that how to keep the information secret and how to
>guarantee it's security?
>Thanks

The .NET framework provides a namespace called System.Security.Cryptography.Xml, which implements the closest thing to the XML Signatures specification known to man. This can be used to verify that your XML has not changed, and it is a standard way to do this. Of course, this does nothing to ensure that people can't look at your XML; it just prevents them from being able to change it without your knowledge.

The XML Encryption standard is not actually complete yet, and it's not implemented in .NET anyway. Therefore, if you want to protect your XML data from prying eyes, you'll have to implement something that is essentially non-standard to XML. Might I suggest using the Rijndael algorithm in .NET's System.Security.Cryptography? Use this along with something like SHA1 to hash your password, and you are pretty safe. However, if the password is built into your code or stored on the system, there is always a way to hack in, although it might not be obvious. There are also facilities in .NET for securely exchanging keys over an insecure network, if you need that kind of thing.

Again, a lot of this has nothing to do with XML. You can use this to protect any data you want to. It's also going to require a fair amount of coding to do all of this. Good luck.
http://forums.devx.com/showthread.php?4299-XML-and-Internationalization&goto=nextnewest
Opened 9 years ago
Closed 9 years ago

#7003 closed (duplicate)

Permalink documentation doesn't mention you can use named urls

Description

I think it would be useful to mention in the permalink documentation in model-api that you can use a named url in place of a view name. So, for example:

def get_absolute_url(self):
    return ('people_detail', [str(self.id)])
get_absolute_url = permalink(get_absolute_url)

Would find a url like this:

url(r'^people/(\d+)/$', 'people.views.details', name='people_detail'),

Attachments (1): Documentation patch

Change History (4)

Changed 9 years ago by: "Added a quick stab at this myself."
https://code.djangoproject.com/ticket/7003
Ok, I've spent more time than you would believe on this one. I built the program, started from scratch. Over the course of the last week I've spent no shit about 30 hours on this and still can't get it to work right. What is supposed to happen is the user enters a dollar amount, and the program then gives them a readout of the fewest number of coins required for that amount (i.e. $1.42 converts to 1 dollar, 1 quarter, 1 dime, 1 nickel, and 2 pennies). My current code is as follows...

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace MoneyCalculator
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        public enum moneyvalues
        {
            dollar = 100,
            quarter = 25,
            dime = 10,
            nickel = 5,
            penny = 1,
        }

        private void button1_click(object sender, EventArgs e)
        {
            string ida = textBox1.Text;
            decimal nda = (decimal.Parse(ida) * 100);
            if (moneyvalues.dollar >= nda)
            {
                (nda = nda - moneyvalues.dollar);
                dolcnt++;
            }
            else if (moneyvalues.quarter >= nda)
            {
                (nda = nda - moneyvalues.quarter);
                qrtcnt++;
            }
            else if (moneyvalues.dime >= nda)
            {
                (nda = nda - moneyvalues.dime);
                dmecnt++;
            }
            else if (moneyvalues.nickel >= nda)
            {
                (nda = nda - moneyvalues.nickel);
                nklcnt++;
            }
            else if (moneyvalues.penny >= nda)
            {
                (nda = nda - moneyvalues.penny);
                pnycnt++;
            }
            else
                text = ("You have " + (string.dolcnt) + "dollar bill(s), "
                    + (string.qrtcnt) + "quarter(s), " + (string.dmecnt) + "dime(s), "
                    + (string.nklcnt) + "nickel(s), " + (string.pnycnt) + "pennies!");
            textBox2 = text;
        }

        private void Form1_Load(object sender, EventArgs e)
        {
        }
    }
}
}

As of right now I am getting a "Type or namespace definition, or end-of-file expected" build error. I've been through this a hundred times, and the brackets are all there, so I've got no clue as to what could be wrong. I can't even check to see if the rest of the program is working right because of this.
Any help you can give would be greatly appreciated. Edited by __avd: Added [code] tags. For easy readability, always wrap programming code within posts in [code] (code blocks).
https://www.daniweb.com/programming/software-development/threads/277851/c-program
do anything on the page that a legitimate user can do.

Why is XSS a problem?

An important concept in web security is the Same Origin Policy (SOP). According to this policy, scripts on a web page can only access data in another page if it is of the same origin. Two pages have the same origin if the protocol, host and port are identical.

What would happen without the Same Origin Policy? Let's say you are logged into your online banking account. Suddenly, you receive an instant message from a friend: "Hey, you must watch this funny dog video:". Normally, you don't click any unsolicited links, but because it comes from your friend and you are a dog person, you make an exception. But it wasn't really your friend who sent you this link. His IM account has been hacked and the website actually belongs to a hacker. So while you watch a cute wiener dog doing somersaults, the malicious web page opens your bank's website in a hidden iframe. The malicious site runs a script that controls the iframe and executes a wire transfer, emptying your bank account.

If your palms got all sweaty, you can relax, because the Same Origin Policy prevents exactly this kind of behavior. However, when your bank's website has a Cross-Site Scripting vulnerability, this attack can happen. Because the malicious code is injected into your bank's website, the browser treats it as having the same origin and gives it full access to all data on the page.

The "Hello World" of XSS is to inject this code snippet:

<script>alert(1)</script>

When successfully injected into a page, the browser opens a dialogue displaying "1". This looks deceptively harmless and might be dismissed as an annoying prank, but make no mistake: once an attacker manages to exploit an XSS vulnerability, it's Game Over. Almost anything you can do on the vulnerable site, an attacker can do.

Types of XSS

So how does an actual Cross-Site Scripting attack work? Cross-site scripting can be differentiated into several categories.
The difference between these categories is how the malicious code, called the payload, finds its way into the vulnerable website.

Reflected XSS

In a reflected XSS attack, the payload is sent by the victim's browser and then returned as part of the response by the server. A common example is a search function, where the search term you entered is sent as a URL parameter and returned as part of the results page. To execute the attack, the attacker tricks you into accessing a specially crafted URL.

Persistent XSS

In a persistent XSS attack, the payload is stored as part of some user input in the server's database. As soon as the victim accesses a URL that uses this user input to render the page, the payload is executed. A persistent XSS vulnerability can be especially harmful because it doesn't require any action from a user besides visiting the vulnerable page. An example could be a comment that an attacker posted on your photo on a photo-sharing page. As soon as you open the page containing the comment, the code planted by the attacker gets executed. Part of the attack might be to post the same malicious comment on all of your friends' profiles. When they check out the comment that seemingly came from you, it gets posted to their friends' profiles and thus spreads like wildfire. An example of this attack in the wild was the Samy worm, which made headlines in 2005 by infecting over 1 million MySpace profiles in just 20 hours.

Client-side/DOM-based

In a client-side or DOM-based attack, the attacker exploits a vulnerability in the code that runs on the client side, most commonly JavaScript. Again, one attack vector could be an attacker tricking you into accessing a specially crafted URL where the URL parameter contains the payload.

Self XSS

Last but not least, there is self-XSS, where the attacker actually tricks you into hacking yourself, e.g. by entering the payload into the JavaScript debug console of your web browser. You wonder why anyone would do this?
How about someone tells you about a secret method that unlocks hidden functionality on Facebook? All you have to do is visit facebook.com, open the developer console and paste a blob of gibberish. You say that sounds ridiculous and nobody would fall for this scam? Well, apparently enough people fall for it that Facebook feels the need to display a warning if you open the developer console on facebook.com.

XSS Examples

Let's look at some really simple examples of what an XSS vulnerability looks like. Let's assume for a second you just started learning Django, read about Models, URLs and Views, and got really excited to build the next Facebook. You haven't fully grasped the concept of templates yet, but you know how to concatenate strings, so you start coding.

# NOTE: THIS IS BAD CODE, FOR DEMONSTRATION ONLY!
def show_user(request):
    username = request.GET.get('username')
    try:
        user = User.objects.get(username=username)
    except User.DoesNotExist:
        return HttpResponseNotFound(
            '<html><body>User with username "%s" does not exist</body></html>' % username)
    ...

def add_comment(request):
    if request.method == 'POST' and 'comment' in request.POST:
        Comment.objects.create(text=request.POST['comment'])
    return HttpResponse('ok')

def show_comments(request):
    body = '<html><head></head><body>'
    for comment in Comment.objects.all():
        body += '<p>' + comment.text + '</p>'
    body += '</body></html>'
    return HttpResponse(body)

Can you spot the XSS vulnerabilities and identify their type? The view show_user reads the query parameter username and, if no user with that username can be found, the username is used verbatim to build an error message. What happens if show_user gets called with the query string username=<script>alert(1)</script>? There is probably no such user, so this error message would be displayed:

<html><body>User with username "<script>alert(1)</script>" does not exist</body></html>

This is a textbook example of a reflected XSS vulnerability.
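The reflected case can be demonstrated without running Django at all: build the error page the same way the vulnerable view does and the payload survives into the markup, while escaping it first (roughly what Django's template autoescaping does) renders it inert. The helper names below are illustrative, not from the article:

```python
import html

PAGE = '<html><body>User with username "%s" does not exist</body></html>'

def build_error_page(username):
    # Mirrors the vulnerable view: untrusted input pasted verbatim into HTML.
    return PAGE % username

def build_error_page_escaped(username):
    # Escaping the untrusted value before interpolation neutralizes the payload.
    return PAGE % html.escape(username)

payload = '<script>alert(1)</script>'
assert '<script>' in build_error_page(payload)            # would execute in a browser
assert '<script>' not in build_error_page_escaped(payload)
assert '&lt;script&gt;' in build_error_page_escaped(payload)
```

The escaped version displays the attacker's text literally instead of handing it to the HTML parser.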
add_comment and show_comments together form a persistent XSS vulnerability. add_comment takes data from a POST request and stores it in the database without any processing. This itself is not an issue, but show_comments uses this data to build a (really simple) HTML site.

XSS protection

So how do you prevent an XSS attack? The underlying principle of Cross-Site Scripting, like any injection attack, is that some kind of user input is unintentionally interpreted as code and executed. There are two ways to prevent that, escaping and sanitizing. When you sanitize, you try to filter out the potentially troublesome data, either by forbidding certain things (blacklisting) or by only allowing certain things (whitelisting). Whitelisting is usually the more secure approach.

Blacklisting

Let's say you are building a Twitter clone and want to allow people to post status updates containing some formatting using HTML. Obviously, you don't want people to post JavaScript, so you filter out script tags with a regular expression.

safe_comment = re.sub(r'</?script.*?>', '', comment)  # DON'T DO THIS!

Pretty simple, right? Here are two ways how this simple filter can be circumvented:

<img src="" onerror="javascript:...">
<scri<script>pt>...</scrip</script>t> - when the filter is only run once, it removes the one pair of <script> tags and makes the scrambled tags valid.

Of course, these could be filtered out as well, but the point is that blacklisting is a losing battle and should be avoided.

Whitelisting

The alternative approach is to whitelist only valid data. One example can be found in Django's URL patterns. When you use regular expressions to capture view parameters, make the regular expression as strict as possible. Let's say you have a view profile_view(request, user_id) that takes a user ID as URL parameter. Here are two ways how you could define the corresponding URL pattern.
url(r'/profile/(?P<user_id>.+)/$', profile_view),  # Bad, accepts any character
url(r'/profile/(?P<user_id>\d+)/$', profile_view),  # Good, only accepts digits

The second approach is better because it is the most strict definition and ensures that only digits ever get passed to the view. The URL /profile/<script>alert(1)<%2Fscript>/ wouldn't do any harm and would just result in a 404 Not Found.

When dealing with HTML, you could allow <i></i>, <b></b> and <u></u>, but reject any input that contains any other tags. Whitelisting is easier to get right than blacklisting, but you have to be really careful not to allow too much.

Escaping

The better approach is to escape any user input. Escaping means expressing a symbol in an alternative way that results in a different interpretation. Let's look at a practical example: the characters < and > have a special meaning in HTML, they mark the beginning and end of an HTML tag. So if we want to actually display one of these characters on a web page, we have to write them as &lt; and &gt;. The ampersand & has a special meaning too, so we have to escape it as &amp; if we want to use it. Last but not least, you also have to escape the " mark as &quot; because it is used around attributes.

There are essentially two places where you should think about escaping and/or sanitizing data: when the data enters the server and when it leaves the server.

- Sanitize incoming data
- Escape outgoing data

In most cases, it doesn't make any sense to store escaped data, because the correct escaping depends on the context. When dealing with HTML you have to escape different characters than when constructing a URL.

XSS protection in Django

Proper escaping can be cumbersome. Luckily, Django has you covered: ever since version 1.0, Django automatically escapes all template variables.
This can be easily confirmed in the console:

>>> from django.template import engines
>>> django_engine = engines['django']
>>> template = django_engine.from_string("Hello {{ name }}!")
>>> template.render({'name': 'Daniel'})
'Hello Daniel!'
>>> template.render({'name': '<script>alert(1)</script>'})
'Hello &lt;script&gt;alert(1)&lt;/script&gt;!'

So all we have to do to fix those broken views from the example above is to rewrite them with proper templates, as you would probably have done in the first place.

def show_user(request):
    username = request.GET.get('username')
    try:
        user = User.objects.get(username=username)
    except User.DoesNotExist:
        return render(request, 'user_not_found.html', {'username': username}, status=404)
    ...

def add_comment(request):
    if request.method == 'POST' and 'comment' in request.POST:
        Comment.objects.create(text=request.POST['comment'])
    return HttpResponse('ok')

def show_comments(request):
    return render(request, 'comment_list.html', {'comments': Comment.objects.all()})

# user_not_found.html
<html>
<body>
User with username "{{ username }}" does not exist
</body>
</html>

# comment_list.html
<html>
<head></head>
<body>
{% for comment in comments %}
<p>{{ comment.text }}</p>
{% endfor %}
</body>
</html>

Potential XSS vulnerabilities in Django

Just because Django automatically escapes template variables doesn't mean that Django applications can't have XSS vulnerabilities. Here are a couple of ways autoescaping can be circumvented:

- With the template tag {% autoescape off %}
- With the template filters safe and safeseq
- Using SafeBytes, SafeString, SafeText, SafeUnicode or mark_safe() from django.utils.safestring
- Using format_html(), format_html_join(), or html_safe from django.utils.html
- By creating a class with a __html__() method
- Using the wrong escaping, e.g. the escapejs filter to display HTML
- Not using templates to create a response

All of these have valid use cases. Autoescaping is convenient, but sometimes, it gets in the way.
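The __html__() protocol mentioned in that list can be modeled in a few lines of plain Python. This is a toy stand-in for Django's conditional_escape/SafeString machinery, not its actual implementation:

```python
import html

class SafeString(str):
    """Toy stand-in for Django's SafeString: marks a value as already safe."""
    def __html__(self):
        return str(self)

def conditional_escape(value):
    # Anything exposing __html__() is trusted as-is; everything else is escaped.
    if hasattr(value, '__html__'):
        return value.__html__()
    return html.escape(str(value))

assert conditional_escape('<b>hi</b>') == '&lt;b&gt;hi&lt;/b&gt;'
assert conditional_escape(SafeString('<b>hi</b>')) == '<b>hi</b>'
```

This is exactly why marking untrusted input as safe is dangerous: once a value carries __html__(), the escaping layer waves it straight through.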
Maybe you build a Content Management System to power your website and want to allow your staff to edit the HTML of individual pages. The HTML is stored in the database, and to properly render the page, you do not want it to be escaped. But don't turn off escaping without careful consideration. If you find yourself using one of these functions or classes, or encounter any of them in a code review, step back and have a look at the data they are using. Follow the data back to its source and make sure untrusted user input never gets marked as safe.

What exactly is user input

Let's talk about user input for a second. The term user input is much broader than it seems at first glance. "User input" implies that it was typed in by a user on a keyboard, but that is misleading. Everything that comes across the network must be considered user input: the URL, the request body and also headers like user-agent or referer.

Imagine an administrative page that creates statistics about which browsers are used to access your site by reading the user-agent header. If this page doesn't implement proper escaping, an attacker can send an HTTP request with a manipulated user-agent header to hijack this page, steal your session cookie and thus gain administrative access to your website.

Takeaways

This was a long read, but I want you to take away two things:

- User input should never be trusted blindly
- User input does not only come from input fields

XSS is far from a new thing. It's been around ever since websites were created dynamically. XSS is so old that even MySpace was hacked through it! But despite its age, XSS is still one of the most prevalent security vulnerabilities and creeps into applications written both by newbies and veterans. Hopefully, this article gave you a solid understanding and helps to make your applications a tiny bit more secure. Drop any questions in the comment box or reach out via Twitter or Email.
The next article in this series on security acronyms every Django developer should know will discuss how to make your site less secure by using CORS to circumvent the SOP and why that might be a good idea.
https://consideratecode.com/2017/09/07/xss/
In the previous tutorial, we discussed the technical implications of implementing logging in a framework. We discussed the log4j utility at length, and the basic components that constitute log4j from a usability perspective. With appenders and layouts, the user can choose the desired logging format/pattern and the data source/location.

In the current 27th tutorial in this comprehensive free Selenium online training series, we shift our focus towards a few trivial yet important topics that will help us troubleshoot some recurrent problems. We may or may not use them in daily scripting, but they will be helpful in the long run. We will discuss some advanced concepts in which we deal with mouse and keyboard events and access multiple links by implementing lists. So let's start and briefly discuss these topics with the help of appropriate scenarios and code snippets.

What You Will Learn: JavaScript Executors

While automating a test scenario, certain actions become an inherent part of the test scripts. These actions may be:

- Clicking a button, hyperlink etc.
- Typing in a text box
- Scrolling vertically or horizontally until the desired object is brought into view
- And many more

Now, it is evident from the earlier tutorials that the best way to automate such actions is by using Selenium commands. But what if the Selenium commands don't work? Yes, it is absolutely possible that the very basic and elementary Selenium commands don't work in certain situations. That said, to be able to troubleshoot such situations, we bring JavaScript executors into the picture.

What are JavaScript Executors?

The JavascriptExecutor interface is a part of org.openqa.selenium and provides the capability to execute JavaScript directly within the web browser.
To be able to execute the JavaScript, certain mechanisms in the form of methods, along with a specific set of parameters, are provided in its implementation.

Methods

executeScript(String script, args)
As the method name suggests, it executes the JavaScript within the current window, alert, frame etc. (the window that the WebDriver instance is currently focusing on).

executeAsyncScript(String script, args)
As the method name suggests, it executes the JavaScript within the current window, alert, frame etc. (the window that the WebDriver instance is currently focusing on).

The parameters and import statement are common to both executor methods.

Parameters
Script - the script to be executed
Argument - the parameters that the script requires for its execution (if any)

Import statement
To be able to use JavascriptExecutors in our test scripts, we need to import the package using the following syntax:

import org.openqa.selenium.JavascriptExecutor;

Sample Code

#1) Clicking a web element

// Locating the web element using id
WebElement element = driver.findElement(By.id("id of the webelement"));
// Instantiating JavascriptExecutor
JavascriptExecutor js = (JavascriptExecutor)driver;
// Clicking the web element
js.executeScript("arguments[0].click();", element);

#2) Typing in a Text Box

// Instantiating JavascriptExecutor
JavascriptExecutor js = (JavascriptExecutor)driver;
// Typing the test data into the Textbox
js.executeScript("document.getElementById('id of the element').value='test data';");

#3) You may find various other ways of writing down the code for accessing JavascriptExecutors.

Accessing multiple elements in a list

Refer to the screenshot below to understand the elements I am talking about. In the above image, we see that the various service providers belong to an unordered list. Thus, verification of clickability and visibility of these elements can be done by a single piece of code using a list of elements.
Import statement
To be able to use a WebElement list in our test scripts, we need to import the package using the following syntax:
import java.util.List;

There are various requirements under which lists can be used to verify elements, with suitable implementation changes.

Handling keyboard and mouse events

Handling Keyboard Events

As said earlier, there are n number of ways to deal with the same problem statement in different contexts. Thus, at times a necessity arises to deal with a problem by replacing the conventional strategy with a more advanced one. I have witnessed cases where I could not deal with alerts, pop-ups, etc. using Selenium commands, so I had to opt for different Java utilities to handle them using keyboard strokes and mouse events. The Robot class is one such option to perform keyboard and mouse events. Let us understand the concept with the help of a scenario and its implementation.

Scenario: Consider a situation where an unnecessary pop-up appears on the screen which cannot be accepted or dismissed using the Alert interface; the only wise option we are left with is to close the window using the shortcut keys "Alt + space bar + C". Let us see how we close the pop-up using the Robot class. Before initiating the implementation, we should import the necessary package to be able to use the Robot class within our test script.

Import Statement
import java.awt.Robot;

The Robot class can also be used to handle mouse events, but let us look here at Selenium's own capabilities for handling mouse events.

Handling Mouse Events

WebDriver offers a wide range of interaction utilities that the user can exploit to automate mouse and keyboard events. The Actions interface is one such utility, which simulates single-user interactions. In the next scenario, we will use the Actions interface to mouse-hover on a drop-down, which then opens a list of options.
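The list-based verification described above can be sketched without a browser. In the sketch below, plain strings stand in for the WebElements that driver.findElements(By...) would return; the element states and the helper name are hypothetical stand-ins, but the iteration pattern is the same one you would apply to a List<WebElement> with isDisplayed() checks.

```java
import java.util.Arrays;
import java.util.List;

public class ListCheckSketch {

    // Stand-in for looping over a List<WebElement> and calling isDisplayed()
    // on each element returned by driver.findElements(...)
    static int countVisible(List<String> elementStates) {
        int visible = 0;
        for (String state : elementStates) {
            if (state.equals("displayed")) {
                visible++;
            }
        }
        return visible;
    }

    public static void main(String[] args) {
        // In a real script this list would come from
        // driver.findElements(By.xpath("//ul/li"))
        List<String> serviceProviders =
                Arrays.asList("displayed", "displayed", "hidden", "displayed");
        System.out.println(countVisible(serviceProviders));
    }
}
```

With real WebElements, the same loop would collect or assert on element.isDisplayed() and element.isEnabled() instead of comparing strings.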
Scenario:
- Mouse hover on the dropdown
- Click on one of the items in the list of options

Import Statement
import org.openqa.selenium.interactions.Actions;

Conclusion

In this tutorial, we discussed some advanced topics related to efficient scripting and troubleshooting scenarios where the user is required to handle mouse and keyboard events. We also discussed how to store more than one web element in a list. I hope you will be able to troubleshoot these impediments if you encounter them.

Next Tutorial #28: In the upcoming tutorial in the series, we will discuss the concept of database testing using Selenium WebDriver. We will look at the mechanism of database connection, executing queries and fetching the results through Selenium WebDriver code.
I have a prefab called Shape that I load from Resources. I want to add it directly as a child of Shapes, just like when I drag it from Assets and drop it onto Shapes, and not like the Instantiate method, which first adds the GameObject to the Hierarchy so that I then have to parent it to Shapes. With the second method the transform (position and scale) of the prefab is changed, and I do not want that to happen.

Answer by cmpgk1024 · Nov 09, 2013 at 08:15 PM

GameObject childObject = Instantiate(YourObject) as GameObject;
childObject.transform.parent = parentObject.transform;

Edit: remember to add using System.Collections.Generic

using System.Collections.Generic? Why? For as Transform?

using System.Collections.Generic for C# or import System.Collections.Generic for JS

And what does 'as Transform' have to do with System.Collections.Generic? Additionally, OP stated he wants to instantiate a GameObject, not a Transform, so your first line should probably be changed. And finally, if childObject is a Transform, then why use childObject.transform? I wonder who upvoted your answer...

@ArkaneX Setting the parent requires a Transform, not a GameObject. But yeah, using as Transform doesn't work to get the transform. This needs to be...

GameObject childObject = Instantiate(YourObject) as GameObject;
childObject.transform.parent = gameObject.transform;

So yeah, my bad for not noticing the as Transform. Upvote rescinded unless the code is changed.

Answer by Artgig · Jan 07, 2015 at 08:10 PM

Try using the Transform.SetParent method:

childGameObject.transform.SetParent(parentGameObject.transform, false);

The key is passing false for the 2nd parameter. This keeps your prefab's default position once it is moved inside a parent.

Exactly what I was looking for!

Answer by ArkaneX · Nov 09, 2013 at 08:14 PM

If by directly you mean to do this using the Instantiate method only, then you can't.
You have to assign a parent using the Transform.parent property. Setting transform.parent modifies the transform by giving the object a parent; parenting is part of Transform because a child's position changes when the parent's does. The only way to make an object a child from a script is using transform.parent.

I already used transform.parent, but the transform of the prefab is changed. I mean the scale and position: when I drag the prefab and drop it directly onto Shapes, its transform does not change. Any idea? Thanks for your support.

You can try temporarily setting the parent's scale to (1,1,1) and position to (0,0,0) before parenting, and restore the original values after. I did a quick test using the code below and it looks like it is working. But please test it before using it in a more complicated scenario.

var originalScale = parentTransform.lossyScale;
parentTransform.localScale = Vector3.one;
var originalPosition = parentTransform.position;
parentTransform.position = Vector3.zero;
childTransform.parent = parentTransform;
parentTransform.localScale = originalScale;
parentTransform.position = originalPosition;

In case of rotation problems, I guess you have to solve them in a similar way.

Answer by Triqy · Nov 09, 2013 at 08:18 PM

You can Instantiate the object, find it, and then make it a child of another using a script. Like @ArkaneX said, look into Transform.parent and GameObject.Find

Instantiate returns an instance of the object so you can set the parent on that. No need to use Find.

Oh yeah lol that makes more sense.

Answer by WTPS · Nov 10, 2013 at 07:09 AM

@ArkaneX this way worked for the scale of all elements in Shape, but not for the position of the GameObjects. Thank you :)

What is the problem with position? Could you give an example? And if I helped, please consider upvoting.

If I have two or more blocks (GameObjects) beside each other, the x position increases or changes, and the y position too, but this happens for some of the GameObjects, not all.
(I need 15 reputation for voting.)

Hmm. I'm afraid I won't be able to help further; in my simple scene it works, but I guess you might have encountered an issue related to your setup. Maybe rotation is the cause.
/*
 * ChartCandlestickSerie.h
 */

#include "ChartSerieBase.h"

//! Point structure used as template parameter for candlestick series
struct SChartCandlestickPoint
{
	SChartCandlestickPoint() { }
	SChartCandlestickPoint(double XValue, double LowVal, double HighVal,
						   double OpenVal, double CloseVal):
		XVal(XValue), Low(LowVal), High(HighVal), Open(OpenVal), Close(CloseVal) { }

	//! The X value of the point (usually, a time)
	double XVal;
	//! The low market price
	double Low;
	//! The high market price
	double High;
	//! The open market price
	double Open;
	//! The close market price
	double Close;

	//! Returns the X value of the point
	double GetX() const { return XVal; }
	//! Returns the Y value of the point, which is the average between low and high
	double GetY() const { return (Low+High)/2; }
	//! Returns the minimum X value of the point
	double GetXMin() const { return XVal; }
	//! Returns the maximum X value of the point
	double GetXMax() const { return XVal; }
	//! Returns the minimum Y value of the point (the low value)
	double GetYMin() const { return Low; }
	//! Returns the maximum Y value of the point (the high value)
	double GetYMax() const { return High; }
};

//! Specialization of a CChartSerieBase to display a candlestick series.
/** Each point in the series has an X value (the time), a high value (the
	highest market price), a low value (the lowest market price), an open
	value (the market price at the opening) and a close value (the market
	price at the closing).
**/
class CChartCandlestickSerie : public CChartSerieBase<SChartCandlestickPoint>
{
public:
	//! Constructor
	CChartCandlestickSerie(CChartCtrl* pParent);
	//! Destructor
	~CChartCandlestickSerie();

	//! Tests if a certain screen point is on the series.
	//! Adds a new point in the series
	/** @param XVal The X value of the point (the time)
		@param Low The lowest market price
		@param High The highest market price
		@param Open The market price at the opening
		@param Close The market price at the closing
	**/
	void AddPoint(double XVal, double Low, double High, double Open, double Close);
	//! Sets the width (in pixels) of all candlestick points in the series
	void SetWidth(int Width);
	//! Returns the width (in pixels) of a point in the series
	int GetWidth() { return m_iCandlestickWidth; }

protected:
	//! Draws the legend icon for the series.
	/** @param pDC The device context used to draw
		@param rectBitmap The rectangle in which to draw the legend icon
	**/
	void DrawLegend(CDC* pDC, const CRect& rectBitmap) const;
	//! Draws the most recent points of the series.
	/** This function should only draw the points that were not previously drawn.
		@param pDC The device context used to draw
	**/
	void Draw(CDC* pDC);
	//! Redraws the full series.
	/** @param pDC The device context used to draw
	**/
	void DrawAll(CDC *pDC);

private:
	//! Draws a candle stick point
	void DrawCandleStick(CDC *pDC, SChartCandlestickPoint Point);

	//! The candlestick width
	int m_iCandlestickWidth;

	// Caches the pen and brushes to avoid creating them for each point
	mutable CBrush ShadowBrush;
	mutable CPen NewPen;
	mutable CPen ShadowPen;
	mutable CBrush BrushFill;
	mutable CBrush BrushEm
Possible Rendering Issue in FF3?

Hello GXT Experts!

I am having a strange issue here, very strange indeed. I designed a simple HTML template with some static content. The main area features a simply styled HTML table where the main cell is dedicated to receiving a component created by GXT. The cell, a <td> element, has a unique ID, and in the onModuleLoad() method of the GWT component I just create a new widget containing two simple panels with a RowLayout and add it to the page with RootPanel.get("tdId").add( homePage.buildPage() );

So, when I compile and run my project in GWT hosted mode, everything runs just fine. Also, if I render the same page using IE7, I see the expected result. However, when I try to render the same page in FF3 or Google Chrome, my HTML template shows up, but the two components in my "main area" just won't render. If I examine the page DOM in Firebug, however, I can see that all elements are there, but FF3 (and Chrome) just won't render them.

Any idea what would be wrong with my code below? Thanks for any help.

-Uli

Code of the onModuleLoad method in the main component class:

/**
 * This is the entry point method.
 */
public void onModuleLoad() {
    final HomePage homePage = new HomePage();
    RootPanel.get("dg3_area").add( homePage.buildPage() );
    homePage.layout();
}

And the HomePage class:

public class HomePage extends LayoutContainer {

    private ContentPanel leftPanel;
    private LayoutContainer mainPanel;

    public HomePage() {
        // set a layout for our home page
        final RowLayout layout = new RowLayout(Style.Orientation.HORIZONTAL);
        this.setLayout(layout);

        // create our two content panels
        leftPanel = new ContentPanel();
        leftPanel.setFooter(false);
        leftPanel.setCollapsible(false);
        leftPanel.setScrollMode(Style.Scroll.NONE);
        leftPanel.setHeaderVisible(false);
        leftPanel.setFrame(true);
        leftPanel.setSize(-1, 200);

        mainPanel = new LayoutContainer();
        mainPanel.setBorders(false);
        mainPanel.setSize(-1, 200);
        mainPanel.setScrollMode(Style.Scroll.AUTO);
    }

    public HomePage buildPage() {
        // first add some test text here
        leftPanel.addText("Left Panel here");
        mainPanel.addText("Main Panel here");

        this.add(leftPanel, new RowData(0.2, 200.0));
        this.add(mainPanel, new RowData(0.8, 200.0));

        return this;
    }
}
Creating the hackaday logo with a MSP430 and a laser. Project files can be found here:

Draw the hackaday logo with a laser with less than 1K of data.

At this point, we decided to abandon the concept of mirroring. If we had had a much more complex design with many more points and more than 1kb, mirroring would have saved us space. With so little space, however, the algorithm's power wasn't able to give us the results we were looking for. We had by this point made so many optimizations to our code that when we reverted to a single array, the total size was 902 bytes!

The final program:

#include <msp430.h>

#define uint8_t unsigned char
#define uint16_t unsigned int
#define LASER BIT0
#define SSOUT P1OUT
#define SSX BIT6
#define SSY BIT7
#define length 234

void writeMCP492x(uint16_t data, uint8_t ss);
void drawLine(uint16_t, uint16_t, uint16_t, uint16_t);

int main(void) {
    WDTCTL = WDTPW | WDTHOLD;   // Stop watchdog timer
    PM5CTL0 &= ~LOCKLPM5;       // Unlock the I/O pins (clear the LPM5 lock)
    CSCTL1 |= DCORSEL_6;        // Sets clock speed (16 MHz)

    // The coordinates for the laser to traverse. Even indices are X values, odd indices are Y values.
    uint8_t logo[] = {
        66, 158, 47, 177, 37, 176, 27, 177, 16, 186, 8, 195, 8, 212, 27, 195, 47, 220, 27, 236, 40, 238, 56, 233, 66, 220, 68, 212, 68, 203, 86, 186, 75, 175, 66, 158, //lower left wrench:36
        66, 98, 47, 79, 37, 80, 27, 79, 16, 70, 8, 61, 8, 44, 27, 61, 47, 36, 27, 20, 40, 18, 56, 23, 66, 36, 68, 44, 68, 53, 86, 70, 75, 81, 66, 98, //upper left wrench:36
        118, 125, 110, 136, 102, 138, 86, 120, 95, 105, 97, 108, 102, 114, 110, 118, 118, 125, //left eye:18
        190, 158, 209, 177, 219, 176, 229, 177, 240, 186, 248, 195, 248, 212, 229, 195, 209, 220, 229, 236, 216, 238, 200, 233, 190, 220, 188, 212, 188, 203, 170, 186, 181, 175, 190, 158, //lower right wrench:36
        190, 98, 209, 79, 219, 80, 229, 79, 240, 70, 248, 61, 248, 44, 229, 61, 209, 36, 229, 20, 216, 18, 200, 23, 190, 36, 188, 44, 188, 53, 170, 70, 181, 81, 190, 98, //upper right wrench:36
        138, 125, 146, 136, 154, 138, 170, 120, 161, 105, 159, 108, 154, 114, 146, 118, 138, 125, //right eye:18
        80, 159, 127, 180, 175, 159, 187, 120, 175, 84, 162, 70, 155, 55, 146, 55, 140, 70, 137, 70, 131, 55, 121, 55, 117, 70, 115, 70, 108, 55, 98, 55, 93, 74, 80, 84, 67, 120, 80, 159, //face:40
        127, 103, 130, 91, 133, 83, 127, 91, 121, 83, 125, 91, 127, 103 //nose:14
    };

    //keeps track of where in the array we are.
    uint8_t myIndex = 0;
    //used for creating delays due to the difference in speed between processing and moving a mirror.
    uint16_t counter;

    /*
     * main program loop. Iterates the logo array two entries at a time and calls the drawLine
     * function with either the next four array points or the next two from the index and the
     * beginning two entries, creating a loop.
     * Also checks if the laser should be turned off while traversing between two points.
     */
    while(1){
        if(myIndex < length-3){
            //Check if laser should be turned off.
            if(myIndex == 34 || myIndex == 70 || myIndex == 88 || myIndex == 124 || myIndex == 160 || myIndex == 178 || myIndex == 218 || myIndex == 232){
                for(counter = 1200; counter > 0; counter--){
                    counter = counter - 1;
                }
                P1OUT &= ~LASER;
            }
            //Draw a line from X,Y to X', Y'
            drawLine(logo[myIndex], logo[myIndex+1], logo[myIndex+2], logo[myIndex+3]);
            //Check if laser should be turned back on.
            if(myIndex == 34 || myIndex == 70 || myIndex == 88 || myIndex == 124 || myIndex == 160 || myIndex == 178 || myIndex == 218 || myIndex == 232){
                for(counter = 800; counter > 0; counter--){
                    counter = counter - 1;
                }
                P1OUT |= LASER;
            }
            myIndex = myIndex + 2;
        //Loop back to beginning of array.
        } else {
            for(counter = 1200; counter > 0; counter--){
                counter

We were close, about 200 bytes away; it was time to get creative. The first thing we did was to flip our coordinates so that we were right side up. Second, we added the coordinates for where the laser should turn off (which actually brought our size count up to about 1500 bytes). Then we started experimenting a little bit. We broke up the logo into different sections: a wrench, the skull, one eye and the nose. The idea was that we could mirror the different pieces to save space.
Our first take on the mirroring algorithm:

//lower left wrench
uint16_t wrench[] = {265, 635, 190, 710, 150, 705, 110, 710, 65, 745, 35, 780, 35, 850, 110, 780, 190, 880, 110, 945, 160, 955, 225, 935, 265, 880, 275, 850, 275, 815, 345, 745, 300, 700, 265, 635};
uint16_t wrenchLength = 36;
//Skull outer
uint16_t face[] = {320, 390, 510, 305, 700, 390, 750, 545, 700, 690, 650, 745, 620, 805, 585, 805, 560, 745, 550, 745, 525, 805, 485, 805, 470, 745, 460, 745, 435, 805, 395, 805, 375, 730, 320, 690, 270, 545, 320, 390};
uint16_t faceLength = 40;
//Left eye
uint16_t leftEye[] = {475, 525, 440, 480, 410, 475, 345, 545, 380, 605, 390, 595, 410, 570, 440, 555, 475, 525};
uint16_t eyeLength = 18;
//Nose
uint16_t nose[] = {510, 615, 520, 660, 535, 695, 510, 660, 485, 695, 500, 660, 510, 615};
uint16_t noseLength = 14;

uint16_t myIndex = 0;
uint16_t logoPart = 0;
uint16_t mirrorCount = 4;
uint16_t laserDelay;
uint16_t direction = 0;

while(1){
    switch(logoPart){
    //Draws the wrenches
    case 0:
        if(mirrorCount > 0){
            if(myIndex < wrenchLength-3){
                drawLine(wrench[myIndex], wrench[myIndex+1], wrench[myIndex+2], wrench[myIndex+3]);
                myIndex = myIndex + 2;
            } else {
                for(laserDelay = 0; laserDelay < 400; laserDelay++) continue;
                P1OUT &= ~LASER;
                if(mirrorCount > 1){
                    uint16_t oldX = wrench[myIndex];
                    uint16_t oldY = wrench[myIndex+1];
                    if(direction == 0){
                        mirrorX(wrench, wrenchLength, 256);
                        direction = 1;
                    } else {
                        mirrorY(wrench, wrenchLength, 256);
                        direction = 0;
                    }
                    drawLine(oldX, oldY, wrench[0], wrench[1]);
                }
                for(laserDelay = 0; laserDelay < 250; laserDelay++) continue;
                P1OUT |= LASER;
                myIndex = 0;
                mirrorCount = mirrorCount - 1;
            }
        } else {
            for(laserDelay = 0; laserDelay < 400; laserDelay++) continue;
            P1OUT &= ~LASER;
            mirrorCount = 2;
            drawLine(wrench[myIndex], wrench[myIndex+1], leftEye[0], leftEye[1]);
            logoPart = 1;
            for(laserDelay = 0; laserDelay < 250; laserDelay++) continue;
            P1OUT |= LASER;
        }
        break;
    //draws the eyes
    case 1:
        if(myIndex < eyeLength-3){
            drawLine(leftEye[myIndex], leftEye[myIndex+1], leftEye[myIndex+2], leftEye[myIndex+3]);
            myIndex = myIndex + 2;
        } else if(mirrorCount > 1) {
            for(laserDelay = 0; laserDelay < 400; laserDelay++) continue;
            P1OUT &= ~LASER;
            uint16_t oldX = leftEye[myIndex];
            uint16_t oldY = leftEye[myIndex+1];
            mirrorX(leftEye, eyeLength, 256);
            drawLine(oldX, oldY, leftEye[0], leftEye[1]);
            myIndex = 0;
            for(laserDelay = 0; laserDelay < 250; laserDelay++) continue;
            P1OUT |= LASER;
            mirrorCount = mirrorCount - 1;
        } else {
            for(laserDelay = 0; laserDelay < 400; laserDelay++) continue;
            P1OUT &= ~LASER;
            mirrorCount = 4;
            drawLine(leftEye[myIndex], leftEye[myIndex+1], nose[0], nose[1]);
            myIndex = 0;
            logoPart = 2;
            for(laserDelay = 0; laserDelay < 250; laserDelay++) continue;
            P1OUT |= LASER;
        }
        break;
    //draws the nose
    case 2:
        if(myIndex < noseLength-3){
            drawLine(nose[myIndex], nose[myIndex+1], nose[myIndex+2], nose[myIndex+3]);
            myIndex = myIndex + 2;
        } else {
            for(laserDelay = 0; laserDelay < 400; laserDelay++) continue;
            P1OUT &= ~LASER;
            drawLine(nose[myIndex], nose[myIndex+1], face[0], face[1]);
            logoPart = 3;
            myIndex = 0;
            for(laserDelay = 0; laserDelay < 250; laserDelay++) continue;
            P1OUT |= LASER;
        }
        break;
    //draws the face
    case 3:
        if(myIndex < faceLength-

Our goal for this project is to display the hackaday logo by shining a laser at some mirrors and moving the mirrors in such a way that they draw the logo. We are under the 1K limit, we have a function that will take an array of coordinates and display them with the laser, and we have ways of turning the laser on and off at certain times; now it's time to see if we can draw the whole logo. The first thing we did was plot out where we would want our coordinates to be. Using Photoshop and a sketch pad, we created our array.

*The 1's in circles dictated which point each shape would start at and in which direction the laser would go.

We plug in the numbers for our array and: Okay! Not bad, we've got a little work to do, but we're getting close.
Unfortunately though, we're back over our 1K limit at around 1150 bytes. So a little more optimization, add the 'turn off laser' array, and flip it right side up; it should be a breeze, right?

How did we go from around 300 bytes to 1500 bytes when all we did was try to modularize our code? This is where the disassembly window really started becoming the star of the show. Going through each assembly instruction, we quickly realized that instructions like multiply and divide and data structures like floats were the culprits. Looking at the memory map, we saw that all kinds of libraries were being brought in. We got rid of floats and started using unsigned chars and ints:

void drawLine(uint16_t x1, uint16_t y1, uint16_t x2, uint16_t y2){
    uint16_t dx1 = x1 > x2 ? x1 - x2 : x2 - x1;
    uint16_t dy1 = y1 > y2 ? y1 - y2 : y2 - y1;
    uint16_t steps1 = dx1 > dy1 ? dx1/4 : dy1/4;
    uint16_t Xincrement1 = (dx1*100) / steps1;
    uint16_t Yincrement1 = (dy1*100) / steps1;
    int x11 = x1*100;
    int y11 = y1*100;
    int i;
    for(i = 0; i < steps1; i++){
        x11 = x1 < x2 ? x11 + Xincrement1 : x11 - Xincrement1;
        y11 = y1 < y2 ? y11 + Yincrement1 : y11 - Yincrement1;
        writeMCP492x((int)((x11/100)*16), SSX);
        writeMCP492x((int)((y11/100)*16), SSY);
    }
}

With just a few little changes, this brought our size back down to 464 bytes. Another optimization we made: rather than calling this function with the coordinates directly, we modified the program a little bit to accept arrays of coordinates. The even indices in the array would be X values while the odd ones would be Y values. We also created an array of when the laser should be turned off. Turning off the laser would allow us to jump from one coordinate to another without being seen and without over-cranking the galvos.
uint16_t myPoly[] = {230, 220, 245, 185, 150, 5, 65, 155, 150, 155, 130, 110, 150, 75, 230, 220,
                     25, 220, 5, 185, 175, 185, 150, 155, 130, 110, 110, 155, 5, 185, 110, 5, 150, 5};
uint16_t polyLength = 34;
uint16_t offIndices[] = {22, 26, 32};
uint16_t offLength = 3;
uint16_t offIter = 0;
uint16_t myIndex = 0;

while(1){
    for(offIter = 0; offIter < offLength; offIter++){
        if(myIndex == offIndices[offIter]){
            P1OUT &= ~LASER;
        }
    }
    if(myIndex < polyLength-3){
        drawLine(myPoly[myIndex], myPoly[myIndex+1], myPoly[myIndex+2], myPoly[myIndex+3]);
        myIndex = myIndex + 2;
    } else {
        drawLine(myPoly[myIndex], myPoly[myIndex+1], myPoly[0], myPoly[1]);
        myIndex = 0;
    }
    P1OUT |= LASER;
}

This allowed for the creation of some more complex patterns while still saving as much space as possible.

Once we had the laser at the origin, we wanted to draw a square to find the borders of the area we could project in. Using for loops, we iterated through each side.

uint16_t i;
while(1){
    for(i = 0; i <= 4095; i++){
        writeMCP492x(i, SSX);
        writeMCP492x(0, SSY);
    }
    for(i = 0; i <= 4095; i++){
        writeMCP492x(4095, SSX);
        writeMCP492x(i, SSY);
    }
    for(i = 0; i <= 4095; i++){
        writeMCP492x(4095-i, SSX);
        writeMCP492x(4095, SSY);
    }
    for(i = 0; i <= 4095; i++){
        writeMCP492x(0, SSX);
        writeMCP492x(4095-i, SSY);
    }
}

This was the result. So far, we were only using about 310 bytes of our 1K, smooth sailing!

Now that we knew how to draw lines, we thought that the easiest way to implement the drawing of any shape would be to create a function that took in four numbers: an X and Y coordinate where a line would start and an X and Y coordinate where it would end. The function would then find every location between these points and move the mirrors to their corresponding locations. This is what we came up with.
void drawLine(uint8_t x1, uint8_t y1, uint8_t x2, uint8_t y2){
    int dx = (int)x2 - (int)x1;
    int dy = (int)y2 - (int)y1;
    int steps;
    if (abs(dx) > abs(dy)){
        steps = abs(dx/4);
    } else {
        steps = abs(dy/4);
    }
    float Xincrement = (float) dx / (float) steps;
    float Yincrement = (float) dy / (float) steps;
    float x = (float) x1;
    float y = (float) y1;
    int i;
    for(i = 0; i < steps; i++){
        x = x + Xincrement;
        y = y + Yincrement;
        writeMCP492x(((int)(x*16)), SSX);
        writeMCP492x(((int)(y*16)), SSY);
    }
}

int abs(int val){
    return (val < 0 ? (-val) : val);
}

With this, we could feed it the coordinates of any kind of polygon, for example, a hexagon.

while(1){
    drawLine(192, 128, 160, 183);
    drawLine(160, 183, 96, 183);
    drawLine(96, 183, 64, 128);
    drawLine(64, 128, 95, 72);
    drawLine(95, 72, 160, 72);
    drawLine(160, 72, 192, 128);
}

After compiling, our size was almost 1500 bytes! But it worked, so we just had to figure out what was taking up all that space.

In order to use the DAC, we needed to initialize the SPI on this device as well as create a method to write the DAC values. The module uses an MCP4921 DAC IC with 12-bit resolution. The voltage conditioners in the modules scale the range so that the minimum DAC code gives the minimum output voltage and the maximum code gives the maximum: in this case 0 is -12V and 4095 is 12V, making 2048 0V. Being on the SPI bus, each DAC module is activated using its slave select pin.
We defined these as SSX and SSY.

MSP430 SPI Initialization

WDTCTL = WDTPW | WDTHOLD;   // Stop watchdog timer
PM5CTL0 &= ~LOCKLPM5;       // Unlock the I/O pins (clear the LPM5 lock)
CSCTL1 |= DCORSEL_6;        // Sets clock speed

WriteMCP Routine

void writeMCP492x(uint16_t data, uint8_t ss)
{
    // Take the top 4 bits of config and the top 4 valid bits (data is actually a 12 bit number)
    // and OR them together
    uint8_t top_msg = (0x30 & 0xF0) | (0x0F & (data >> 8));
    // Take the bottom octet of data
    uint8_t lower_msg = (data & 0x00FF);
    // Select our DAC, Active LOW
    SSOUT &= ~ss;
    // Send first 8 bits
    UCB0TXBUF = top_msg;
    while (UCB0STAT & UCBUSY);
    // Send second 8 bits
    UCB0TXBUF = lower_msg;
    while (UCB0STAT & UCBUSY);
    // Deselect DAC
    SSOUT |= ss;
}

To run faster, we also set the internal clock to run at 16 MHz:

CSCTL1 |= DCORSEL_6;   // sets clock speed

while(1){
    writeMCP492x(2048, SSX);
    writeMCP492x(2048, SSY);
}

The first step is to install TI Code Composer Studio and then start a new CCS project for this microcontroller. Once created, Code Composer Studio started off with a really basic main.c file:

#include <msp430.h>

/*
 * main.c
 */
int main(void) {
    WDTCTL = WDTPW | WDTHOLD;   // Stop watchdog timer
    return 0;
}

Just using main.c, we compiled and ran to see where we were in terms of memory:

MSP430: There were 46 (code) and 38 (data) bytes written to FLASH/FRAM. The expected RAM usage is 160 (uninitialized data + stack) bytes.

To fully understand what the code was being used for, we wanted to see the assembly along with the map file. We enabled these features by clicking Project -> Show build settings. In the settings, we applied some basic optimization. The first was to tell the compiler to optimize for space instead of speed. Then, under Processor options, we set the code memory model to be small. This just tells the processor to use the first 64k of memory. We are unsure if this will help with space, but it may make looking at the disassembly a bit easier. We also wanted to see the assembly files generated by the compiler.
So we clicked the option to keep the assembly file as well as add the source interlist so we could see the C code in the comments alongside the assembly. And finally, we set up our view to show the assembly side by side with our C code.

Originally the microcontroller for this project was a Teensy 3.1, which was picked for its high speed. However, once Hackaday announced their 1K challenge, we figured maybe we could make this laser project do something interesting with only 1K. At the time, the bootloader memory question was still being debated on the forums, and in such a space-constrained device it is nice to have full hardware debugging where memory/registers can be viewed and instructions can be stepped through. So the Teensy was out, and the only microcontroller we had that met these requirements was a TI MSP430FR133 Launchpad kit. As per the rules, you can use TI's Code Composer Studio on this device for free and get full hardware programming and debugging. This was also a great opportunity/excuse to explore this architecture and learn something about compiler optimization.

This project was originally trying to use a couple of SPI DAC modules to control a mirror galvanometer kit that was purchased from eBay. The basic kit included two galvanometers, a stand, two voltage feedback units and a +/- 15V switching power supply. The controller simply takes an analog voltage between -12 and 12V to position the mirror. Our intent was to use our DAC module, which is capable of producing that voltage directly via an SPI signal. All components were mounted to an MDF board, and the laser was held in using a custom 3D-printed holder.
This is a discussion on Re: [PATCH] ehea: Add hugepage detection - Kernel

> > may not be part of a memory region (firmware restriction). This patch
> > modifies the walk_memory_resource callback fn to filter hugepages and add only standard
> > memory to the busmap which is later on used for MR registration.
>
> Does this support a case where a userspace app is reading network
> packets into a 16GB page backed area? I think you need to elaborate on
> what kind of memory you need to have registered in these memory regions.
> It's hard to review what you've done here otherwise.
>
> > --- linux-2.6.27/drivers/net/ehea/ehea_qmr.c	2008-10-24 09:29:19.000000000 +0200
> > +++ patched_kernel/drivers/net/ehea/ehea_qmr.c	2008-10-24 09:45:15.000000000 +0200
> > @@ -636,6 +636,9 @@ static int ehea_update_busmap(unsigned l
> >  {
> >  	unsigned long i, start_section, end_section;
> >
> > +	if (!pgnum)
> > +		return 0;
>
> This probably needs a comment. It's not obvious what it is doing.

I decided to just rename the var to nr_pages as it is used in all other busmap-related functions in our code. That makes the condition check quite obvious.

> >  	if (!ehea_bmap) {
> >  		ehea_bmap = kzalloc(sizeof(struct ehea_bmap), GFP_KERNEL);
> >  		if (!ehea_bmap)
> > @@ -692,10 +695,47 @@ int ehea_rem_sect_bmap(unsigned long pfn
> >  	return ret;
> >  }
> >
> > -static int ehea_create_busmap_callback(unsigned long pfn,
> > -				       unsigned long nr_pages, void *arg)
> > +static int ehea_is_hugepage(unsigned long pfn)
> > +{
> > +	return ((((pfn << PAGE_SHIFT) & (EHEA_HUGEPAGE_SIZE - 1)) == 0)
> > +		&& (compound_order(pfn_to_page(pfn)) + PAGE_SHIFT
> > +		    == EHEA_HUGEPAGESHIFT) );
> > +}
>
> Whoa. That's dense. Can you actually read that in less than 5 minutes?
> Seriously.

Thanks for this comment. I totally agree - and I'm happy to aerate it a bit.
I had been urged to make it dense during our internal review ;-) > > I'm not sure what else you use EHEA_HUGEPAGE_SIZE for or if this gets > duplicated, but this would look nicer if you just had a: > > #define EHEA_HUGEPAGE_PFN_MASK ((EHEA_HUGEPAGE_SIZE - 1) >> PAGE_SHIFT) > > if (pfn & EHEA_HUGEPAGE_PFN_MASK) > return 0; > > Or, with no new macro: > > if ((pfn << PAGE_SHIFT) & (EHEA_HUGEPAGE_SIZE - 1) != 0) > return 0; > > page_order = compound_order(pfn_to_page(pfn)); > if (page_order + PAGE_SHIFT != EHEA_HUGEPAGESHIFT) > return 0; > return 1; > } > > Please break that up into something that is truly readable. gcc will > generate the exact same code. > > > +static int ehea_create_busmap_callback(unsigned long initial_pfn, > > + unsigned long total_nr_pages, void *arg) > > { > > - return ehea_update_busmap(pfn, nr_pages, EHEA_BUSMAP_ADD_SECT); > > + int ret; > > + unsigned long pfn, start_pfn, end_pfn, nr_pages; > > + > > + if ((total_nr_pages * PAGE_SIZE) < EHEA_HUGEPAGE_SIZE) > > + return ehea_update_busmap(initial_pfn, total_nr_pages, > > + EHEA_BUSMAP_ADD_SECT); > > + > > + /* Given chunk is >= 16GB -> check for hugepages */ > > + start_pfn = initial_pfn; > > + end_pfn = initial_pfn + total_nr_pages; > > + pfn = start_pfn; > > + > > + while (pfn < end_pfn) { > > + if (ehea_is_hugepage(pfn)) { > > + /* Add mem found in front of the hugepage */ > > + nr_pages = pfn - start_pfn; > > + ret = ehea_update_busmap(start_pfn, nr_pages, > > + EHEA_BUSMAP_ADD_SECT); > > + if (ret) > > + return ret; > > + > > + /* Skip the hugepage */ > > + pfn += (EHEA_HUGEPAGE_SIZE / PAGE_SIZE); > > + start_pfn = pfn; > > + } else > > + pfn += (EHEA_SECTSIZE / PAGE_SIZE); > > + } > > + > > + /* Add mem found behind the hugepage(s) */ > > + nr_pages = pfn - start_pfn; > > + return ehea_update_busmap(start_pfn, nr_pages, EHEA_BUSMAP_ADD_SECT); > > } > > > > int ehea_create_busmap(void) > > diff -Nurp -X dontdiff linux-2.6.27/drivers/net/ehea/ehea_qmr.h 
patched_kernel/drivers/net/ehea/ehea_qmr.h > > --- linux-2.6.27/drivers/net/ehea/ehea_qmr.h 2008-10-24 09:29:19.000000000 +0200 > > +++ patched_kernel/drivers/net/ehea/ehea_qmr.h 2008-10-24 09:45:15.000000000 +0200 > > @@ -40,6 +40,8 @@ > > #define EHEA_PAGESIZE (1UL << EHEA_PAGESHIFT) > > #define EHEA_SECTSIZE (1UL << 24) > > #define EHEA_PAGES_PER_SECTION (EHEA_SECTSIZE >> EHEA_PAGESHIFT) > > +#define EHEA_HUGEPAGESHIFT 34 > > +#define EHEA_HUGEPAGE_SIZE (1UL << EHEA_HUGEPAGESHIFT) > > I'm a bit worried that you're basically duplicating hugetlb.h here. Why > not just use the existing 16GB page macros? While you're at it please > expand these to give some more useful macros so you don't have to do > arithmetic on them in the code as much. I don't agree at this point. The 16GB hugepages we're dealing with here are imho a different thing than the hugetlb stuff. Furthermore as far as I can see the hugetlb macros vary depending on the kernel configuration while the ehea driver requires them to be constant independently from the kernel config. Please correct me if I missed something here. > > #define EHEA_SECT_NR_PAGES (EHEA_SECTSIZE / PAGE_SIZE) > > for instance. > > -- Dave > > -- > To unsubscribe from this list: send the line "unsubscribe netdev"
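The reviewer's claim that the masked form is equivalent to the original shift-and-mask alignment test is easy to sanity-check outside the kernel. The sketch below (in Python for brevity, with the ehea constants hard-coded and PAGE_SHIFT assumed to be 12; the kernel's compound_order() check is omitted since it needs real struct pages) compares the two tests across a range of pfns:

```python
# Constants from ehea_qmr.h; PAGE_SHIFT = 12 is an assumption (4K pages).
PAGE_SHIFT = 12
EHEA_HUGEPAGESHIFT = 34
EHEA_HUGEPAGE_SIZE = 1 << EHEA_HUGEPAGESHIFT
EHEA_HUGEPAGE_PFN_MASK = (EHEA_HUGEPAGE_SIZE - 1) >> PAGE_SHIFT

def aligned_original(pfn):
    # The dense form from the patch: shift the pfn up to a byte address,
    # then mask against the hugepage size.
    return ((pfn << PAGE_SHIFT) & (EHEA_HUGEPAGE_SIZE - 1)) == 0

def aligned_masked(pfn):
    # The reviewer's suggested form: mask the pfn directly.
    return (pfn & EHEA_HUGEPAGE_PFN_MASK) == 0

# The two tests agree for every pfn sampled.
for pfn in range(0, 1 << 24, 4093):
    assert aligned_original(pfn) == aligned_masked(pfn)
```

Both reduce to "are the low 22 bits of the pfn zero", which is why the readable version compiles to the same code.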
http://fixunix.com/kernel/550446-re-%5Bpatch%5D-ehea-add-hugepage-detection.html
ISO9660 (CD-ROM) filesystem driver.

    #include <sys/cdefs.h>
    #include <arch/types.h>
    #include <kos/limits.h>
    #include <kos/fs.h>

This driver implements support for reading files from a CD-ROM or CD-R in the Dreamcast's disc drive. This filesystem mounts itself on /cd. The driver supports Rock Ridge, thanks to Andrew Kieschnick, and also supports the Joliet extensions, thanks to Bero. The implementation was originally based on a simple ISO9660 implementation by Marcus Comstedt.

The header defines the maximum number of files that can be open at once, and a function to reset the internal ISO9660 cache. Resetting the cache breaks connections to all files; it generally assumes that a new disc has been or will be inserted.
http://cadcdev.sourceforge.net/docs/kos-2.0.0/fs__iso9660_8h.html
Reads Hydrological Simulation Program - FORTRAN binary files and prints to screen.

Documentation for hspfbintoolbox

The hspfbintoolbox is a Python script and library of functions to read Hydrological Simulation Program - FORTRAN (HSPF) binary files and print to screen. The time series can then be redirected to a file, or piped to other command-line programs like tstoolbox.

Requirements
- pandas - on Windows this is part of the Python(x,y), Enthought, or Anaconda distributions
- mando - command line parser
- tstoolbox - utilities to process time-series

Installation
Should be as easy as running "pip install hspfbintoolbox" or "easy_install hspfbintoolbox" at any command line. Not sure on Windows whether this will bring in pandas, but as mentioned above, if you start with Python(x,y) then you won't have a problem.

Usage - Command Line
Just run 'hspfbintoolbox' to get a list of subcommands:
- catalog - Prints out a catalog of data sets in the binary file.
- dump - Prints out ALL data from a HSPF binary output file.
- extract - Prints out data to the screen from a HSPF binary output file.
- time_series - DEPRECATED: Use 'extract' instead.

The default for all of the subcommands is to accept data from stdin (typically a pipe). If a subcommand accepts an input file for an argument, you can use "--infile=filename", or to explicitly specify from stdin use "--infile='-'".

Usage - Python
From Python, import hspfbintoolbox:

    import hspfbintoolbox

    # Then you could call the functions
    ntsd = hspfbintoolbox.dump('tests/test.hbn')

    # Once you have a pandas DataFrame you can use that as input.
    from tstoolbox import tstoolbox
    ntsd = tstoolbox.aggregate(statistic='mean', agg_interval='daily', input_ts=ntsd)
https://pypi.org/project/hspfbintoolbox/2.6.11.4/
The CData Python Connector for Phoenix enables you to create ETL applications and pipelines for Phoenix data in Python with petl. The rich ecosystem of Python modules lets you get to work quickly and integrate your systems more effectively. With the CData Python Connector for Phoenix and the petl framework, you can build Phoenix-connected applications and pipelines for extracting, transforming, and loading Phoenix data. This article shows how to connect to Phoenix with the CData Python Connector and use petl and pandas to extract, transform, and load Phoenix data.

With built-in, optimized data processing, the CData Python Connector offers unmatched performance for interacting with live Phoenix data in Python. When you issue complex SQL queries against Phoenix, the driver pushes supported SQL operations, like filters and aggregations, directly to Phoenix and utilizes the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations).

Connecting to Phoenix Data
Connecting to Phoenix data looks just like connecting to any relational data source. Create a connection string using the required connection properties. For this article, you will pass the connection string as a parameter to the connector's connect function. After installing the CData Phoenix Connector, follow the procedure below to install the other required modules and start accessing Phoenix through Python objects.

Install Required Modules
Use the pip utility to install the required modules and frameworks:

    pip install petl
    pip install pandas

Build an ETL App for Phoenix
Start with the required imports:

    import petl as etl
    import pandas as pd
    import cdata.apachephoenix as mod

You can now connect with a connection string. Use the connect function for the CData Phoenix Connector to create a connection for working with Phoenix data.

    cnxn = mod.connect("Server=localhost;Port=8765;")

Create a SQL Statement to Query Phoenix
Use SQL to create a statement for querying Phoenix. In this article, we read data from the MyTable entity.
    sql = "SELECT Id, Column1 FROM MyTable WHERE Id = '123456'"

Extract, Transform, and Load the Phoenix Data
With the query results in hand, we can use petl to extract, transform, and load the Phoenix data. In this example, we extract Phoenix data, sort the data by the Column1 column, and load the data into a CSV file.

    table1 = etl.fromdb(cnxn, sql)
    table2 = etl.sort(table1, 'Column1')
    etl.tocsv(table2, 'mytable_data.csv')

With the CData Python Connector for Phoenix, you can work with Phoenix data just like you would with any database, including direct access to data in ETL packages like petl.

Free Trial & More Information
Download a free, 30-day trial of the Phoenix Python Connector to start building Python apps and scripts with connectivity to Phoenix data. Reach out to our Support Team if you have any questions.

Full Source Code

    import petl as etl
    import pandas as pd
    import cdata.apachephoenix as mod

    cnxn = mod.connect("Server=localhost;Port=8765;")

    sql = "SELECT Id, Column1 FROM MyTable WHERE Id = '123456'"

    table1 = etl.fromdb(cnxn, sql)
    table2 = etl.sort(table1, 'Column1')
    etl.tocsv(table2, 'mytable_data.csv')
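The same extract-sort-load flow can be exercised without the CData driver. The sketch below substitutes an in-memory SQLite table for Phoenix (the table, column names, and sample rows are illustrative stand-ins) and uses only the standard library, mirroring what etl.fromdb, etl.sort, and etl.tocsv do above:

```python
import csv
import sqlite3

# Stand-in for the Phoenix connection: an in-memory SQLite table.
cnxn = sqlite3.connect(":memory:")
cnxn.execute("CREATE TABLE MyTable (Id TEXT, Column1 TEXT)")
cnxn.executemany("INSERT INTO MyTable VALUES (?, ?)",
                 [("123456", "b"), ("123456", "a"), ("999", "c")])

# Extract: the same shape of query the article runs against Phoenix.
sql = "SELECT Id, Column1 FROM MyTable WHERE Id = '123456'"
rows = cnxn.execute(sql).fetchall()

# Transform: sort by Column1, as etl.sort() does.
rows.sort(key=lambda r: r[1])

# Load: write a CSV with a header row, as etl.tocsv() does.
with open("mytable_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Id", "Column1"])
    writer.writerows(rows)
```

Swapping the sqlite3 connection back for mod.connect() and the manual steps back for their petl equivalents recovers the article's pipeline.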
https://www.cdata.com/kb/tech/phoenix-python-petl.rst
This is the fifth post in this series, using Azure IoT Hub as the entry point. A C# MQTT client will mimic device location, Android OS info and battery information, with this data being shown in a Power BI dashboard. Critically low battery levels will be emailed so an administrator can take immediate action to replace the batteries. The next part in this series will move to using a real device.

This example is based on Microsoft's tutorial which covers configuring message routing with IoT Hub. I have simplified the tutorial slightly by removing the storage container, but please refer back to the original tutorial for more detail.

Disclaimers:
- This is a very simple example, but the aim is to show the principles of injecting IoT data into Azure and how that data might be processed or stored for later retrieval and analysis.
- Any hosted cloud computing solution may incur costs. Please ensure you are aware of the costs associated with the cloud components you are using and how to monitor and set spending limits.

Objectives:
- Publish simulated device telemetry using the Azure Devices Client SDK
- Receive device telemetry into Azure using the IoT Hub
- Set up a message route to perform a special action for devices with critical battery level
- Use a Service Bus and Logic App to send an email to notify an administrator when a device has a critically low battery
- Visualize received data using a Power BI dashboard

Data that will be simulated, along with some sample values, is defined in JSON as follows:

    {
        "deviceId": "test-device",
        "dateTime": "[String representation of date]",
        "model": "TC57",
        "lat": "35.6602997",
        "lng": "139.7282743",
        "battLevel": "[Randomly generated]",
        "battHealth": "[Randomly generated]",
        "osVersion": "8.1.0",
        "patchLevel": "2019-02-01",
        "releaseVersion": "01-10-09.00-OG-U00-STD"
    }

Source Code: The source code discussed in this tutorial is available from GitHub.

Prerequisites:
- An Azure subscription.
If you do not have an Azure subscription you can create a free account before you begin.
- An installation of Visual Studio (to run the simulator)
- A Power BI account to analyse the stream analytics.
- An Office 365 account to send notification emails.

Set up resources:
It is necessary to create some resources in Azure to facilitate this tutorial. The easiest way to create resources is using the Azure Cloud Shell.
- Open the Azure portal
- Select the Cloud shell button on the menu in the upper-right corner of the screen

The following Azure CLI script will create:
- A resource group
- An IoT hub in the S1 tier with a consumer group, which is used by the Azure Stream Analytics job when retrieving data
- A Service Bus namespace and queue
- A device identity for the simulated device that sends messages to the hub.

Although the Azure Cloud Shell supports both Azure CLI and PowerShell, for simplicity only the bash commands are given here (and in GitHub). Paste these into the Azure Cloud Shell and execute them. Note that the variables which must be globally unique have $RANDOM concatenated to them; be sure to make a note of the actual names of the generated resources when using them in subsequent steps.

    # This is the IoT Extension for Azure CLI.
    # You only need to install this the first time.
    # You need it to create the device identity.
    az extension add --name azure-cli-iot-ext

    # Set the values for the resource names that don't have to be globally unique.
    # The resources that have to have unique names are named in the script below
    # with a random number concatenated to the name so you can probably just
    # run this script, and it will work with no conflicts.
    location=westus
    resourceGroup=EntAndroidResources
    iotHubConsumerGroup=EntAndroidConsumers
    containerName=entandroidresults
    iotDeviceName=test-device

    # Create the resource group to be used
    # for all the resources for this tutorial.
    az group create --name $resourceGroup \
        --location $location

    # The IoT hub name must be globally unique, so add a random number to the end.
    iotHubName=EntAndroidHub$RANDOM
    echo "IoT hub name = " $iotHubName

    # Create the IoT hub.
    az iot hub create --name $iotHubName \
        --resource-group $resourceGroup \
        --sku S1 --location $location

    # Add a consumer group to the IoT hub.
    az iot hub consumer-group create --hub-name $iotHubName \
        --name $iotHubConsumerGroup

    # The Service Bus namespace must be globally unique, so add a random number to the end.
    sbNameSpace=EntAndroidSBNamespace$RANDOM
    echo "Service Bus namespace = " $sbNameSpace

    # Create the Service Bus namespace.
    az servicebus namespace create --resource-group $resourceGroup \
        --name $sbNameSpace \
        --location $location

    # The Service Bus queue name must be globally unique, so add a random number to the end.
    sbQueueName=EntAndroidSBQueue$RANDOM
    echo "Service Bus queue name = " $sbQueueName

    # Create the Service Bus queue to be used as a routing destination.
    az servicebus queue create --name $sbQueueName \
        --namespace-name $sbNameSpace \
        --resource-group $resourceGroup

    # Create the IoT device identity used by the simulator.
    # (This create command was truncated in the original text and has been
    # restored here; the script's stated purpose includes creating the identity.)
    az iot hub device-identity create --device-id $iotDeviceName \
        --hub-name $iotHubName

    # Show the device identity.
    az iot hub device-identity show --device-id $iotDeviceName \
        --hub-name $iotHubName

After some time, the script will finish running. I recommend you take a copy of the output so you can refer to it later.

Verify the test device was created:
The script which was run in the previous stage will create an IoT device as the penultimate step. To check that the IoT test device was successfully created:
- Open the Azure portal
- Click on Resource groups and select your resource group, EntAndroidResources in the case of this tutorial.
- In the list of resources, click your IoT hub, EntAndroidHub in the case of this tutorial.
- Select IoT Devices from the hub pane
- You should see a single device, test-device
- If you click on test-device you will be presented with the keys and connection string associated with this device.
- Both the primary key and connection string will be required in subsequent steps of this and the next tutorial respectively so make a note of these for future reference. Set up message routing We are going to route messages to different resources based on properties attached to the message by the device or simulated device. Messages that are not custom routed are sent to the default endpoint. Note: In a real solution we would probably also route those messages with battery level <= 15 to the Power BI dashboard but for consistency with the parent tutorial, I will keep things simple here. Routing to a Service Bus queue - In the Azure portal, click Resource Groups, then select your resource group. This tutorial uses EntAndroidResources. - Click the IoT hub under the list of resources, EntAndroidHub in the case of this tutorial. - Click Message Routing. - In the Message routing pane, click +Add. - On the Add a Route pane, click +Add next to the endpoint field and select Service bus queue. - On the Add Service Endpoint pane specify the following fields: - Endpoint Name, CriticalBatteryQueue in the case of this tutorial - Service Bus Namespace: From the dropdown list select the service bus namespace created in the preparation steps. This tutorial uses EntAndroidSBNamespace. - Service Bus queue: From the dropdown list select the service bus queue created in the preparation steps. This tutorial uses entandroidsbqueue. - Click Create to add the Service Bus queue endpoint. - Now complete the rest of the routing query information. This query specifies the criteria for sending messages to the Service Bus queue which was just added as an endpoint: - Name: Name of the routing query, this tutorial uses SBQueueRoute - Endpoint: The previously configured endpoint, in this case CriticalBatteryQueue - Data source: Select ‘Device Telemetry Messages’ from the dropdown list. - Routing query: Enter batteryLevel=”critical” as the query string. 
In subsequent steps we will assign this in the client when the battery level is critical. - Click Save. You will be returned to the routes pane to see the route you just configured - Close the Message Routing pane, which returns you to the Resource group pane. Create a Logic App The Service Bus queue will receive critical battery level messages. Set up a Logic app to monitor the Service Bus queue and send an email when a message is added to the queue. - In the Azure portal, click +Create a resource. Put “logic app” in the search box and click Enter. From the search results displayed, select Logic App, then click Create to continue to the Create logic app pane. Fill in the fields: - Name: This field is the name of the logic app, EntAndroidLogicApp in this tutorial. - Subscription: Select your Azure subscription - Resource group: Click ‘Use existing’ and select your resource group, EntAndroidResources in the case of this tutorial. - Location: This tutorial uses West US as specified when we set up the resources. - Log Analytics: This toggle should be turned off. - Click create. - Open the Logic App. The easiest way to get to the Logic App is to click on Resource groups, select your resource group, then select the Logic App from the list of resources. The Logic Apps Designer page appears (you might have to scroll over to the right to see the full page). On the Logic Apps Designer page, scroll down until you see the tile in the Templates section that says Blank Logic App + and click it. - Select the Connectors tab and from the displayed connectors select Service Bus. - A list of triggers is displayed. Select Service Bus – When a message is received in a queue (auto-complete). - On the next screen, fill in the Connection Name. This tutorial uses EntAndroidConnection - Click the Service Bus namespace (EntAndroidSBNamespace in the case of this tutorial). When you select the namespace, the portal queries the Service Bus namespace to retrieve the keys. 
Select and click Create. - On the next screen, select the name of the queue (this tutorial uses ‘entandroidsbqueue’) from the dropdown list. You can use the defaults for the rest of the fields. - Now set up the action to send an email when a message is received in the queue. In the Logic Apps Designer click + New step to add a step. In the Choose an action pane, find and click Office 365 Outlook. On the triggers screen, select Office 365 Outlook – Send an email. - Next, log into your Office 365 account to set up the connection. Specify the email address for the recipient(s) of the emails. Also specify the subject, and type what message you would like the recipient to see in the body. For testing, fill in your own email address as the recipient. - Click Add dynamic content to show the content from the message that you can include. Select Content – it will include the message in the email. - Click Save then close the Logic App Designer. If you wish, you can now jump directly to the “Run Simulated Device app” step to verify that you have configured the message route, Service Bus and Logic app correctly but in the next step, we will configure the stream analytics job that will power the Power BI dashboard Set up Azure Stream Analytics To see the data in a Power BI visualization, first set up a Stream Analytics job to retrieve the data. For consistency with the tutorial on which this tutorial is based only non-critical battery events are sent to the default endpoint and will be retrieved by the Stream Analytics job for the Power BI visualization. Create the Stream Analytics job - In the Azure portal, click Create a resource > Internet of Things > Stream Analytics job. - Enter the following information for the job - Job name: The name of the job, EntAndroidJob in the case of this tutorial. - Resource group: Use the same resource group used by your IoT hub. This tutorial uses EntAndroidResources. 
- Location: Use the same location as specified in the setup script, 'West US' in the case of this tutorial.
- Click Create to create the job.

To get back to the job, click Resource groups, select the resource group (EntAndroidResources in the case of this tutorial) then click the Stream Analytics job in the list of resources.

Add an input to the Stream Analytics job
- Under Job Topology, click Inputs.
- In the Inputs pane, click Add stream input and select IoT Hub. On the screen that comes up, fill in the following fields:
  - Input alias: This tutorial uses entandroidinputs
  - Subscription: Select your subscription
  - IoT Hub: Select the IoT Hub. This tutorial uses EntAndroidHub
  - Endpoint: Select Messaging
  - Shared access policy name: Select iothubowner, which should be the default
  - Consumer group: Select the consumer group created as part of the resource setup. This tutorial uses entandroidconsumers
  - For the rest of the fields, accept the defaults
- Click Save.

Add an output to the Stream Analytics job
- Under Job Topology, click Outputs.
- In the Outputs pane, click Add, then select Power BI. On the screen that comes up, fill in the following fields:
  - Output alias: The unique alias for the output. This tutorial uses entandroidoutputs.
  - Dataset name: Name of the dataset to be used in Power BI. This tutorial uses entandroiddataset.
  - Table name: Name of the table to be used in Power BI. This tutorial uses entandroidtable.
  - Accept the defaults for the rest of the fields.
- Click Authorize and sign into your Power BI account.
- At this point you can choose to change the Group workspace should you wish to do so. This tutorial will use the default 'My workspace'. You can create additional workspaces from the Power BI tool.
- Click Save.

Configure the query of the Stream Analytics job
- Under Job Topology, click Query.
- Replace [YourOutputAlias] with the output alias of the job. This tutorial uses entandroidoutputs.
- Replace [YourInputAlias] with the input alias of the job. This tutorial uses entandroidinputs.
- Click Save.
- Close the Query pane and return to the view of the resources in the Resource Group. Click the Stream Analytics job, EntAndroidJob in the case of this tutorial.

Run the Stream Analytics Job
- In the Stream Analytics job, click Start > Now > Start. Once the job successfully starts, the job status changes from Stopped to Running.
- Data is required to set up the Power BI report, therefore the next step is to run a simulated device app before setting up the Power BI dashboard.

Run Simulated Device app
When setting up resources for this tutorial a test device was automatically created (see also the earlier section on "Verify the test device was created"). In this section we will use a .NET console app that simulates a physical device sending device-to-cloud messages to an IoT hub, including the generation of random critically low battery events.

Download the solution for IoT Device simulation from GitHub. This is based on the solution from the original tutorial, but I have modified it to pass device telemetry such as battery level and location information.

Open the solution file (IoT_SimulatedDevice.sln) in Visual Studio and open Program.cs. Substitute the correct values for your iotHubUri and deviceKey. The format of the IoT hub hostname is {iot-hub-name}.azure-devices.net (this tutorial uses EntAndroidHub10110.azure-devices.net). You can find the values for your test device from the Device details pane in Azure (see the previous section, "Verify the test device was created"). You want the 'Primary key' and the 'HostName', which is part of the ConnectionString.
    private readonly static string s_myDeviceId = "test-device"; // Device Id
    private readonly static string s_iotHubUri = "EntAndroidHub10110.azure-devices.net"; // HostName
    private readonly static string s_deviceKey = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"; // Primary Key

Run and test
Run the console application. Wait a few minutes. You can see the messages being sent on the console screen of the application, of which around 20% should be critical battery messages. The app sends a new device-to-cloud message to the IoT hub every second. The message contains a JSON-serialized object with the device ID and some mocked telemetry data. The batteryLevel property will be set to "critical" if the randomly generated value drops below 15. Critical battery events will generate an email (courtesy of the Service Bus and Logic App) whilst normal battery events will be displayed in the BI report set up in the next step (courtesy of the Azure Stream Analytics job).

If everything is set up correctly, at this point you should see the following results:
- The console application reports that data is being successfully sent

This means that:
- The routing to the Service Bus queue is working correctly.
- The Logic App retrieving the message from the Service Bus queue is working correctly.
- The Logic App connector to Outlook is working correctly.

Set up the Power BI Visualizations
- Go to Workspaces and select the workspace that you set when you created the output for the Stream Analytics job. This tutorial uses My Workspace.
- Click the Datasets tab.
- You should see the listed dataset that you specified when you created the output for the Stream Analytics job. This tutorial uses entandroiddataset. (It may take 5-10 minutes for the dataset to show up the first time.)
- Under ACTIONS, click the first icon to create a report.

Create a line chart to show battery level over time
- On the report creation page, add a line chart by clicking on the line chart icon.
- On the Fields pane, expand the table you specified when you created the output for the Stream Analytics job, entandroidtable in the case of this tutorial.
- Drag EventEnqueueUtcTime to Axis on the Visualizations pane.
- Drag battLevel to Values.
- A line chart is created. The x-axis displays the date and time in the UTC time zone whilst the y-axis displays the simulated battery data.

Create a map to show the location of the device
- On the report creation page, add an ArcGIS Maps visualization.
- On the Fields pane, drag the lat and lng to the appropriate latitude and longitude fields.
- A map is created showing the location of the simulated device (in this case Tokyo).
- Click Save to save the report.

Seeing data in both dashboards means:
- The routing to the default endpoint is working correctly.
- The Azure Stream Analytics job is streaming correctly.
- The Power BI Visualization is set up correctly.

You can refresh the charts to see the most recent data by clicking the Refresh button on the top of the Power BI window.

Conclusion
Simulated data is being sent over MQTT into the Azure IoT Hub, where it is analysed by a logic app and streamed to Power BI to visualise. In the next part of this post series we will replace the simulator with a real device and show how Power BI can be used to quickly get up and running with data visualisation from a device fleet.
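The simulator's decision logic, randomizing a battery level and attaching the batteryLevel="critical" routing property when it drops below 15, can be sketched in plain Python. (The C# app in the GitHub solution is the real implementation; the helper below is hypothetical and only follows the JSON shape and threshold described in this post.)

```python
import json
import random
from datetime import datetime, timezone

def make_message(device_id="test-device"):
    """Build one simulated telemetry payload plus its routing properties."""
    batt_level = random.randint(1, 100)
    body = {
        "deviceId": device_id,
        "dateTime": datetime.now(timezone.utc).isoformat(),
        "model": "TC57",
        "lat": "35.6602997",
        "lng": "139.7282743",
        "battLevel": str(batt_level),
        "battHealth": str(random.randint(50, 100)),
        "osVersion": "8.1.0",
        "patchLevel": "2019-02-01",
        "releaseVersion": "01-10-09.00-OG-U00-STD",
    }
    # Application properties travel alongside the payload; the IoT Hub
    # route query batteryLevel="critical" matches on this property.
    properties = {}
    if batt_level < 15:
        properties["batteryLevel"] = "critical"
    return json.dumps(body), properties
```

Messages whose properties carry batteryLevel="critical" would match the SBQueueRoute query and land in the Service Bus queue; all others go to the default endpoint and on to the Stream Analytics job.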
https://developer.zebra.com/community/home/blog/2019/02/12/enterprise-android-in-a-hosted-cloud-iot-solution-part-5-azure-iot-hub
David Ebbo's blog - The Ebb and Flow of ASP.NET Unlike Linq To Sql, Entity Framework directly supports Many to Many relationships. I’ll first describe what this support means. In the Northwind sample database, you have Employees, Territories and EmployeeTerritories tables. EmployeeTerritories is a ‘junction’ table which has only two columns: an EmployeeID and a TerritoryID, which creates a Many to Many relationship between Employees and Territories. When using Linq To Sql, all three tables get mapped in your model, and you need to manually deal with the EmployeeTerritories junction table. But when using Entity Framework, the EmployeeTerritories junction table is not part of your model. Instead, your Employee entity class has a ‘Territories’ navigation property, and conversely your Territory entity class has an ‘Employees’ navigation property. What Entity Framework does under the cover to make all this work is pretty amazing! Unfortunately, in ASP.NET Dynamic Data’s initial release (as part of Framework 3.5 SP1), we didn’t have time to add proper support for such Many to Many relationships, and in fact it behaves in a pretty broken way when it encounters them. The good news is that it is possible to write a field template that adds great support for this, and that is exactly what this blog post is about. I should note that a couple of our users have written such field templates before (in particular, see this post). In fact, that’s what got me going to write one! :) One difference is that I set mine out to be completely generic, in the sense that it doesn’t assume any specific database or table. e.g. it works for Northwind’s Employees/Territories (which my sample includes), but works just as well for any other database that uses Many to Many relationships. In read-only mode, it shows you a list of links to the related entities. e.g. when looking at a Territory, you’ll see links to each of the Employees that work there. 
In edit and insert mode, it gets more interesting: it displays a list of checkboxes, one for each Employee in the database. Then, whether the Employee works in this territory is determined by whether the checkbox is checked. Pretty much what you'd expect!

I won't go into great details about how it works here, but if you are interested, I encourage you to download the sample and look at the code, which I commented pretty well. The key things to look at are:

Enjoy, and let me know if you have feedback on this.

Hello David, This is really helpful. I was looking for it and it came at the right time. Thanks. Any idea whether this will be supported in LINQ to SQL sometime? Shail

Thanks for making Dynamic Data more complete, David! I think this is one of the 3 features that were missing in V1:
1. Not able to reorder columns (in detail & list view)
2. No support for many to many relations
3. No support for creating a lookup inline. E.g. if I create a Product and I have to specify a Category for it that doesn't exist yet, I have to abort the Product creation, go and create a Category, and start creating a Product again. This is a usability problem that can be addressed by creating the Category on the same page the user creates the Product, so the Category is effectively created "inline".

I know point 1 is going to get introduced in V2 and I hope you will consider point 3 as well! This would cover almost 90% of the requirements I typically have and I would recommend it wholeheartedly to my team.

I have some feedback on the Many to Many support:
- The list can get pretty long, so I would wrap it inside a div and give it a CSS property of "overflow:scroll" so that the vertical height can be controlled;
- Give the developer the option to split the checkboxes into x columns
- Give the developer the option to order left to right or top to bottom

Also, is there a book on Dynamic Data in the pipeline that you know of?

Is this template available in the latest CodePlex release?
Shail, I'm not aware of plans to add Many To Many support to Linq To Sql. Generally, Linq To Sql is a simpler/lighter technology, which does not have the same power of abstraction as Entity Framework. Of course, theoretically it may be possible to make this work in Dynamic Data by directly dealing with the 'Junction' table in the field template, though that might be non-trivial.

Tom, thanks for all the feedback. Note that all the styling-related changes you bring up (overflow scroll, # of columns, left to right) are things that can be done simply by changing ManyToMany_Edit.ascx (which has the CheckBoxList). Though of course, if the goal is to use different settings for different fields in the same app, it might make sense to create a metadata attribute that would pass that information in from the model to the field template. For your other points that don't directly relate to Many To Many, I would prefer to start a new discussion on the Dynamic Data forum, in order to keep this post focused on just Many To Many :) Thanks!

Buckley, right now it is only available here, but it should make its way on to Codeplex at some point (and eventually in our next release).

"I'm not aware of plans to add Many To Many support to Linq To Sql" That was definitely announced way back, just as support for other providers.

Mike, you may very well be right, as I don't work in the Linq To Sql team (I'm in the ASP.NET/Dynamic Data team), and I'm not aware of everything they are doing. It might be worth asking on the Linq To Sql forum () to reach the 'experts' in that area. From a Dynamic Data point of view, if Linq To Sql adds this support, we can support it accordingly in the field template.

Great Post! Thank you. Anyone attended the PDC last week? Just got back from PDC 2008 last week, we showed all kinds of cool new stuff. Here is a list of links

I spent 10 hours, literally, trying to get this to work with my SQL Server 2008 database.
I downloaded this Many To Many example and it worked perfectly fine, except I couldn’t get mine to work the same. My application wouldn't update; it would just stick, so I combed through all of the code in both apps to find the difference. Well, I'm embarrassed to admit I did not make the linking table's (EmployeesTerritories) two ID fields (EmployeeID, TerritoryID) primary keys and foreign keys, respectively. I tried to do this relationship connection through the entity model instead. So if anyone is having a similar issue, try this out! Build the relationships in your database first.

Thank you for the great post on how to deal with Many-To-Many relationships. Unfortunately it doesn't work in its current form on the project I'm working on because the joining table contains some data about the relationship. I am looking into explicitly defining the relationship as opposed to implicitly having it defined. I do have one question regarding the AutoFieldGenerator. Specifically the following lines: string uiHint = null; if (column.Provider.Association != null && column.Provider.Association.Direction == AssociationDirection.ManyToMany) { uiHint = "ManyToMany"; } fields.Add(new DynamicField() { DataField = column.Name, UIHint=uiHint }); I noticed that the UIHint is set to either null or ManyToMany, yet it still seems to pick up UIHints defined in a MetadataType class: [UIHint("Image")] public object Logo{ get; set; } Is that because the UIHint from the MetadataType is applied after the AutoFieldGenerator? If that is the case, any UIHints for the FK field will remove the ManyToMany that is applied in the AutoFieldGenerator. I thought I should have used something like the following but found it made no difference: UIHint = uiHint != null ? uiHint : column.UIHint Cheers, Q

Q, This works because if DynamicField has a null UIHint, then it always defaults to the one from the MetaColumn. Basically, what you set on the DynamicField overrides the MetaColumn uihint.
David, I'm getting a 'Repeater1 not found' type error in ManyToMany.aspx.cs when I include this in my own project... I've made the necessary mods to ensure it's in the proper namespace and everything. Any ideas?

J Stroud: the field templates were written for a web site. To use them in a Web Application, right click the aspx files in VS and choose 'convert to web application'. You should then be ok.

Thanks! Still muddling about with 2008.

Scott Hunter brings a summary of the new features coming in ASP.NET 4.0 and Visual Studio 2010. Learn

Does this control work for Dynamic Data LinqToSQL apps? If not, would it be possible to?

Michael, copying from a recent forum thread: "I think in theory it can be made to work with Linq to SQL, but it would certainly be harder. The reason is that with L2S, the framework doesn't abstract out the 'junction table', so the field templates would need to do all the bookkeeping themselves to make it all work. But in theory, I don't see why it couldn't be made to work with the right field template."

There are many ways to customize an ASP.NET Dynamic Data site, which can sometimes be a bit overwhelming

I tried your code, works very nice, thanks. Though, ManyToMany_Edit.ascx isn't loaded/executed when in 'Insert' mode. When I set a breakpoint in 'Page_Load' of ManyToMany_Edit, it doesn't even get there. Any ideas?! philip

Philip, what exactly are you trying? In the sample I shared, if I: - Go to Territories/ListDetails.aspx - Click New on the DetailView The insert UI correctly uses the Many To Many field template to pick employees.

I have implemented this field on a web application, and it works. It does, however, result in a lot of roundtrips to the database. It seems that for every foreign key column in every row on the main table, a call to the database is made to extract the related records. This means that a page with 10 records shown and 3 many to many columns makes 30 extra calls to the database.
The call is made every time a RelatedEnd entityCollection is loaded. On the listDetails page you do load foreign key columns explicitly, so why is it necessary to do this row-by-row loading of related entities?

Jan, this happens because by default we only Include the entity ref columns in the query, and not the entity sets. See this line in list.aspx.cs: GridDataSource.Include = table.ForeignKeyColumnsNames; If you were to add the ManyToMany column in that list, you should get all the data in one query. E.g. in the Territories case, with the current code the Include only gets set to "Region". Instead, you'd want to set it to "Region,Employees".

Hi David, Thank you for the quick response. It did the trick when I put this check on the related end: if (!entityCollection.IsLoaded) { entityCollection.Load(); } It does, however, require that I use a custom page in order to hardcode the table names that need to be included in the GridDataSource query. I already have this, so that is ok. For later usage, however, I would like to know if you know a general way to get the names of the navigation properties on the main entity, to be able to include the entities behind them. If there is such a way, I suggest this as the default behavior for the ListDetails page. Regards, Jan

Jan, you should be able to do this by looking at table.Columns.OfType<MetaChildrenColumn>(), and add them to the include. Obviously, this needs to be done carefully, as it will cause one big query to happen. It's good if the data ends up being used, and bad if not (e.g. if the column ends up not being displayed).

A collection of links for those of you who develop. ASP.NET iTunes skin grid Check/Uncheck all Items in an ASP

Please post corrections/new submissions to the Dynamic Data Forum.
Put FAQ Submission/Correction in Hi All, If you want to know more about the new ASP.NET 4.0 and Visual Studio 2010 enhancements you can Articles and Blog Posts A Many-To-Many Field Template for Dynamic Data (David Ebbo) Dynamic Data Preview
http://blogs.msdn.com/davidebb/archive/2008/10/25/a-many-to-many-field-template-for-dynamic-data.aspx
The Definitive React Hooks Cheatsheet Antonin Januska

React Hooks is the new hotness in the React world. I'm writing steadily more and more of them and I thought it would be useful to have a cheatsheet to refer back to which encompasses the basic hooks as well as the intricacies of useEffect. Check out the official Hooks API Reference for more in-depth information. Table of Contents - useEffect

When does it run? On every render. What's the catch? It's not just a componentDidUpdate replacement; it also runs on mount, so it's not 1-to-1. Important features? useEffect can take in a 2nd argument; for this run-on-every-render use case, you have to skip that argument. You can also return a function; we'll cover that in the next section. Code sandbox playground: Go play with it. Syntax: import { useEffect } from 'react'; useEffect(() => { // whatever runs here will run on each re-render });

useEffect as a substitute for componentDidMount + componentWillUnmount

When does it run? On component mount and unmount. What's the catch? The syntax is very close to the previous use case. It threw me off several times but it makes sense once you read the docs. If the effect runs more than once, make sure you passed in the 2nd argument. Important features? This is an effect that runs only once. The mount logic goes in the body of the effect function; the unmount/cleanup logic goes into a function that you return from the effect. Code sandbox playground: Go play with it. Syntax: import { useEffect } from 'react'; useEffect(() => { // run mount logic here such as fetching some data return () => { // unmount logic goes here }; }, []); // note the empty array

You can leave either the mount or unmount logic empty to work only off one of those lifecycle substitutes, meaning it can act as a mount-only or an unmount-only effect. The next use case is an effect that runs when its dependencies change: - When does it run? When the component re-renders, useEffect will check the dependencies. If the dependency values changed, useEffect will run the effect. What's the catch? React does a shallow comparison.
If you use an object or an array that you mutate, React will think nothing changed. Important features? useEffect skips running the effect when things don't change. You don't actually have to use the dependency values in the effect. You can pass in a prop value as a dependency. Code sandbox playground: Go play with it. Syntax: import { useEffect } from 'react'; function SomeComponent(props) { useEffect(() => { // logic runs only when dependency variables changed }, [arrOfDependency, values, props.id]); // array of values to check if they've changed }

Potential use cases. Since the hook is more difficult to explain, I'd like to offer a list of use cases: - run a side effect (like a fetch) when a prop changes to get new data - run a resource-heavy calculation only when the calculation values change - update the page (like document title) when a value updates

useState

State is probably the reason why people switch from stateless (functional) components to class components. useState allows us to have stateful components without classes. What does it return? Current state and a function that lets you set state. What's the catch? The state-setting function will replace the previous state with the new one rather than merging them as class state would have. You need to merge your objects yourself before setting the state. Important features? You can use as many useState hooks in your component as you want. Passing any value to useState will create the initial state. It's also a convention to not call the variables state and setState but rather by contextual names (e.g. user and setUser). useState accepts any value for state; it doesn't have to be an object.
Code Sandbox playground: Check out the useState examples. Syntax: import { useState } from 'react'; // setup const defaultValue = { name: "Antonin" }; const [state, setState] = useState(defaultValue); // scenario 1 usage // resulting state only contains the key `name`

useReducer

useReducer is an alternative to useState, and if you've used Redux in the past, this will look familiar. What are the arguments? What does it return? useReducer takes in a reducer function and the initialState. It returns the current state and a dispatcher (sound familiar?). How does it run? On state change, dispatch an object with a type and a data payload (read about flux standard action for more info). The reducer we passed into useReducer will receive the current state and the dispatched object. It returns the new state. What's the catch? It's a more complicated workflow, but it works just like you'd expect if you've used Redux. Important features? The reducer gets run on every dispatch. It gets access to the previous state. useReducer also includes a 3rd argument you can use to create the initial state. Code Sandbox playground: Check out the useReducer example. Syntax: import { useReducer } from 'react'; function reducer(currentState, action) { switch(action.type) { // handle each action type and how it affects the current state here } } function SomeComponent() { const [state, dispatch] = useReducer(reducer, initialState); dispatch({ type: 'ADD', payload: data }); // { type: 'ADD', payload: data } gets passed into the `reducer` as the `action` argument while `state` gets passed in as the `currentState` argument }

Building Your Own Hooks

A quick note on building your own hooks. It's as easy as using the existing hooks and composing them together inside of a function that starts with use. Here's a quick example of a useUser hook. What are the requirements? That the function starts with the keyword use, e.g. useUser or useSomethingElse.
Important features: you can call any hooks within your custom hook and it works as expected. Code Sandbox playground: Check out the custom hooks example. Syntax: import { useEffect, useState } from 'react'; function useUser(userId) { let [user, setUser] = useState(null); useEffect(() => { fetch(`/api/user/${userId}`) .then(response => response.json()) .then(data => setUser(data)); }, [userId]); return user; } function SomeComponent(props) { const user = useUser(props.id); }

What about the rest? There are other hooks you can use such as useMemo, useCallback and so on. I would say that those are more advanced hooks, and if you understand the basic hooks, go ahead and check out the official docs. I also understand there are some advanced usage examples for many of these (like passing useReducer's dispatch down several levels). If you find something incorrect or some extra information useful that isn't included, let me know! And I'll include it! Did you find the cheatsheet useful? Buy me a coffee so I can keep doing this and produce more content! :) You can also follow me on Twitter

I'd be interested to see how to test hooks. Thanks for the article. Another great place to wrap your head around hooks is Dan's blog. Dan's blog is awesome.
https://dev.to/antjanus/the-definitive-react-hooks-cheatsheet-2ebn
(For more resources related to this topic, see here.) Reinventing Metasploit Consider a scenario where the systems under the scope of the penetration test are very large in number, and we need to perform a post-exploitation function such as downloading a particular file from all the systems after exploiting them. Downloading a particular file from each system manually will consume a lot of time and will be tiring as well. Therefore, in a scenario like this, we can create a custom post-exploitation script that will automatically download a file from all the systems that are compromised. This article focuses on building programming skill sets for Metasploit modules. This article kicks off with the basics of Ruby programming and ends with developing various Metasploit modules. In this article, we will cover the following points: Understanding the basics of Ruby programming Writing programs in Ruby programming Exploring modules in Metasploit Writing your own modules and post-exploitation modules Let's now understand the basics of Ruby programming and gather the required essentials we need to code Metasploit modules. Before we delve deeper into coding Metasploit modules, we must know the core features of Ruby programming that are required in order to design these modules. However, why do we require Ruby for Metasploit? The following key points will help us understand the answer to this question: Constructing an automated class for reusable code is a feature of the Ruby language that matches the needs of Metasploit Ruby is an object-oriented style of programming Ruby is an interpreter-based language that is fast and consumes less development time Perl, in which earlier versions of Metasploit were written, did not support code reuse as well Ruby – the heart of Metasploit Ruby is indeed the heart of the Metasploit framework. However, what exactly is Ruby? According to the official website, Ruby is a simple and powerful programming language. Yukihiro Matsumoto designed it in 1995.
It is further defined as a dynamic, reflective, and general-purpose object-oriented programming language with functions similar to Perl. You can download Ruby for Windows/Linux from. You can refer to an excellent resource for learning Ruby practically at. Creating your first Ruby program Ruby is an easy-to-learn programming language. Now, let's start with the basics of Ruby. However, remember that Ruby is a vast programming language. Covering all the capabilities of Ruby will push us beyond the scope of this article. Therefore, we will only stick to the essentials that are required in designing Metasploit modules. Interacting with the Ruby shell Ruby offers an interactive shell too. Working on the interactive shell will help us understand the basics of Ruby clearly. So, let's get started. Open your CMD/terminal and type irb in it to launch the Ruby interactive shell. Let's input something into the Ruby shell and see what happens; suppose I type in the number 2 as follows: irb(main):001:0> 2 => 2 The shell throws back the value. Now, let's give another input such as the addition operation as follows: irb(main):002:0> 2+3 => 5 We can see that if we input numbers using an expression style, the shell gives us back the result of the expression. Let's perform some functions on the string, such as storing the value of a string in a variable, as follows: irb(main):005:0> a= "nipun" => "nipun" irb(main):006:0> b= "loves metasploit" => "loves metasploit" After assigning values to the variables a and b, let's see what the shell response will be when we write a and a+b on the shell's console: irb(main):014:0> a => "nipun" irb(main):015:0> a+b => "nipunloves metasploit" We can see that when we typed in a as an input, it reflected the value stored in the variable named a. Similarly, a+b gave us back the concatenated result of variables a and b. Defining methods in the shell A method or function is a set of statements that will execute when we make a call to it. 
We can declare methods easily in Ruby's interactive shell, or we can declare them using a script as well. Methods are an important aspect when working with Metasploit modules. Let's see the syntax: def method_name [( [arg [= default]]...[, * arg [, &expr ]])] expr end To define a method, we use def followed by the method name, with arguments and expressions in parentheses. We also use an end statement following all the expressions to set an end to the method definition. Here, arg refers to the arguments that a method receives. In addition, expr refers to the expressions that a method receives or calculates inline. Let's have a look at an example: irb(main):001:0> def week2day(week) irb(main):002:1> week=week*7 irb(main):003:1> puts(week) irb(main):004:1> end => nil We defined a method named week2day that receives an argument named week. Furthermore, we multiplied the received argument by 7 and printed out the result using the puts function. Let's call this function with 4 as the value of the argument: irb(main):005:0> week2day(4) 28 => nil We can see our function printing out the correct value by performing the multiplication operation. Ruby offers two different functions to print the output: puts and print. However, when it comes to the Metasploit framework, the print_line function is used.

Variables and data types in Ruby

A variable is a placeholder for values that can change at any given time. In Ruby, we declare a variable only when we need to use it. Ruby supports numerous variable data types, but we will only discuss those that are relevant to Metasploit. Let's see what they are.

Working with strings

Strings are objects that represent a stream or sequence of characters. In Ruby, we can assign a string value to a variable with ease, as seen in the previous example. By simply defining the value in quotation marks or a single quotation mark, we can assign a value to a string.
It is recommended to use double quotation marks because if single quotations are used, it can create problems. Let's have a look at the problem that may arise: irb(main):005:0> name = 'Msf Book' => "Msf Book" irb(main):006:0> name = 'Msf's Book' irb(main):007:0' ' We can see that when we used a single quotation mark, it worked. However, when we tried to put Msf's instead of the value Msf, an error occurred. This is because it read the single quotation mark in the Msf's string as the end of single quotations, which is not the case; this situation caused a syntax-based error.

The split function

We can split the value of a string into a number of consecutive variables using the split function. Let's have a look at a quick example that demonstrates this: irb(main):011:0> name = "nipun jaswal" => "nipun jaswal" irb(main):012:0> name,surname=name.split(' ') => ["nipun", "jaswal"] irb(main):013:0> name => "nipun" irb(main):014:0> surname => "jaswal" Here, we have split the value of the entire string into two consecutive strings, name and surname, by using the split function. However, this function split the entire string into two strings by considering the space to be the split's position.

The squeeze function

The squeeze function removes runs of consecutive repeated characters (such as extra spaces) from the given string, as shown in the following code snippet: irb(main):016:0> name = "Nipun  Jaswal" => "Nipun  Jaswal" irb(main):017:0> name.squeeze => "Nipun Jaswal"

Numbers and conversions in Ruby

We can use numbers directly in arithmetic operations. However, remember to convert a string into an integer when working on user input using the .to_i function. Simultaneously, we can convert an integer number into a string using the .to_s function.
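To make the conversion behavior concrete, here is a minimal script-style sketch; the variable names and the sample value are illustrative, not from the original text:

```ruby
# Minimal sketch with illustrative values: input read from a user
# always arrives as a String, so convert it before doing arithmetic
port = "8080"                      # pretend this came from gets
next_port = port.to_i + 1          # .to_i makes the arithmetic possible
puts next_port
puts next_port.to_s + " is the next port"  # .to_s allows concatenation again
```

Forgetting the .to_i call here would raise a TypeError, exactly as the irb session that follows demonstrates.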
Let's have a look at some quick examples and their output: irb(main):006:0> b="55" => "55" irb(main):007:0> b+10 TypeError: no implicit conversion of Fixnum into String from (irb):7:in `+' from (irb):7 from C:/Ruby200/bin/irb:12:in `<main>' irb(main):008:0> b.to_i+10 => 65 irb(main):009:0> a=10 => 10 irb(main):010:0> b="hello" => "hello" irb(main):011:0> a+b TypeError: String can't be coerced into Fixnum from (irb):11:in `+' from (irb):11 from C:/Ruby200/bin/irb:12:in `<main>' irb(main):012:0> a.to_s+b => "10hello" We can see that when we assigned a value to b in quotation marks, it was considered as a string, and an error was generated while performing the addition operation. Nevertheless, as soon as we used the to_i function, it converted the value from a string into an integer variable, and addition was performed successfully. Similarly, with regards to strings, when we tried to concatenate an integer with a string, an error showed up. However, after the conversion, it worked. Ranges in Ruby Ranges are important aspects and are widely used in auxiliary modules such as scanners and fuzzers in Metasploit. Let's define a range and look at the various operations we can perform on this data type: irb(main):028:0> zero_to_nine= 0..9 => 0..9 irb(main):031:0> zero_to_nine.include?(4) => true irb(main):032:0> zero_to_nine.include?(11) => false irb(main):002:0> zero_to_nine.each{|zero_to_nine| print(zero_to_nine)} 0123456789=> 0..9 irb(main):003:0> zero_to_nine.min => 0 irb(main):004:0> zero_to_nine.max => 9 We can see that a range offers various operations such as searching, finding the minimum and maximum values, and displaying all the data in a range. Here, the include? function checks whether the value is contained in the range or not. In addition, the min and max functions display the lowest and highest values in a range. Arrays in Ruby We can simply define arrays as a list of various values. 
Let's have a look at an example: irb(main):005:0> name = ["nipun","james"] => ["nipun", "james"] irb(main):006:0> name[0] => "nipun" irb(main):007:0> name[1] => "james" So, up to this point, we have covered all the required variables and data types that we will need for writing Metasploit modules. For more information on variables and data types, refer to the following link: Refer to a quick cheat sheet for using Ruby programming effectively at the following links:

Methods in Ruby

A method is another name for a function. Programmers with a background in languages other than Ruby might use these terms interchangeably. A method is a subroutine that performs a specific operation. The use of methods implements the reuse of code and decreases the length of programs significantly. Defining a method is easy: the definition starts with the def keyword and ends with the end statement. Let's consider a simple program to understand how they work, for example, printing out the square of 50: def print_data(par1) square = par1*par1 return square end answer=print_data(50) print(answer) The print_data method receives the parameter sent from the main code, multiplies it with itself, and sends it back using the return statement. The program saves this returned value in a variable named answer and prints the value.

Decision-making operators

Decision making is also a simple concept, as with any other programming language. Let's have a look at an example: irb(main):001:0> 1 > 2 => false irb(main):002:0> 1 < 2 => true Let's also consider the case of string data: irb(main):005:0> "Nipun" == "nipun" => false irb(main):006:0> "Nipun" == "Nipun" => true Let's consider a simple program with decision-making operators (note that the method must be defined before it is called): #Function def decision(par1) print(par1) if(par1%2==0) print("Number is Even") else print("Number is Odd") end end #Main num = gets num1 = num.to_i decision(num1) We ask the user to enter a number and store it in a variable named num using gets.
However, gets will save the user input in the form of a string. So, let's first change its data type to an integer using the to_i method and store it in a different variable named num1. Next, we pass this value as an argument to the method named decision and check whether the number is divisible by two. If the remainder is equal to zero, it is concluded that the number is divisible by two, which is why the if block is executed; if the condition is not met, the else block is executed. The output of the preceding program will be something similar to the following screenshot when executed in a Windows-based environment:

Loops in Ruby

Iterative statements are called loops; exactly like any other programming language, loops also exist in Ruby programming. Let's use them and see how their syntax differs from other languages: def forl for i in 0..5 print("Number #{i}\n") end end forl The preceding code iterates the loop from 0 to 5 as defined in the range and consequently prints out the values. Here, we have used #{i} to print the value of the i variable in the print statement. The \n keyword specifies a new line. Therefore, every time a variable is printed, it will occupy a new line. Refer to for more on loops.

Regular expressions

Regular expressions are used to match a string or its number of occurrences in a given set of strings or a sentence. The concept of regular expressions is critical when it comes to Metasploit. We use regular expressions in most cases while writing fuzzers, scanners, analyzing the response from a given port, and so on. Let's have a look at an example of a program that demonstrates the usage of regular expressions. Consider a scenario where we have a variable, n, with the value Hello world, and we need to design regular expressions for it. Let's have a look at the following code snippet: irb(main):001:0> n = "Hello world" => "Hello world" irb(main):004:0> r = /world/ => /world/ irb(main):005:0> n.match(r) => #<MatchData "world"> irb(main):006:0> n =~ r => 6 We have created another variable called r, and we stored our regular expression in it, that is, world.
In the next line, we match the regular expression with the string using the match object of the MatchData class. The shell responds with a message saying yes it matches by displaying MatchData "world". Next, we will use another approach of matching a string using the =~ operator and receiving the exact location of the match. Let's see one other example of doing this: irb(main):007:0> r = /^world/ => /^world/ irb(main):008:0> n =~r => nil irb(main):009:0> r = /^Hello/ => /^Hello/ irb(main):010:0> n =~r => 0 irb(main):014:0> r= /world$/ => /world$/ irb(main):015:0> n=~r => 6 Let's assign a new value to r, namely, /^world/; here, the ^ operator tells the interpreter to match the string from the start. We get nil as the output as it is not matched. We modify this expression to start with the word Hello; this time, it gives us back the location zero, which denotes a match as it starts from the very beginning. Next, we modify our regular expression to /world$/, which denotes that we need to match the word world from the end so that a successful match is made. For further information on regular expressions in Ruby, refer to. Refer to a quick cheat sheet for using Ruby programming effectively at the following links: Refer to for more on building correct regular expressions. Wrapping up with Ruby basics Hello! Still awake? It was a tiring session, right? We have just covered the basic functionalities of Ruby that are required to design Metasploit modules. Ruby is quite vast, and it is not possible to cover all its aspects here. However, refer to some of the excellent resources on Ruby programming from the following links: A great resource for Ruby tutorials is available at A quick cheat sheet for using Ruby programming effectively is available at the following links: More information on Ruby is available at Developing custom modules Let's dig deep into the process of writing a module. 
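Before dissecting module structure, here is a small hedged sketch that ties together the essentials covered above (a method, a regular expression, and decision making) in the spirit of the banner matching an auxiliary scanner performs. The banner string and method name are invented for illustration:

```ruby
# Hedged sketch: classifying a service response with a regular expression,
# similar in spirit to how a scanner module inspects a port banner
def ftp_banner?(response)
  # /^220/ anchors the match to the start; FTP servers greet with code 220
  !(response =~ /^220/).nil?
end

banner = "220 ProFTPD Server ready."  # invented banner, for illustration only
if ftp_banner?(banner)
  puts "Looks like an FTP service"
else
  puts "Unknown service"
end
```

The same anchor-and-test pattern scales to any protocol whose greeting follows a predictable format.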
Metasploit has various modules such as payloads, encoders, exploits, NOPs, and auxiliaries. We will cover the essentials of developing a module; then, we will look at how we can actually create our own custom modules. Let's discuss the essentials of building a module first.

Building a module in a nutshell

Let's understand how things are arranged in the Metasploit framework, what all the components of Metasploit are, and what they are meant to do.

The architecture of the Metasploit framework

Metasploit is composed of various components. These components include all the important libraries, modules, plugins, and tools. A diagrammatic view of the structure of Metasploit is as follows: Let's see what these components are and how they work. The best place to start is the Metasploit libraries, which act as the heart of Metasploit. Let's understand the use of various libraries as explained in the following table: We have different types of modules in Metasploit, and they differ in terms of their functionality. We have payload modules for creating an access channel to the exploited system. We have auxiliary modules to carry out operations such as information gathering, fingerprinting, fuzzing an application, and logging in to various services. Let's examine the basic functionality of these modules, as shown in the following table:

Understanding the libraries' layout

Metasploit modules are built up from various functions contained in different libraries, along with general Ruby programming. Now, to use these functions, we first need to understand what these functions are. How can we trigger these functions? What number of parameters do we need to pass? Moreover, what will these functions return?
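One way to begin answering such questions is to let Ruby itself describe the functions: any object or class can be interrogated from irb. A brief illustrative sketch using plain Ruby classes (not the Metasploit libraries themselves):

```ruby
# Illustrative sketch: Ruby introspection answers "does this method exist,
# and how many parameters does it take?" for any loaded library
puts String.instance_methods(false).include?(:squeeze)  # does String define squeeze?
puts "hello".method(:split).arity  # -1 means a variable number of arguments
puts 1.respond_to?(:to_s)          # can an Integer be converted to a String?
```

The same instance_methods, method, and respond_to? calls work once the Metasploit libraries are loaded, which makes exploring an unfamiliar library much faster.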
Let's have a look at where these libraries are actually located; this is illustrated in the following screenshot:

As we can see in the preceding screenshot, we have the REX libraries located in the /lib directory; under the /msf folder, we have the /base and /core library directories. Now, under the core libraries' folder, we have libraries for all the modules we covered earlier; this is illustrated in the following screenshot:

We will get started with writing our very first auxiliary module shortly. So, let's focus on the auxiliary modules first and check what is under the hood. Looking into the library for auxiliary modules, we will find that we have various library files to perform a variety of tasks, as shown in the following screenshot:

These library files provide the core for auxiliary modules. However, for different operations and functionalities, we can refer to any library we want. Some of the most widely used library files in most Metasploit modules are located in the core/exploits/ directory, as shown in the following screenshot:

We can find all the other core libraries for the various types of modules in the core/ directory. Currently, we have core libraries for exploits, payloads, post-exploitation, encoders, and various other modules. Visit the Metasploit Git repository at to access the complete source code.

Summary

In this article, we learned how Ruby is the heart of the Metasploit framework. We learned to create Ruby programs and interacted with the Ruby shell as well. We also learned how to define methods in the Ruby shell and about the various variables and data types of Ruby. Finally, we learned how to develop custom modules in Metasploit and about the architecture of the Metasploit framework.

Resources for Article:

Further resources on this subject:
- So, what is Metasploit? [Article]
- Exploitation Basics [Article]
- Understanding the True Security Posture of the Network Environment being Tested [Article]
https://www.packtpub.com/books/content/ruby-and-metasploit-modules
Hello once again, this time I seem to be having trouble getting this script I pulled together to work.

#include <SoftwareServo.h>

SoftwareServo servo1;
SoftwareServo servo2;

int pot1 = 0;
int pot2 = 3;
int pot3 = 5;
int val1;
int val2;
int val3;

void setup()
{
  servo1.attach(9);
  servo1.attach(5);
  Serial.begin(9600);
}

void loop()
{
  val1 = analogRead(pot1);
  val2 = analogRead(pot2);
  val3 = analogRead(pot3);
  SoftwareServo::refresh();
  if (val2 < 10) {
    if (val1 < 10) {
      if (val3 < 10) {
        delay(50);
        Serial.write("off");
      }
      else {
        val3 = map(val3, 0, 1023, 1, 79);
        servo2.write(val3);
        delay(5);
      }
    }
    else {
      val1 = map(val1, 0, 1023, 100, 179);
      servo1.write(val1);
      if (val3 > 10) {
        val3 = map(val3, 0, 1023, 1, 79);
        servo2.write(val3);
        delay(5);
      }
      delay(5);
    }
  }
  else {
    val2 = map(val2, 0, 1023, 1, 79);
    servo2.write(val2);
    val2 = map(val2, 0, 1023, 100, 179);
    servo1.write(val2);
  }
  delay(5);
}

To explain what I was expecting to happen: I have 3 potentiometers, which I'll name 1, 2 and 3, and 2 servos, which I'll name 1 and 2.

If potentiometer 2 is anywhere but all the way down, both servos 1 and 2 follow it and potentiometers 1 and 3 are completely ignored until potentiometer 2 is back to 0 or close. If potentiometer 1 is moved, servo 1 moves with it regardless of what happens to potentiometer 3. If potentiometer 3 is moved, servo 2 moves with it regardless of what happens to potentiometer 1. If potentiometer 1 and 2 are greater than 0, and potentiometer 2 is moved, I would like both servos to once again ignore 1 and 3 and jump to potentiometer 2.

I understand this is a little confusing, and I'm trying the best I can to explain it. I could have sworn that this script would work, but it simply doesn't do anything. Movement of all potentiometers incites no movement in the servos, and the servos are not receiving any commands, because I am able to rotate the servo horns by myself. Any help would be appreciated; I am rather new to all of this. Thank you!
https://forum.arduino.cc/t/controlling-2-servos-with-3-potentiometers-softwareservo/223308
Weekly Wrap-up: Earnings, Interest Rates, and the 2nd Derivative

The following article was originally published at The Agile Trader Website on March 19, 2006.

Dear Speculators,

Last week the Dynamic Trading System took profits of +9% on the Nasdaq 100 E-mini futures contracts and +32% on the S&P 500 E-mini contracts. Since inception on July 15, 2005 the Index Futures Portfolio has netted +378% in position/trading gains and has garnered +107% in total return (net of all subscription, commission, regulatory, and exchange fees in auto-trade accounts).* Click here if you would like to read more about The Agile Trader Index Futures Service.

**** **** ****

I was asked to do an interview last week on the subject of my involvement with the stock market and my approach to trading. Since, in this space, we generally focus on a longer-term overview of the markets, I thought it might be productive to share a longer-term overview of shorter-term trading.

QUESTION: How did you first become interested in trading the markets?

ADAM OLIENSIS: When I was 9 years old I bought one share of Automatic Data Processing (ADP). We had family in the business and, though they didn't need it, I wanted to express support and solidarity. The company and the stock have been a real American success story. I held that one share until it was 32 shares...and it was at the same price point that I had bought it at; $44. So, I've always been aware of the stock market and of the power of compounding.

Then in the '80s I got a hot tip on a stock from, of all people, a singing teacher. I averaged in at about $6. The stock ran to $60. I remember scanning the New York Times stock quotes as I was standing in front of Grand Central Station. $60! It was a 10-bagger! I could hear my grandfather's voice in my head, screaming "Sell! Sell! Sell, Adam! Sell half!" I called my broker from a pay phone on 42nd Street and Madison Avenue. I said, "I want to sell half." He said, "Adam, I've seen stocks like this go to $90."
I acquiesced to the greedy voice in the telephone and not my grandfather's voice in my head and I held the whole position. That was on October 2, 1987. On October 19 the stock dropped into the $20s. I ended up selling for 1 5/8 sometime in 1989.

It wasn't until about 4 years later that it dawned on me that it would be a good idea to invest in computers. Then about a year after that I got married and started having kids. That's when I became seriously interested in investing. In 1995, I began educating myself on fundamentals. In 1996 I started exploring technical analysis. I read everything about TA that I could find and started playing with charting software. I began trading more actively and in 1997 I started trading options pretty seriously. By 1998 I was in it full-time and started developing a loose following of people to whom I would e-mail charts. I really sort of backed into the whole thing and it took a number of years before it turned into a primary occupation.

QUESTION: Which do you prefer, short-term trading or longer-term trading?

AO: I prefer short-term swing trading and day trading. I've found some technical keys that make the statistical risk/reward scenarios over a period of days (and sometimes weeks) pretty advantageous and clear. As with the weather, I find it's easier to forecast over shorter time frames than out a number of months or years.

QUESTION: What are the things you like best about being a trader?

AO: I love the fact that every day, every signal, unfolds as a mystery. It's a puzzle. We try to set up what we think are statistically likely scenarios, but we never know where we'll be at the end of the day until we see it unfold.

QUESTION: How do you treat losses and account drawdown?

AO: I treat losses and drawdowns in 3 ways. Intellectually, I try to understand them within the probabilistic context of my Dynamic Trading System. They're inevitable and necessary. No trading system in the real world wins all the time.
And I make every effort to keep that in perspective.

Spiritually/psychologically I try to look at each loss as a golden opportunity. It's an opportunity to utilize discipline, to stick with the strict parameters that are laid out when each trade is opened. It's an opportunity to make certain that I haven't become complacent, arrogant, or numbed to what "risk" really means. And it's an opportunity to humble myself as well as to make sure that I react correctly (with discipline and without hubris) when I am "wrong."

Emotionally I react to each loss as though it's a complete and utter catastrophe. As a professional trader I'm not supposed to admit that, but I hate losing. I hate hate hate it. I'm bad at it. I'm a bad sport about it. Not much anymore but I used to stand up from my chair and scream at the market. I screamed at my monitors the same way I screamed at the TV when I was 11 years old for the Green Bay Packers' defense to strip the ball from the Vikings' running back when the Pack was down 23 points in the 4th quarter. And that character flaw, that relentless competitive stubbornness, my natural inability to take a small judicious loss, is precisely why I forced myself to develop a trading system that would keep my "head" probabilistic about both wins and losses. After years of practice, now I'd say I react with about 65% of the equanimity I would hope to achieve if I were really an evolved person.

QUESTION: What are some of the key rules that you feel are most important for a trader to keep in mind when evaluating any potential trading opportunity?

AO: First and foremost, define the maximum risk that you are willing to take, set your loss-cut, and stick to it. "Survival" is the most important thing in trading. And here's the most important understanding to have before trading, I think: TRADING-CAPITAL IS SCARCE. OPPORTUNITIES TO TRADE ARE PLENTIFUL. BE STINGY WITH WHAT'S SCARCE AND BE PROFLIGATE WITH WHAT'S PLENTIFUL.
IT'S BETTER TO MISS ONE OF THOSE PLENTIFUL OPPORTUNITIES (THERE WILL BE MORE TOMORROW) THAN TO LOSE SCARCE TRADING CAPITAL (THERE MAY NOT BE MORE TOMORROW).

It's fine to be stopped out of a position and take a small loss. It's frustrating, it may make you want to pull out your hair or stand up and scream but it's fine. The most important thing is to avoid big losses and to live to fight another day. As a corollary to that rule: ZERO AND INFINITY (WINNING AND LOSING) ARE NOT SYMMETRICAL. ZERO IS A LIMIT THAT IS OFTEN REACHED. INFINITY IS UNATTAINABLE.

If your account goes to zero, you're out of the game. Look, if I lose 50% I have to make 100% to get back to breakeven. And if I gain 50% and then lose 50%, I'm at 75% of where I started, not at breakeven. Losing is much easier than winning. And money management along with risk control are the cardinal rules of trading--more important than chart reading, more important than understanding the economy, more important than deriving valuation models or growth projections, and more important than optimizing gains.

And finally: find a trading methodology that gives you a good idea of your statistical probabilities in trading, then use the method, stick to it, and keep a probabilistic perspective about it. If you don't, your ego will get involved, your temperament will get the better of you, and there's a real good chance you'll end up putting not just your net worth but your self-worth in jeopardy.

QUESTION: What are your favorite markets that you like to trade and do you ever use options?

AO: At this point my favorite markets.

QUESTION: What is your most memorable trade?

AO: My most memorable trade was one of the single stupidest things I've ever done. It was late 1999. I was very, very long Qualcomm (QCOM) stock. And I had much less experience than profit with the tech bubble at maximum expansion and on the verge of popping (which, of course I didn't know).
I had taken all sorts of profits from trading in the options markets, piled them into Qualcomm stock and Leaps, and the stock was rocketing up skyward on the strength of some freakish liquidity factors that were probably resultant from the Fed's fear of the putatively impending Y2K crisis (remember that one?). If I remember correctly, Qualcomm was in the mid-400s, and I sold short calls something like a hundred points out of the money against my long position in the stock. I figured, hell, if the stock goes up a hundred points I'll be overjoyed to be called out.

Well, the stock went up 100 points and then some. And I ended up buying back to cover the short calls at a huge loss on the calls, which effectively raised my cost basis on the stock...all this just as the NASDAQ and tech stocks were priming themselves to enter the worst cyclical bear market in 70 years. I had become so intoxicated by the upside that I was insensitive to risk and stupidly violated my own trading plan. It's the single worst trade I ever made both in terms of the size of the losses I ultimately incurred and in terms of the whimsical impulsiveness I exhibited in violating my plan.

QUESTION: With all the different technical analysis tools out there how does a new technician avoid information overload or "analysis paralysis?"

AO: Test your indicators. Don't trust what other people say about MACD or Stochastics or moving averages or RSI or directional move indicators. Look at your charts. Observe them carefully and make note of your impressions. Then find a charting program that allows you to TEST your impressions, observations, and indicators. See if what you think you see is in FACT profitable. See if your indicators do what you think they'll do and what they're "supposed" to do. And in your tests determine where your stops should be.
Otherwise you'll be guessing, you'll lose confidence in real time, you'll impulsively violate your plan...and your losses will effectively become very expensive tuition.

QUESTION: What kind of technical analysis and fundamental analysis tools do you employ?

AO: After years of "wandering in the Sinai," I have cut way back on the indicators I use. I LOOK at a lot of indicators but I USE my Dynamic Trading Oscillators explicitly. I look at Bollinger Bands and MACD. And I look at the VIX, the VXN, the Put/Call Ratios, the New Highs, New Lows, and historical Volatility. I could go on and on. I've probably done extensive research on hundreds of indicators...but I USE the oscillators I derived myself.

Fundamentally, I look at a host of factors each week in my Weekly Wrap-Up. I look at the market's PE, earnings growth, which sectors are displaying upward revisions and which sectors are suffering downward revisions, interest rates, the yield curve, Equity Risk Premium, and finally my Risk Adjusted Fair Value calculation, which is a variation on the Fair Value calculation that the Federal Reserve employs.

QUESTION: What mistakes do most people make in the markets?

AO: The worst mistake people make is to either not have a trading plan or to have a plan and not stick to it.

QUESTION: How important is money management in your overall approach to trading?

AO: Money management is probably the first, second, and third most important thing in trading.

QUESTION: How would you characterize your approach to the markets?

AO: My Dynamic Trading System (DTS) swing trades signals, both long and short, derived from a set of proprietary algorithms applied to the DTS Oscillators. The DTS was developed via extensive testing over the past 7 years' data (in bull, bear, and flat markets), and applied through seasonal and cyclical filters. The System has enjoyed a very profitable statistical edge in the past and we continue to tweak the System to learn from the markets in real time.
Our approach is probabilistic. The System places trades that, based on historical testing, have an optimal probability of profitability. We try not to get too involved in any one particular trade (we want to avoid my doing a lot of screaming), but look to measure the System's results over months and years of data.

QUESTION: What do you think are the greatest misconceptions people have about trading and investing?

AO: That there could be somebody who knows everything. And that it could be "me" (oneself). In real time, we never know what the market will do next. Trading is not about being right. It's about maintaining a probabilistic approach to what is likely to be profitable. It's about being disciplined, and it's about recognizing an optimal time to acknowledge when a trade is not working...and then maintaining discipline and exiting.

QUESTION: What would you say are the most reliable chart patterns and indicators for a trader to watch out for and monitor?

AO: That's a really tough question. I think it depends on the market, the time frame, and a variety of underlying conditions. Right now I'm enamored of slight violations of support or resistance that FAIL. For instance, the SPX has just broken out to a new 4-year high. Should it FAIL to hold above 1300, one could well imagine that a lot of new longs in the market who are buying the breakouts...those longs will turn into sellers should 1300 fail. So, I guess in terms of formations right now I'm enamored of fake-out breakouts and shakeout breakdowns. I really enjoy the reversals that follow these.

In terms of indicators, the most reliable ones I know of are my Dynamic Trading Oscillators. I do not know of any other indicators that have been as rigorously and successfully tested. I'm sure there are others out there. But these are the most reliable ones I know about at this time. As for the future, I continue to research different time frames and various markets.
And we hope to have new products available for shorter-term day traders as well as for players in international markets later this year.

**** **** ****

EARNINGS, RATES, AND THE FED

The Consensus for Forward 52-Week Operating Earnings for the SPX (blue line) has hit a new all-time high at $86.26. Trailing 52-Wk Operating EPS and Reported EPS (yellow and pink) have also both hit new all-time highs at $78.12 and $75.92 respectively. Top-Down estimates for CY07 have been published by Standard & Poor's, and the consensus estimate for the SPX is $89.15. That represents +5.1% growth Y/Y for CY07, which follows a consensus of +10.9% growth for CY06, actual +12.9% growth for CY05, and actual +23.7% growth for CY04. The trends are still sloped in the right direction, but Wall Street loves the "2nd Derivative" (the rate of change of the rate of change), and deceleration looks to be the name of the game over the next 21 months.

Now, look at this chart, published by Ed Yardeni of Oak Associates. Looking at this chart was a real "aha" moment for me. The correlation between the Y/Y change in F52W EPS and the Y/Y change in the Fed Funds Rate is astonishingly high over this 22-year period. I only had the data for the series beginning in 1994, so I did my own study. Here's what we see upon inspection using a higher-powered magnification. Over the last 11 years the correlation between these 2 series is a whopping +0.88. (1 is perfect and -1 is perfectly inverse.) The yellow highlights represent points at which the 2 series have diverged such that the Fed might be accused of being "behind the curve."
Several things appear fairly obvious from this chart: the Fed looks at the growth of earnings (it would be bizarre to think that this correlation is purely accidental over a 22-year period); the Fed waited longer than normal to raise rates once earnings re-accelerated out of the market's 2001-2002 slump; the FF Rate is probably up ABOVE neutral by this metric; as F52W EPS growth decelerates the Fed is once again behind the curve, this time in lowering rates; if the Fed continues to raise rates by 25 beeps at each meeting the red line will stay up near 2% and the blue line will very likely continue to be driven down, at least in part by the restrictive FF Rate.

On the other hand, all the Fed has to do to get the red line to begin moving down is to stop raising rates. They don't actually have to lower rates. (The red line measures not the FF Rate but the Y/Y change in the FF Rate, so the red line will move down if the Fed simply stands pat.) So, the funky thing here is that if the Fed continues to raise rates, the blue line will probably tank hard, forcing the Fed to lower rates more aggressively in the not-too-distant future. But if the Fed stands pat sooner than later, the red line will begin to drop, decreasing the Fed's relatively near-term imperative to chase the blue line down by lowering rates more aggressively.

The Y/Y change in F52W EPS now stands at +13.7%, down from about +20% in 2004. However with the 3-month annualized GR hovering well below +10%, the "2nd Derivative" (rate of change of the rate of change) on the Y/Y line (blue) is likely to continue to be negative. (The blue line will continue falling.) And, as we have discussed in the past, once that blue line falls below +10% with a negative 2nd Derivative (or if the blue line is below 0%) the market often enters a difficult phase for the bullish case. (Grey highlights.)
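The "2nd Derivative" idea is easy to make concrete. The short Python sketch below uses invented EPS figures (not the actual consensus series above) to compute a year-over-year growth series and then the change in that growth rate; negative values in the second series mean growth is decelerating even while earnings themselves are still rising.

```python
# Toy illustration of "the rate of change of the rate of change".
# The EPS figures below are invented for the example, not market data.
eps = [62.0, 70.0, 78.0, 86.0]  # four years of forward EPS estimates

# First derivative: year-over-year growth rate of EPS.
growth = [(b - a) / a for a, b in zip(eps, eps[1:])]

# Second derivative: change in the growth rate from one year to the next.
second = [b - a for a, b in zip(growth, growth[1:])]

print([round(g * 100, 1) for g in growth])   # growth rates in percent
print([round(d * 100, 1) for d in second])   # negative values = deceleration
```

Each year's EPS is higher than the last, yet the growth-rate series falls and the second series is negative throughout, which is exactly the pattern the newsletter is describing.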
With the Consensus now at +10.9% EPS Growth in CY06 and at +5.1% for CY07, the odds are greatly increasing that we will see the Y/Y GR for F52W EPS fall below +10% in the months ahead.

For the 2nd straight week the SPX closed above our RISK ADJUSTED FAIR VALUE price. Prior to last week it had been 21 months since we had seen this. We calculate RISK ADJUSTED FAIR VALUE (RAFV) by dividing F52W EPS ($86.26) by the sum of the 10-Yr Treasury Yield (TNX, 4.674%) and the median post-9/11 Equity Risk Premium (ERP, now +1.95%). (ERP is the difference between the F52W Earnings Yield on the SPX (6.6%) and TNX (4.67%): 6.6% - 4.67% = 1.93%, which is below the median post-9/11 ERP.)

$86.26 / (0.04674 + 0.0195) = 1302

In order for the SPX to move higher from here the market would have to believe some combination of these things: earnings growth will reaccelerate over the next 12 months, TNX will fall, and/or the post-9/11 world is once again becoming less risky. Our view is that earnings growth will continue to decelerate, that TNX will move up toward 4.9-5%, and that (with oil still over $60) the markets continue to perceive the post-9/11 world as about as risky as it has been. With TNX at 5% we could very well see our RAFV calculation look like this:

RAFV = $86.26 / (0.05 + 0.0195) = 1241

While the SPX has modestly broken to a new cycle high, extending this rally beyond the time frames of the analogous rallies of 1966 and 1994, we would continue to look for the market to be entering a retrenchment phase (for the reasons discussed above) between now and October. Factors that could change our minds from ursine to bovine include:

- Crude Oil breaking below $58/barrel, which would quell inflationary pressures on the Headline and allow the Fed to decelerate the increase in the Fed Funds Rate, as discussed at length above.
- A successful test of the 1297-1300 band as support.
Should that band hold...well, it's tough to be too aggressively bearish when the market continues to make and hold new cycle highs. Of course we have numerous technical concerns about the stock market at this point, including, but not limited to, relative weakness in the Nasdaq 100 and the Philly Semiconductor Index. Please join us as we explore these issues, among others, in our daily work at The Agile Trader and The Agile Trader Index Futures Service.

Best regards and good trading!
http://www.safehaven.com/article/4810/weekly-wrap-up-earnings-interest-rates-and-the-2nd-derivative
Building a Photo Gallery with Python and WSGI If you wanted to write a Python web application a few years ago, you'd be faced with quite a glut of choices. You'd have to choose among a bunch of great web frameworks, and then figure out a reasonable way to deploy the application in production. It became a running joke that Python was the language of a thousand frameworks. The Python community had options to solve the problem, cull the number of frameworks, or embrace the diversity. Given the nature of the community, culling didn't seem like an attractive option, so PEP 333 was written as a way to lower the barriers to using Python as a language to develop for the web and the Web Server Gateway Interface (WSGI) was born. WSGI separates the web application from the web server, similar to a Java servlet. In this way, web framework authors could worry about the best way to implement a web application, and leave the server implementation details to those working on the opposite side of the WSGI "tube." Although the intent of WSGI is to allow web framework developers a way to easily interface with web servers, WSGI is also a pretty fun way to build web applications. Ian Bicking, in his presentation "WSGI: An Introduction" at Pycon 2007, compared WSGI to the early days of CGI programming. It turns out, despite its problems, early CGI was a great encapsulation that provided clean separation between the server and the application. The server was responsible for marshalling some environment variables and passing them to the stdin of the application. The application responded with data (usually HTML) on stdout. Of course, CGI was slow and cumbersome, but it encapsulated things really nicely, and was easy to wrap your head around. WSGI is similar to CGI in that the interface is simple. So simple, in fact, that it often throws people off. When you assume that deploying web applications is difficult, the reaction to WSGI is usually a shock. 
Here's a basic example:

def hi(environ, start_response):
    start_response('200 OK', [('content-type','text/html')])
    return "HI!"

from wsgiref.simple_server import make_server
make_server('', 8080, hi).serve_forever()

The application is the function "hi", which takes as arguments the environment (a dictionary) and a function called start_response. The first line of the hi application, "start_response('200 OK', [('content-type','text/html')])", declares that the request was good, returning the HTTP response 200, and lets the client know that what follows has the mimetype text/html. The application then returns the HTML, in this case the simple phrase "HI!" It's fairly similar to the CGI way of passing environment in on stdin and getting a response from stdout.

That function is all that's required of a full WSGI application. It's trivial to plug the hi application into a WSGI container and run it. The final two lines of the script do just that:

from wsgiref.simple_server import make_server
make_server('', 8080, hi).serve_forever()

I'm using the WSGI reference server, included in the Python standard library since Python 2.4. I could just as easily substitute it with a FastCGI, AJP, SCGI, or Apache container. In that way, it's a write once, run anywhere...plug and play kind of web application.

Now that you're over the hello world hump, it's time to build a useful application. On August 4th, 2007, my wife (Camri) and I had our first child, Mr. William Christopher McAvoy. Since then, we've taken thousands of photographs. All of them are stored in a neatly organized series of folders on an external hard drive on my desk. When Camri wants to find pictures to give to the grandparents, she has to wheel herself over to my computer and look through them. We tried a shared drive, but it was just too slow. I did a little bit of looking for a web application that would read a big filesystem of pictures, but couldn't find any.
The existing galleries all wanted you to upload pictures; none assumed a pre-existing series of folders. I puttered around for a few hours in the airport on a trip, and came up with a relatively usable WSGI application that converts web paths to directory paths, dynamically creates thumbnails, and generally makes it easy to browse a big listing of jpegs. When we got home, I plugged the app into a mod_wsgi container on my desktop installation of Apache, and it ran as well as it did in the WSGI container included in Python 2.4 that I was using for development. The full source of the application is available on my public Google code page.

The guts of the application is the class fsPicture. "A class?" you say, "I thought WSGI apps were supposed to be functions?!" Sort of. They're supposed to be callable, function-like objects, which is a way of saying that they can be objects, as long as you override the __call__ magic method of the object. This sounds simple enough, but it really confused me when I first started playing with WSGI, so let me spend a minute on it.

If I declare a class that looks like this:

class Something(object):
    def __call__(self):
        return "Hi there!"

And then instantiate the class like so:

s = Something()

I can call 's' as if it were a function, like 's()'. It's functionally equivalent to creating an s function, like this:

def s():
    return "Hi there!"

This is really great, because it means that you can create objects as WSGI applications, which is a lot cleaner than creating a WSGI application with a function as its base.
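To see that a callable object really does satisfy the WSGI contract, here's a small sketch (the names are mine, not from the gallery app) that wraps the hello-world idea in a class and exercises it directly with a hand-built environ dictionary, the way a WSGI server would, with no server required:

```python
class Greeter(object):
    """A WSGI application implemented as a callable object."""

    def __init__(self, greeting):
        self.greeting = greeting

    def __call__(self, environ, start_response):
        start_response('200 OK', [('content-type', 'text/html')])
        # A WSGI app returns an iterable of strings (bytes on Python 3).
        return [self.greeting % environ.get('PATH_INFO', '/')]

# Exercise the application directly, playing the role of the server.
collected = {}
def start_response(status, headers):
    collected['status'] = status
    collected['headers'] = headers

app = Greeter('You asked for %s')
body = app({'PATH_INFO': '/photos'}, start_response)
print(collected['status'])
print(''.join(body))
```

Because the instance carries state (the greeting), this pattern scales naturally to something like fsPicture, where the state would be the root directory of the photo tree.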
http://www.developer.com/open/article.php/3734416/Building-a-Photo-Gallery-with-Python-and-WSGI.htm
Compiler error in Selenium

Dear Learner,

Hope you are doing great. Your code needs a little bit of correction. Please use the code given below:

package first;

import java.util.Scanner;

public class testfunctions {

    static int square(int x) {
        int y = x * x;
        return y;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.println("Enter the number : ");
        int n = in.nextInt();
        int result = testfunctions.square(n);
        System.out.println("Square of " + n + " is : " + result);
    }
}

Please try the above code and let us know if you face any issue. We will be waiting for your response.
https://www.edureka.co/community/37/compiler-error-in-selenium
NAME
lfc_readdirxc - read LFC directory opened by lfc_opendir in the name server

SYNOPSIS
#include <sys/types.h>
#include "lfc_api.h"

struct lfc_direnstatc *lfc_readdirxc (lfc_DIR *dirp)

DESCRIPTION
lfc_readdirxc reads the LFC directory opened by lfc_opendir in the name server. This routine returns a pointer to a structure containing the current directory entry, including the stat information and the comment associated with the entry. lfc_readdirxc caches a variable number of such entries, depending on the filename size, to minimize the number of requests to the name server.

dirp specifies the pointer value returned by lfc_opendir.

RETURN VALUE
lfc_readdirxc returns a null pointer both at the end of the directory and on error; an application wishing to check for error situations should set serrno to 0, then call lfc_readdirxc, and test whether serrno is non-zero after a null return.

SEE ALSO
lfc_closedir(3), lfc_opendirg(3), lfc_rewinddir(3), lfc_setcomment(3), stat(2)

AUTHOR
LCG Grid Deployment Team
http://huge-man-linux.net/man3/lfc_readdirxc.html
Can somebody explain why you should use a Flask extension instead of a bare library? For example, if you want to use MongoDB (or whatever) from Flask, it seems you need to do this:

from flask.ext.pymongo import PyMongo
mongo = PyMongo(app)

instead of this:

from pymongo import MongoClient
mongo = MongoClient()

I'd like to know what is so special about the extensions.

The PyMongo extension allows you to configure the Mongo connection using the same configuration mechanism used for the rest of Flask; see the Configuration section in the PyMongo documentation for what is supported. The library also offers a few convenience functions that you may find helpful when writing a Flask app backed by Mongo.

There is no requirement to use the extension; if you feel that using MongoClient directly is easier, then do so. This applies to most extensions; they'll offer some level of integration with the Flask ecosystem. You'll need to decide for each how much you need to make use of that integration vs. how much you'd have to re-invent the wheel. Like any library, really.
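The practical difference is mostly lifecycle and configuration: an extension reads its settings from app.config and ties its setup to the app object, instead of being configured separately. Here's a rough, framework-free sketch of that pattern — FakeApp and MongoExtension are stand-ins invented to illustrate the shape of it, not real Flask or Flask-PyMongo code:

```python
class FakeApp(object):
    """Stand-in for a Flask app: all we need here is its config dict."""
    def __init__(self):
        self.config = {}

class MongoExtension(object):
    """Stand-in extension: pulls its settings from the app's config,
    the way a real extension reads keys like MONGO_URI."""
    def __init__(self, app=None):
        self.uri = None
        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        # One place to configure everything, alongside the rest of the app.
        self.uri = app.config.get('MONGO_URI', 'mongodb://localhost:27017/test')

app = FakeApp()
app.config['MONGO_URI'] = 'mongodb://db.example.com:27017/photos'
mongo = MongoExtension(app)
print(mongo.uri)
```

With the bare library you would call MongoClient(...) yourself and manage that object's lifetime independently of the app; neither way is wrong, which is the point of the answer above.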
https://codedump.io/share/4Qfr9CUUhI4N/1/why-need-to-use-flask-extension-not-just-the-bare-library
Up to [cvs.netbsd.org] / pkgsrc / mail / gmime

Default branch: MAIN

Revision 1.23 / .22: +2 -1 lines
Diff to previous 1.22 (colored)
don't install uuencode/uudecode to avoid conflict with gmime24; bump PKGREVISION

Revision 1.22 / (download) - annotate - [select for diffs], Wed Sep 8 11:53:04 2010 UTC (20 months, 2 weeks ago) by drochner
Branch: MAIN
Changes since 1.21: +3 -3 lines
Diff to previous 1.21 (colored)
back out update to the API incompatible 2.4 branch -- there are still users of the 2.0 API, and mail/gmime24 has been there before

Revision 1.21 / (download) - annotate - [select for diffs], Tue Sep 7 19:04:15 2010 UTC (20 months, 2 weeks ago) by adam
Branch: MAIN
Changes since 1.20: +4 -4 lines
Diff to previous 1.20

Revision 1.20 / (download) - annotate - [select for diffs], Thu Feb 11 15:44:29 2010 UTC (2 years, 3 months ago) by wiz
Branch: MAIN
CVS Tags: pkgsrc-2010Q2-base, pkgsrc-2010Q2, pkgsrc-2010Q1-base, pkgsrc-2010Q1
Changes since 1.19: +2 -1 lines
Diff to previous 1.19 (colored)
Add fix for from Fedora. Bump PKGREVISION.

Revision 1.19 / (download) - annotate - [select for diffs], Fri Feb 5 12:41:44 2010 UTC (2 years, 3 months ago) by wiz
Branch: MAIN
Changes since 1.18: +4 -4 lines
Diff to previous 1.18 (colored)
Update to 2.2.25:

2010-01-30  Jeffrey Stedfast  <fejj@novell.com>
    * README: Bumped version
    * configure.in: Bumped version to 2.2.25
    * configure.in: Disabled strict-aliasing to work around subtle bugs generated by gcc 4.4 when optimizations are enabled.

Revision 1.18 / (download) - annotate - [select for diffs], Sun Aug 16 13:35:58 2009 UTC (2 years, 9 months ago) by wiz
Branch: MAIN
CVS Tags: pkgsrc-2009Q4-base, pkgsrc-2009Q4, pkgsrc-2009Q3-base, pkgsrc-2009Q3
Changes since 1.17: +4 -4 lines
Diff to previous 1.17 (colored).
Revision 1.17 / (download) - annotate - [select for diffs], Sun Jul 19 19:16:17 2009 UTC (2 years, 10 months ago) by wiz
Branch: MAIN
Changes since 1.16: +1 -2 lines
Diff to previous 1.16 (colored)
Remove DragonFly portability patch that isn't necessary any longer. Ok hasso@

Revision 1.16 / (download) - annotate - [select for diffs], Fri Oct 24 21:08:01 2008 UTC (3 years, ...)
Changes since 1.15: +4 -4 lines
Diff to previous 1.15 (colored)
Update to 2.2.23:

2008-09-14  Jeffrey Stedfast  <fejj@novell.com>
    * README: Bumped version
    * configure.in: Bumped version to 2.2.23

2008-09-13  Jeffrey Stedfast  <fejj@novell.com>
    * docs/reference/gmime-sections.txt: Updated.
    * gmime/gmime-parser.c (nearest_pow): New faster method for calculating nearest power of 2, rather than an expensive while-loop.
    (g_mime_parser_get_headers_begin): New function backported from 2.3.x
    (g_mime_parser_get_headers_end): Same.

2008-08-07  Jeffrey Stedfast  <fejj@novell.com>
    * gmime/gmime-message-part.c (g_mime_message_part_get_message): Only ref the message if it is non-NULL. Thanks to Peter Bloomfield for this fix.

Revision 1.15 / (download) - annotate - [select for diffs], Thu Aug 14 20:39:25 2008 UTC (3 years, 9 months ago) by wiz
Branch: MAIN
CVS Tags: pkgsrc-2008Q3-base, pkgsrc-2008Q3, cube-native-xorg-base, cube-native-xorg
Changes since 1.14: +4 -4 lines
Diff to previous 1.14 (colored).

Revision 1.14 / (download) - annotate - [select for diffs], Wed Apr 16 14:37:39 2008 UTC (4 years, 1 month ago) by wiz
Branch: MAIN
CVS Tags: pkgsrc-2008Q2-base, pkgsrc-2008Q2, cwrapper
Changes since 1.13: +4 -4 lines
Diff to previous 1.13 (colored)
Update to 2.2.18:

2008-03-13  Jeffrey Stedfast
    * gmime/gmime-parser.c (parser_construct_message): Changed content_length to an unsigned long rather than unsigned int, fixes bug #521872. Thanks to Pawel Salek for this fix.

2008-03-10  Jeffrey Stedfast
    * gmime/gmime-parser.c (parser_scan_mime_part_content): Don't let size go negative.
2008-02-09 Jeffrey Stedfast * gmime/gmime-filter-basic.c (filter_filter): Use the new macros defined below. * gmime/gmime-utils.c (rfc2047_encode_word): Use the new macros. * gmime/gmime-utils.h: Added more accurate encoding-length macros for base64, quoted-printable, and uuencode which are try to minimize over-calculating the amount of output data that we need. Also namespaced them. 2008-02-08 Jeffrey Stedfast * src/uudecode.c (uudecode): Use g_strchomp() on the filename parsed from the 'begin' line. 2008-02-07 Jeffrey Stedfast * util/url-scanner.c (url_web_end): Handle IP address literals within []'s. Fixes bug #515088. 2008-02-06 Jeffrey Stedfast * gmime/gmime-utils.c (g_mime_utils_uuencode_step): Optimized. 2008-02-03 Jeffrey Stedfast * gmime/gmime-stream-cat.c (stream_read): Removed an extra seek. 2008-02-02 Jeffrey Stedfast Fix for and some other bugs I discovered while fixing it. * gmime/gmime-parser.c (header_parse): Made an actual function rather than a macro. Don't turn invalid headers into X-Invalid-Headers, just ignore them. Instead of using g_strstrip(), do our own lwsp trimming so we can do it before malloc'ing - this helps reduce memory usage and memmove() processing in g_strstrip(). (parser_step_headers): Validate the header field names as we go so that we can stop when we come to an invalid header in some cases. May now return with 3 states rather than only 1: HEADERS_END (as before), CONTENT (suggesting we've reached body content w/o a blank line to separate it from the headers), and COMPLETE (which suggests that we've reached the next message's From-line). (parser_skip_line): Rearranged a bit: don't fill unless/until we need to. (parser_step): For HEADERS_END state, skip a line and increment state to CONTENT. No-op for CONTENT and COMPLETE states. (parser_scan_message_part): parser_step() can return more than just HEADERS_END on 'success' when starting with HEADERS state, so check for error rather than HEADERS_END. 
(parser_construct_leaf_part): No need to parser_step() thru header parsing, they should already be parsed by the time we get here. Also, don't call parser_skip_line() directly to skip the blank line between headers and content, use parser_step() to do that for us. (parser_construct_multipart): Same as parser_construct_leaf_part() (found_immediate_boundary): Now takes an 'end' argument so callers can request a check against an end-boundary vs a part boundary. (parser_scan_multipart_subparts): Check for errors with parser_skip_line(). Set HEADERS state and use parser_step() to parse headers rather than calling parser_step_headers() directly. If, after parsing the headers, we are at the next message (aka COMPLETE state) and we have no header list, then break out of our loop and pretend we've found an end-boundary. After parsing the content of each MIME part, check that the boundary we found is our own and not a parent's (if it belongs to a parent, break out). (parser_construct_part): Loop parser_step() until we're at any state past the header block (>= HEADERS_END). (parser_construct_message): Same idea. Also, do error checking for decoded content_length value. 2008-02-02 Jeffrey Stedfast * gmime/gmime-iconv-utils.c (iconv_utils_init): Don't break if the user's locale is unset (e.g. US-ASCII). 2008-01-31 Jeffrey Stedfast * gmime/gmime-parser.c: Removed the need for 'unstep' state information. 2008-01-27 Jeffrey Stedfast * gmime/gmime-stream-buffer.c (stream_write): Don't modify the passed-in arguments so that it makes debugging easier if there's ever a bug. 2008-01-27 Jeffrey Stedfast * gmime/gmime-stream-buffer.c (stream_read): Optimized the BLOCK_READ code-path. (stream_write): Optimized the BLOCK_WRITE code-path. (stream_seek): Optimized the BLOCK_READ code-path. (g_mime_stream_buffer_gets): Updated for the changes made to the way bufptr is used in the BLOCK_READ case. 
2008-01-14 Jeffrey Stedfast * gmime/gmime-charset.c (g_mime_set_user_charsets): Deep copy the string array. Fixes bug #509434. 2008-01-02 Jeffrey Stedfast * gmime/gmime-message.c (message_write_to_stream): Reworked the logic to be easier to understand what is going on. * gmime/gmime-multipart.c (multipart_write_to_stream): In the case where multipart->boundary is NULL /and/ we have a raw header (suggesting a parsed message), do not set a boundary as it will break the output because it will clobber the saved raw header and GMimeMessage's write_to_stream() method will have skipped writing its own headers if its toplevel part (us) have a raw header set. In this case, also skip writing the end boundary. 2008-01-01 Jeffrey Stedfast * gmime/gmime-utils.c (g_mime_utils_generate_message_id): Fixed a Free Memory Read access (FMR) by not freeing 'name' before using it's value. Also reworked to take advantage of uname(2) or getdomainname() to get the domain name if available to avoid having to do a DNS lookup. 2008-01-01 Jeffrey Stedfast Fixes bug #506701 * gmime/gmime-utils.c (rfc2047_encode_get_rfc822_words): Don't reset the word-type variable as it needs to be preserved when breaking long words. (rfc2047_encode): Switch on word->encoding - if 0, rfc2047 encode as us-ascii. 2007-12-27 Jeffrey Stedfast * gmime/gmime-utils.c (decode_8bit): Now takes a default_charset argument which we use in place of the locale charet if non-NULL. We also now always include this charset in our list of charsets to check for a best-match (obviously this charset is unlikely to be an exact fit if this function is getting called, so we place it at the end of the list). (rfc2047_decode_word): If given a valid charset in the encoded-word token, always use that for charset conversion to UTF-8 even if it doesn't convert fully. We don't want to fall back to the user's supplied charset list because it may contain iso-8859-1 which will likely always be a 'best-match' charset. 
2007-12-26 Jeffrey Stedfast * gmime/gmime-utils.c (g_mime_utils_decode_8bit): Made public. * gmime/internet-address.c (decode_mailbox): Instead of doing our own thing to convert raw 8bit/multibyte text sequences into UTF-8, use the same function we use in gmime-utils.c's header decoder. 2007-12-25 Jeffrey Stedfast * gmime/charset-map.c: New source file to generate the charset map (moved out of gmime-charset.c) * gmime/gmime-charset.c (main): Removed. 2007-12-25 Jeffrey Stedfast * gmime/gmime-charset.c (main): Cleaned up the logic and made it so that we can alias a block to a previous block if the blocks are identical rather than just aliasing when all values in the block are identical. Happens to make no difference in the output, but the logic is now there if that ever changes. 2007-12-24 Jeffrey Stedfast * gmime/gmime-charset-map-private.h: Regenerated. * gmime/gmime-charset.c (known_iconv_charsets): Map all of the gb2312 aliases to GBK as GBK is a superset of gb2312 (apparently some clients are tagging GBK as gb2312 which is missing some glyphs contained within GBK). (main): Added iso-8859-6 to the table for Arabic support. 2007-12-16 Jeffrey Stedfast * gmime/gmime-utils.c (decode_8bit): When reallocing our output buffer, we need to update outleft as well. 2007-12-08 Jeffrey Stedfast * gmime/gmime-utils.c (rfc2047_encode_merge_rfc822_words): Completely rewritten with new logic which will hopefully group words more logically. 2007-12-08 Jeffrey Stedfast Fixes bug #498720 * gmime/internet-address.c (internet_address_list_writer): Renamed from the temporary internet_address_list_fold() name. (_internet_address_to_string): New internal function that writes an InternetAddress to a GString, doing proper folding and rfc2047 encoding if requested. (internet_address_to_string): Use the new internal function. * tests/test-mime.c: Added another addrspec test and fixed up some exception strings to be a little more helpful. 
2007-12-05 Jeffrey Stedfast * configure.in: Fixed a bug where explicitly disabling largefile support would add -D_FILE_OFFSET_BITS=no to the compiler CFLAGS. Also added a blaring WARNING when -enable-largefile is passed. 2007-11-23 Jeffrey Stedfast Attempt at solving bug #498720 for address fields, altho it should probably be made to handle folding single addresses in the case where they are too long to fit within a single line. * gmime/internet-address.c (internet_address_list_fold): New function. * gmime/gmime-message.c (write_structured): Renamed from write_addrspec(). (write_addrspec): New header writer that writes InternetAddressLists in a nicely folded manner. 2007-11-12 Jeffrey Stedfast * gmime/internet-address.c (internet_address_destroy): No need to check if ia != NULL, we know this is true already. Revision 1.13 / (download) - annotate - [select for diffs], Thu Nov 22 20:39:25 2.2.11 changes: many bugfixes Revision 1.12 / (download) - annotate - [select for diffs], Tue Jul 17 10:39:09 2007 UTC (4 years, 10 months ago) by drochner Branch: MAIN CVS Tags: pkgsrc-2007Q3-base, pkgsrc-2007Q3 Changes since 1.11: +4 -4 lines Diff to previous 1.11 (colored) update to 2.2.9 changes: -Fixed a memory leak -Oops, fseek() should have been using SEEK_SET, not SEEK_END Revision 1.11 / (download) - annotate - [select for diffs], Thu May 3 11:54:33 2007 UTC (5 years ago) by wiz Branch: MAIN CVS Tags: pkgsrc-2007Q2-base, pkgsrc-2007Q2 Changes since 1.10: +4 -4 lines Diff to previous 1.10 (colored) Update to 2.2.8: 2007-04-25 Jeffrey Stedfast * README: Bumped version * configure.in: Bumped version to 2.2.8 * tests/test-pgp.c: Test exporting of keys. * gmime/gmime-utils.c (rfc2047_decode_word): Fixed compile warnings. * gmime/gmime-stream-file.c (stream_reset): Removed an unused variable. 
* gmime/gmime-charset.c (g_mime_charset_can_encode): s/if (mask->level = 1)/if (mask->level == 1)/ 2007-04-23 Jeffrey Stedfast * README: Bumped version * configure.in: Bumped version to 2.2.7 2007-04-14 Jeffrey Stedfast * gmime/*.c (g_mime_*_get_type): Set n_preallocs to 0. 2007-04-12 Jeffrey Stedfast * gmime/*.c: no need for a second NULL argument to g_object_new() * util/cache.c (cache_new): Change max_size and node_size to be of type size_t. * gmime/gmime-multipart-encrypted.c (g_mime_multipart_encrypted_new): g_object_new() doesn't need a second NULL argument. * gmime/gmime-utils.c (decode_8bit): Close the iconv descriptor and since we are using is_ascii() now, we don't need to use unsigned char *'s. 2007-04-12 Jeffrey Stedfast * gmime/gmime-utils.c (decode_8bit): Use is_ascii(). (g_mime_utils_header_decode_text): Same. (g_mime_utils_header_decode_phrase): Here too. * gmime/gen-table.c: Added a is_ascii() macro for use instead of the ctype isascii() so that I don't have to worry about casting. 2007-04-11 Jeffrey Stedfast Revision 1119 (previous commit) made the following 2 functions even less attractive than they already were, so I decided to rewrite them especially since it wasn't hard to find a far cleaner approach. * gmime/gmime-utils.c (g_mime_utils_header_decode_text): Rewritten to be cleaner, faster, and more elegant. (g_mime_utils_header_decode_phrase): Same. 2007-04-11 Jeffrey Stedfast Fixes for bug #423760 and bug #342196 * gmime/gmime-charset.c (g_mime_charset_can_encode): New convenience function to check whether a length of UTF-8 text can be converted into the specified charset. (g_mime_set_user_charsets): New function allowing an application to provide GMime with a list of user-preferred charsets to use for encoding and decoding headers. (g_mime_user_charsets): New function to get the list of user-preferred charsets. 
* gmime/gmime-utils.c (decode_8bit): New function to convert arbitrary 8bit text into UTF-8 using the charset list provided by g_mime_user_charsets(). (rfc2047_decode_word): Don't assume that just because the declared charset is UTF-8 that it actually is in UTF-8. (rfc2047_decode_word): If we can't open a converter for the declared charset to UTF-8 or if we can't convert from the declared charset into UTF-8, fall back to using decode_8bit(). (g_mime_utils_header_decode_text): Convert 8bit word tokens into UTF-8 using decode_8bit(). (g_mime_utils_header_decode_phrase): Same. (rfc2047_encode_word): Be a little more efficient about removing '\n' chars... (rfc2047_encode): When encoding a level-2 word cluster, attempt to fit the cluster within a charset provided by g_mime_user_charsets() rather than using GMime's best-fit charset table (unless, of course, it doesn't fit within any of the user-specified charsets). 2007-03-28 Jeffrey Stedfast * gmime/gmime-iconv-utils.c (g_mime_iconv_strndup): No need to cast out to a char *, it already is. * gmime/gmime-stream-mem.c (g_mime_stream_mem_set_byte_array): Only free the previous memory buffer if we were the owner. Revision 1.10 / (download) - annotate - [select for diffs], Sun Apr 15 13:11:40 2007 UTC (5 years, 1 month ago) by wiz Branch: MAIN Changes since 1.9: +4 -5 lines Diff to previous 1.9 .9 / (download) - annotate - [select for diffs], Sat Mar 17 14:39:58 2007 UTC (5 years, 2 months ago) by joerg Branch: MAIN CVS Tags: pkgsrc-2007Q1-base, pkgsrc-2007Q1 Changes since 1.8: +2 -1 lines Diff to previous 1.8 (colored) Fix build on DragonFly. Revision 1.8 / (download) - annotate - [select for diffs], Thu Mar 8 20:04:06 2007 UTC (5 years, 2 months ago) by wiz Branch: MAIN Changes since 1.7: +5 -6 lines Diff to previous 1.6: +6 -6 lines Diff to previous 1.6 (colored) Update to gmime 2.1.19 and drop maintainership. Based upon the 2.1.17 update by Fredrik Carlsson in PR 32487 Changes: The usual: fixes, new features. 
Revision 1.6 / (download) - annotate - [select for diffs], Wed Mar 16 12:34:49 2005 UTC (7 years, 2 months ago) by rill) Added a workaround for systems that don't have ECANCELED. Approved by wiz. Revision 1.5 / (download) - annotate - [select for diffs], Thu Feb 24 09:59:22 2005 UTC (7 years, 3 months ago) by agc Branch: MAIN Changes since 1.4: +2 -1 lines Diff to previous 1.4 (colored) Add RMD160 digests. Revision 1.4 / (download) - annotate - [select for diffs], Sun Nov 14 16:48:55 2004 UTC (7 years, 6 months ago) by jmmv Branch: MAIN CVS Tags: pkgsrc-2004Q4-base, pkgsrc-2004Q4 Changes since 1.3: +3 -3 lines Diff to previous 1: +3 -3: +4 -4.
http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/mail/gmime/distinfo
QAction Class Reference

The QAction class provides an abstract user interface action that can be inserted into widgets. More...

#include <QAction>

Inherited by: QMenuItem and QWidgetAction.

Example:

openAct = new QAction(QIcon(":/images/open.png"), tr("&Open..."), this);
openAct->setShortcuts(QKeySequence::Open);
openAct->setStatusTip(tr("Open an existing file"));
connect(openAct, SIGNAL(triggered()), this, SLOT(open()));

enum QAction::SoftKeyRole

This enum describes how an action should be placed in the softkey bar. Currently this enum only has an effect on the Symbian platform. Actions with a softkey role defined are only visible in the softkey bar when the widget containing the action has focus. If no widget currently has focus, the softkey framework will traverse up the widget parent hierarchy looking for a widget containing softkey actions.

On Symbian, the icons which are passed to softkeys, i.e. to actions with a softkey role, need to have their pixmap alpha channel correctly set, otherwise drawing artifacts will appear when the softkey is pressed down.

softKeyRole : SoftKeyRole

This property holds the action's softkey role. This indicates what type of role this action describes in the softkey framework on platforms where such a framework is supported. Currently this is only supported on the Symbian platform. The softkey role can be changed any time. This property was introduced in Qt 4.6.

Member Function Documentation

QAction::QAction ( QObject * parent )

Constructs an action with parent. If parent is an action group the action will be automatically inserted into the group.

QAction::QAction ( const QString & text, QObject * parent )
void QAction::changed () [signal]

This signal is emitted when an action has changed. If you are only interested in actions in a given widget, you can watch for QWidget::actionEvent() sent with a QEvent::ActionChanged. See also QWidget::actionEvent().

QVariant QAction::data () const

Returns the user data as set in QAction::setData.

bool QAction::event ( QEvent * e ) [virtual protected]

Reimplemented from QObject::event().

void QAction::hover () [slot]

This is a convenience slot that calls activate(Hover).

void QAction::hovered () [signal]

void QAction::setDisabled ( bool b ) [slot]

QList<QKeySequence> QAction::shortcuts () const

Returns the list of shortcuts, with the primary shortcut as the first element of the list. This function was introduced in Qt 4.2. See also setShortcuts().

bool QAction::showStatusText ( QWidget * widget = 0 )

Updates the relevant status bar for the widget specified by sending a QStatusTipEvent to its parent widget. Returns true if an event was sent; otherwise returns false. If a null widget is specified, the event is sent to the action's parent.

void QAction::toggle () [slot]

This is a convenience function for the checked property. Connect to it to change the checked state to its opposite state.

void QAction::toggled ( bool checked ) [signal]

void QAction::trigger () [slot]

This is a convenience slot that calls activate(Trigger).

void QAction::triggered ( bool checked = false ) [signal]

Mac menu role gotchas

As stated in the documentation, the Preferences and About menu items are treated differently on Mac OS. For example, a QAction titled "About*" (*=wildcard) will be placed in the application menu. However, ANY QAction will obey this rule, which can lead to unexpected results if you are dealing with dynamically created menus. To avoid this automatic mapping, use QAction::setMenuRole(QAction::NoRole).
void QAction::setMenu ( QMenu * menu )

Ownership of the menu is not transferred to the action.
http://qt-project.org/doc/qt-4.8/qaction.html
Hi and welcome to Just Answer! A copy of your tax return includes copies of all forms and schedules that were sent to the IRS. You may request exact copies of your tax return (including W2 forms) - the IRS charges $57 for that. To request a copy of the tax return you need to use form 4506 - copies are generally available for returns filed in the current and past six years. In most situations you do not need an actual copy - you may use a transcript instead. You have three easy and convenient options for getting your federal tax return information (tax account transcripts): by phone, by mail, or online. You may take a look here for an example of how the transcript looks. Let me know if you need any help. Be sure to ask if any clarification is needed.

what i really want to know is how much it says my total gross income for the year was

For that purpose the tax transcript would be the simple solution. If you want it faster - I suggest ordering either online or on the phone. There will not be any fee.

thanks not 39 dollars worth

Please let me know if you need any help. My goal is to provide EXCELLENT service. If you expect me to access your tax information - please be aware that the IRS will only allow access to you and your official representative. I hope that the information I provided will be useful, but if you need any help - let me know.
http://www.justanswer.com/tax/7718q-hi-know-copy-2008.html
A reason that I'm such a huge fan of Elixir is that everything just seems to click. The code is elegant and expressive while managing to be efficient. One thing that hasn't clicked until just now is the with statement. It turns out that it's pretty easy to understand and really quite useful. It addresses a specific and not-uncommon problem in a very Elixir way.

What's the problem? Pretend we have the following:

defmodule User do
  defstruct name: nil, dob: nil

  def create(params) do
  end
end

The create function should either create a User struct or return an error if the params are invalid. How would you go about doing this? Here's an example using pipes:

def create(params) do
  %User{}
  |> parse_dob(params["dob"])
  |> parse_name(params["name"])
end

defp parse_dob(_user, nil), do: {:error, "dob is required"}
defp parse_dob(user, dob) when is_integer(dob), do: %{user | dob: dob}
defp parse_dob(_user, _invalid), do: {:error, "dob must be an integer"}

defp parse_name(_user, {:error, _} = err), do: err
defp parse_name(_user, nil), do: {:error, "name is required"}
defp parse_name(user, ""), do: parse_name(user, nil)
defp parse_name(user, name), do: %{user | name: name}

The problem with this approach is that every function in the chain needs to handle the case where any function before it returned an error. It's clumsy, both because it isn't pretty and because it isn't flexible. Any new return type that we introduce has to be handled by all functions in the chain. The pipe operator is great when all functions are acting on a consistent piece of data. It falls apart when we introduce variability. That's where with comes in. with is a lot like a |> except that it allows you to match each intermediary result.
Let's rewrite our code:

def create(params) do
  with {:ok, dob} <- parse_dob(params["dob"]),
       {:ok, name} <- parse_name(params["name"]) do
    %User{dob: dob, name: name}
  else
    # nil -> {:error, ...} an example that we can match here too
    err -> err
  end
end

defp parse_dob(nil), do: {:error, "dob is required"}
defp parse_dob(dob) when is_integer(dob), do: {:ok, dob}
defp parse_dob(_invalid), do: {:error, "dob must be an integer"}

defp parse_name(nil), do: {:error, "name is required"}
defp parse_name(""), do: parse_name(nil)
defp parse_name(name), do: {:ok, name}

Every statement of with is executed in order. Execution continues as long as left <- right matches. As soon as a match fails, the else block is executed. Within the else block we can match against whatever WAS returned. If all statements match, the do block is executed and has access to all the local variables in the with block. Got it? Here's a test. Why did we have to change our success cases so that they'd return {:ok, dob} and {:ok, name} instead of just dob and name? If we didn't return {:ok, X}, our match would look like:

def create(params) do
  with dob <- parse_dob(params["dob"]),
       name <- parse_name(params["name"]) do
    %User{dob: dob, name: name}
  else
    err -> err
  end
end

...

However, a variable on the left side matches everything. In other words, the above would treat {:error, "BLAH"} as a match and continue down the "success" path.
https://www.openmymind.net/Elixirs-With-Statement/
Alright, so that title isn't very clickbaity unless you are a real data nerd. Linear models do terribly at learning XOR, simply because XOR is highly non-linear. It requires a non-linear model to learn it. And my model is non-linear too. But the only model that I use is a linear regression, straight up OLS. My trick is that I use an ensemble of OLS regressions. I know, that's like cheating to say that I am "only" using Linear Regression. I suppose the more appropriate way to say it is that I am only using Linear Regressions combined using stacking. But I digress.

So let's start out with: what is XOR anyway? XOR is a type of logic gate that you need to have to build a computer. Computer scientists have known about XOR for quite some time, like 60 or 70 years at this point. It takes two inputs, both of them being bits. In other words, both of the inputs can be a 0 or a 1. It also outputs a 0 or a 1. So there are only 4 distinct inputs, and 2 distinct outputs. Here it is in mathematical terms, the function f being the XOR function:

f(0, 0) = 0
f(1, 0) = 1
f(0, 1) = 1
f(1, 1) = 0

And that's it really. That is the whole XOR function. If both bits are in the off or on position, output a bit that is off; if exactly 1 input is on, output a bit that is on.

Let's try to model it with Linear Regression

Okay, let's model it. First we need to create the data, and load the libraries that we'll need. First off, we'll be using sklearn (a couple of parts of it anyway), numpy, and pandas. So here's the code to get them into python.

from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import confusion_matrix
import numpy as np
import pandas as pd

And now we need to get some data. Fortunately, there are only 4 data points that describe the entire function, so we can hand code them. We'll also throw them into a pandas dataframe just so that we can use them later.
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
y = np.array([0, 1, 1, 0])
X = pd.DataFrame(X)
X['depvar'] = y

Great, now we have our dataset, and all the libraries that we need to do a linear regression. Let's run OLS against this data and see what we get.

model = LinearRegression()
model.fit(X[[0, 1]], X['depvar'])
print(model.predict(X[[0, 1]]))

So it ran just fine. And here is the output:

[ 0.5 0.5 0.5 0.5]

Well, that sucks. The way that I interpret this is that the model decides there is always a 50% probability that the output should be on. That's not very helpful when you are trying to make decisions; essentially, you don't know. Flip a coin. It's going to be just as accurate. Sad day for OLS, it can't do what it is supposed to do. It is just too weak for this problem; it can't learn the non-linearity.

Ensembles Will Make it Better

So the approach that I am going to take is that of ensembling. There are a number of other ways that you can approach this problem. Introducing non-linear terms, a cubic would be able to fit XOR, for example. You could apply some sort of kernel to do an SVM type thing, maybe. You can run it through a neural network. But all of those take us away from simple models. Let's give ensembling a try.

First we need to separate the data set into a number of folds. I could cheat a little bit and learn 4 folds, one for each output, but that's boring. We'll just do 2 folds. The code below will separate the data into 2 random folds.

kf = KFold(n_splits=2, shuffle=True)
kf.get_n_splits(X)
i = 0
for train_index, test_index in kf.split(X):
    for j in test_index:
        X.loc[X.index == j, 'fold'] = int(i)
    i += 1

This code adds a column to the dataset which indicates which fold an observation belongs to. This will be important when it is time to fit models for stacking. Okay, now in order to do proper stacking we'll need to create a copy of the dataset where we can store estimates from our models in the stacking procedure.
We’ll call it meta_X: meta_X = X.copy() meta_X['model1']=None The model1 variable is where we’ll store the output from the linear regressions that we’re going to be using as base models.Now we need to loop over all of our folds, and split the data into training and test sets. We’ll train a model on data in the training set and use that trained model to predict the test set. These predictions, the ones on data that the model in that fold hasn’t seen will be used to fill in the model1 variable in the test set. I feel like this paragraph has been crazy convoluted. Hopefully, some code will clear things up if I managed to confuse you. So check out this for loop, it is essentially what I was trying to describe in this paragraph: for idx in X['fold'].unique(): train=X[X['fold'] != idx] test=X[X['fold'] == idx] model1 = LinearRegression() model1.fit(train[[0,1]],train['depvar']) print(model1.coef_) meta_X.loc[test.index,'model1'] = model1.predict(test[[0,1]]) Okay, so I’ve filled in the model1 variable. All is looking good, except that I still don’t have predictions from the model. The way that I solve this problem, is by running a linear regression over the whole dataset. But instead of using my original data, I use the predictions from my base models. model_all = LinearRegression() model_all.fit(meta_X[['model1']],meta_X['depvar']) print(model_all.predict(meta_X[['model1']])) Which gives the following output: [ 0. 1. 1. 0.] Hey, that’s not too shaby. What Is Going On Here? I actually look at this problem, and it looks a lot like a multi-layer perceptron to me. I have a hidden layer, an input layer and an output layer. The only difference is that instead of running the algorithm from end to end, I was in the loop making decisions about how the weights and layers would work together. I think that I was manually building out a neural network, and building out activations, etc. But it works, and I would call it distinctly different from a neural network. 
But it does make a nice example of how an ensemble of linear models can produce very non-linear outputs.
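To make the whole walk-through reproducible without scikit-learn, here is a condensed sketch of the same stacking idea using only NumPy. The centering step stands in for roughly what LinearRegression does internally, and the fold assignment is fixed (rather than shuffled, as in the post) so the recovery of [0, 1, 1, 0] is deterministic:

```python
# Self-contained sketch of the post's stacking procedure, NumPy only.
import numpy as np

X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
y = np.array([0., 1., 1., 0.])

def fit_ols(F, t):
    # Center features and target, solve least squares, recover the intercept.
    Fm, tm = F.mean(axis=0), t.mean()
    coef, *_ = np.linalg.lstsq(F - Fm, t - tm, rcond=None)
    return coef, tm - Fm @ coef

def predict_ols(model, F):
    coef, intercept = model
    return F @ coef + intercept

# Plain OLS on the raw inputs: every prediction collapses to 0.5.
flat = predict_ols(fit_ols(X, y), X)

# Two fixed folds (rows 0 and 3 vs. rows 1 and 2); each base model fills in
# out-of-fold predictions, exactly as the KFold loop in the post does.
folds = np.array([0, 1, 1, 0])
meta = np.zeros(4)
for f in (0, 1):
    train, test = folds != f, folds == f
    meta[test] = predict_ols(fit_ols(X[train], y[train]), X[test])

# The meta-model regresses y on the out-of-fold base predictions.
stacked = predict_ols(fit_ols(meta.reshape(-1, 1), y), meta.reshape(-1, 1))
print(flat)     # all 0.5
print(stacked)  # ~[0, 1, 1, 0]
```

With this particular fold split each base model trains on two points that share a target, so its out-of-fold predictions separate the classes and the linear meta-model can map them straight back to y.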
https://barnesanalytics.com/learning-xor-linear-regression-yeah-possible
CC-MAIN-2018-13
refinedweb
1,146
75.81
curl_global_sslset - select SSL backend NAME curl_global_sslset - Select SSL backend to use with libcurl SYNOPSIS #include <curl/curl.h> typedef struct { curl_sslbackend id; const char *name; } curl_ssl_backend; typedef enum { CURLSSLBACKEND_NONE = 0, CURLSSLBACKEND_OPENSSL = 1, CURLSSLBACKEND_GNUTLS = 2, CURLSSLBACKEND_NSS = 3, CURLSSLBACKEND_GSKIT = 5, CURLSSLBACKEND_POLARSSL = 6, CURLSSLBACKEND_WOLFSSL = 7, CURLSSLBACKEND_SCHANNEL = 8, CURLSSLBACKEND_DARWINSSL = 9, CURLSSLBACKEND_AXTLS = 10, CURLSSLBACKEND_MBEDTLS = 11 } curl_sslbackend; CURLsslset curl_global_sslset(curl_sslbackend id, const char *name, curl_ssl_backend ***avail); DESCRIPTION This function configures at runtime which SSL backend to use with libcurl. This function can only be used to select an SSL backend once, and it must be called before curl_global_init. The backend can be identified by the id (e.g. CURLSSLBACKEND_OPENSSL). The backend can also be specified via the name parameter for a case-insensitive match (passing -1 as id). If both id and name are specified, the name will be ignored. If neither id nor name is specified, the function will fail with CURLSSLSET_UNKNOWN_BACKEND and set the avail pointer to the NULL-terminated list of available backends. The available backends are those that this particular build of libcurl supports. Since libcurl 7.60.0, the avail pointer will always be set to the list of alternatives if non-NULL. Upon success, the function returns CURLSSLSET_OK. If the specified SSL backend is not available, the function returns CURLSSLSET_UNKNOWN_BACKEND and sets the avail pointer to a NULL-terminated list of available SSL backends. In this case, you may call the function again to try to select a different backend. The SSL backend can be set only once. If it has already been set, a subsequent attempt to change it will result in CURLSSLSET_TOO_LATE. This function is not thread-safe. You must not call it when any other thread in the program (i.e.
a thread sharing the same memory) is running. This doesn't just mean no other thread that is using libcurl. AVAILABILITY This function was added in libcurl 7.56.0. Before this version, there was no support for choosing SSL backends at runtime. RETURN VALUE If this function returns CURLSSLSET_OK, the backend was successfully selected. If the chosen backend is unknown (or support for the chosen backend has not been compiled into libcurl), the function returns CURLSSLSET_UNKNOWN_BACKEND. If the backend had been configured previously, or if curl_global_init has already been called, the function returns CURLSSLSET_TOO_LATE. If this libcurl was built completely without SSL support, with no backends at all, this function returns CURLSSLSET_NO_BACKENDS. SEE ALSO curl_global_init, libcurl
https://curl.haxx.se/libcurl/c/curl_global_sslset.html
CC-MAIN-2018-43
refinedweb
409
64.51
Robert S. Thau wrote: > > I. I've been told since I posted that the only namespace problem is "pool". > >. I'm not sure what you mean by smart pointers, but I'm guessing that you mean some kind of wrapper for a pointer that hides tricks like double indirection and link counting, right? I've actually implemented this and it wasn't too bad, IMO, and that was before templates, too. The hairiest bit was deferring the deletion of objects that removed the last link to themselves ;-) > >. Well ... I'm not sure I'd agree with this. It is certainly a damn sight easier in C++ than it is in C. Interfacing C++ to the pool stuff would be trivial, though getting the destructors called when the pool is destroyed might be harder (if we actually want to - it would be the logical way to do the cleanup routines). > . No hash tables? Yuk. Anyway, I still haven't got round to looking at STL but it's on the list... > >. I'm not so sure about these alternatives, but a strong argument against them is the reduction of the "out-of-the-box experience" that C/C++ gives us. This may change, of course. > What I'd *really* like is a decent, high-performance implementation of > SML, but the current implementations have an unfortunately high > performance hit. I'm hoping that when (if!) CMU produces a > production-quality version of their type-directed research compiler, > this will improve...). OK, I'll bite. What's SML? Cheers, Ben. > > rst -- Ben Laurie Phone: +44 (181) 994 6435 Freelance Consultant and Fax: +44 (181) 994 6472 Technical Director Email: ben@algroup.co.uk A.L. Digital Ltd, URL: London, England. Apache Group member ()
http://mail-archives.apache.org/mod_mbox/httpd-dev/199607.mbox/%3C9607241009.aa19592@gonzo.ben.algroup.co.uk%3E
CC-MAIN-2017-34
refinedweb
289
74.29
Hello. All random numbers generated by a computer require some sort of formula. Thus, the generated numbers are pseudorandom and follow a specific sequence if we always start from the same point. In order to further randomize the numbers, we should change the starting point whenever the program starts. In your example, srand(clock()) should only be called once. This function uses the processor time retrieved from clock() to set the random starting point, also known as the random seed. As for the second function call, rand(), it generates the random number. Hope this helps. Quell, Here's something simple which is consistent with the response from Kheun. Take a look... Sincerely, Chris. :) Code: #include <time.h> #include <stdlib.h> static const int random(void) { static bool is_init = false; if(!is_init) { is_init = true; ::srand(static_cast<unsigned int>(clock())); } return ::rand(); } I don't know how good of a random number generator you need, but rand() is not a good choice for *serious* random number generation. If you need something more robust, do a search for "Mersenne Twister". Regards, Kevin rand() is compiler implementation specific. I guess rand() will give quite good chi-square test results (but again, it depends on the compiler vendor), at least for normal use of pseudo-random numbers. The seed for the pseudo-random algorithm (which you initialize with srand()) can be very important--again, depending on what your application is using the pseudo-random numbers for. You don't want to use clock() for srand(), because it will most likely return the same value all times (if called once from the same location in your code). This is because clock() represents the life-time of the process, i.e. the time your process has lived. Try this little program and you'll see what I mean. With my compiler (vc7) it prints out zero each run...
Really bad choice for a seed. Code: /* tryclock.c */ #include <stdio.h> #include <time.h> int main(void) { printf("%lu\n", clock()); return 0; } Hope it helps, Jonas PS: For cryptography, rand() is no good. The same goes for using a (any) time function value as seed. Personally, I attempt to use the lowest significant bits from several time variables and mix them up to produce a good seed. (time(), QueryPerformanceCounter() or RDTSC, and/or other timers that exist on particular hardware on my system). It's a heck of a lot better than using just one time value, and it suits my needs -- though I admit this may not be the best seed. Another possibility is to get seeds from if your application is connected to the web.
http://forums.codeguru.com/printthread.php?t=281630&pp=15&page=1
CC-MAIN-2015-11
refinedweb
460
66.64
NAME
readdir_r - read a directory
SYNOPSIS
#include <dirent.h>
int readdir_r(DIR *dirp, struct dirent *entry, struct dirent **result);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)): readdir_r():
DESCRIPTION
.
RETURN VALUE
The readdir_r() function returns 0 on success. On error, it returns a positive error number (listed under ERRORS). If the end of the directory stream is reached, readdir_r() returns 0, and returns NULL in *result.
ERRORS
- EBADF - Invalid directory stream descriptor dirp.
- ENAMETOOLONG - A directory entry whose name was too long to be read was encountered.
https://manpages.debian.org/testing/manpages-dev/readdir_r.3.en.html
CC-MAIN-2019-51
refinedweb
110
54.39
NTN 51196 Cost|51196 bearing in Luxembourg 【snr 51196 bearing for sale】-Yemen Bearing snr 51196 bearing is one of the best products we sell, our company is also one of the best snr 51196 bearing for sale. Expect us to cooperate. Our snr 51196 bearingContact TWB | SKF 51196 bearing in Ireland NTN 51196 Bearin. NTN 51196 bearings founded in Japan.The largest merits of our NTN 51196 bearing is best quality,competitive price and fast shipping SKF Bearings|FAG.Contact us NTN 511/500 bearings /NTN-bearings/NTN_511_500_39838.html NTN 511/500 bearings founded in Japan.The largest merits of our NTN 511/500 bearing is best quality,competitive price and fast shipping.MeanwhileNTN 511/500 bearingContact us NTN Standard angular contact ball bearings 7009 Bearings,Pdf Offer NTN Standard angular contact ball bearings 7009 Bearings Pdf,Spec,Dimensions,Size Chart with best NTN Thrust Ball Bearings 51196. Brand:NTN,Bearings Types:Contact us NTN 51196 bearing in Australia | Product NTN 51196 bearing in Australia High Quality And Low Price. NTN 51196 bearing in Australia are widely used in industrial drive, agriculture, compressors, motors andContact us 51196 bearing Japan ntn-51196 Model: 51196d:480 mmD:580 mmB:80 mmCr:525 N C0r:3100 N Grease RPM:380 1/minOil RPM:550 1/minm:43.3 kg 51196 bearing Japan ntn-51196Contact us NTN Global | bearings, driveshafts, and precision equipments This is the official website of NTN. On this site you can find information on bearings, driveshafts, precision equipments, and NTN's other products. It also providesContact us cheaper nachi 51196 bearing | Product NTN 51196 bearings founded in Japan.The largest merits of our NTN 51196 bearing is best quality,competitive price and fast shipping.MeanwhileNTN 51196 bearing is very.Contact us import ntn 51192 bearing | Product Bearing name:NTN 51196 bearing in Turkey. If you are interested in NTN 51192 Bearing,please email us Bearings - Import . Bearing Type Size. 
Contact us SKF 51196 F Bearings,Price,Size Chart,CAD,SKF ,Thrust Ball We guarantee to provide you with the best SKF 51196 F Bearings,At the same time to provide you with the SKF 51196 F types NTN,Types:Thrust Ball BearingsContact us NTN Bearing Finder: Online Catalog and Interchange Browse All Categories in the NTN Bearing Corp. of America catalog including Ball Bearings,Tapered Roller Bearings,Cylindrical Roller Bearings,Needle Roller BearingsContact us NTN 51196 Bearings /NTN-bearings/NTN_51196_39839.html NTN 51196 bearings founded in Japan.The largest merits of our NTN 51196 bearing is best quality,competitive price and fast shipping.MeanwhileNTN 51196 bearing is veryContact us NTN Bearings_Shanghai Allen Bearing Manufacturing Co., Ltd. Now days NTN bearings CO operates more than fifty plants worldwide and is the third largest bearing manufacturer in the world. The main NTN bearings line include NTNContact us NTN Thrust Ball Bearings 51196 Bearings,Pdf,NTN 51196 Offer NTN Thrust Ball Bearings 51196 Bearings Pdf,Spec,Dimensions,Size Chart with best price from us, At the same time to provide you with the NTN Thrust BallContact us SKF 51196 F bearing in Netherlands | Product SKF 51196 F bearing in Netherlands High Quality And Low Price. SKF 51196 F bearing in Netherlands are widely used in industrial drive, agriculture, compressorsContact us - FAG (SCHAEFFLER) Z-534176.PRL Size|(SCHAEFFLER) Z-534176.PRL bearing in Switzerland - NSK UCFU210 Material|UCFU210 bearing in Liberia - SKF BT2-8011/HA3VA901 Puller Price|BT2-8011/HA3VA901 bearing in Estonia - NSK NUP326E Material|NUP326E bearing in Kazakhstan - FAG 23076CC/W33 For Sale|23076CC/W33 bearing in Brazil - NTN 234440B Supplier|234440B bearing in Latvia Riga Riga
http://welcomehomewesley.org/?id=15156&bearing-type=NTN-51196-Bearing
CC-MAIN-2018-47
refinedweb
596
53.41
hi I'm having a problem with vertical synchronization/refresh with a map browser application. When the application is running and I am scrolling around the map space I see a subtle vertical shift that starts at the bottom of the screen and flows to the top. I am using a wxGLCanvas; the scrolling display is updated on a 12 ms recursive animation timer. My first thought was that there were perhaps other timers in the application firing off; this was not the case. I then checked the vertical sync setting on the video card; it was on. So far so good. The code snippet is below: class MapWindow(glcanvas.GLCanvas): def __init__(self,parent): glcanvas.GLCanvas.__init__(self,parent) def OnPaint(self, event=None): dc = wx.PaintDC(self) self.SetCurrent() #set this canvas to receive OpenGL calls self.DrawGL() #call to draw opengl model self.SwapBuffers() From my understanding SwapBuffers() will sync the buffer swap with the vertical refresh of the hardware. I believe the problem may be in the glCanvas. After searching through various posts it seems the glcanvas does double buffer, but when I check the class reference documents there is a constant WX_GL_DOUBLEBUFFER. Is it necessary to include this constant? In addition I'm not sure about the syntax since these constants are in an attribList. I think this may be the crux of the problem; any suggestions would be appreciated. Ian Has anyone ported the GLUI toolkit to python? Steve
http://sourceforge.net/p/pyopengl/mailman/pyopengl-users/?viewmonth=200501&viewday=12
CC-MAIN-2015-22
refinedweb
290
66.84
Association with artifacts can occur in the following ways:
- Link to a single artifact, e.g. a class
  - Out-of-sync risk = medium-high
  - Depending on the number of deletes and renames, layers linked to classes can have stale links. However, these are shown as warnings and can be cleaned up quite easily.
- Link to a group of artifacts using a container, e.g. a namespace
  - Out-of-sync risk = low
  - Linking a layer to a namespace indirectly links the layer to all classes in the namespace. Validation will start at the namespace and process everything in it. This indirect link allows the validation to adjust to changes to individual classes rather well.
  - Namespaces themselves do not change as often.
- Link to a group of artifacts using a query
  - Out-of-sync risk = very low
  - A query links to a dynamic collection of artifacts. The query is evaluated every time you validate. Hence, like the namespace example above, changes to the results of the query are automatically factored in.
- Build time validation
  - The integration with builds and check-ins makes this feature a lot more useful and brings it to the forefront of the development process. This also has the side benefit of forcing updates if they’re required.
https://blogs.msdn.microsoft.com/camerons/2008/09/30/architectural-validation-and-synchronization/
CC-MAIN-2016-44
refinedweb
228
63.7
Jose Coto (11,466 Points) If statement Hi, I am struggling to understand what is going on in this line of code: {% if saves.get(category) == choice %} checked {% endif %} Why would the if statement check if what has been saved is equal to choice? In that regard, how do we make sure that it will check the item that we have selected and not another? Thanks for your help! 1 Answer Daniel Schirmer (4,364 Points) It's not totally clear, because I couldn't see the code for the Update button, but the Update button should save the item that is clicked into the cookie: saves.get(category). So I think it would go like: - User clicks an item picture in a category and then clicks Update - Update button saves the item name/category into a cookie (via the save function with url redirects) - The if statement in question checks the cookie for a match {'shorts': 'yellow'} and makes "checked" the value from the key-value pair in the cookie.
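One way to see what the template expression does is to model it in plain Python. The dict and names below are illustrative stand-ins, not the course's actual code: the saves dict plays the role of the cookie, and the function returns what the template would render for one particular input:

```python
# Plain-Python model of the Jinja expression
# {% if saves.get(category) == choice %} checked {% endif %}
saves = {"shorts": "yellow"}          # what the Update button stored

def checked_attr(saves, category, choice):
    # The template runs once per rendered input, so "checked" is emitted
    # only for the choice that matches the one saved for its category.
    return "checked" if saves.get(category) == choice else ""

print(checked_attr(saves, "shorts", "yellow"))  # checked
print(checked_attr(saves, "shorts", "red"))     # (empty string)
```

This is why the right item gets marked: the comparison runs for every choice in the loop, but it only evaluates true for the single choice equal to the saved value.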
https://teamtreehouse.com/community/if-statement-4
CC-MAIN-2020-50
refinedweb
169
61.5
The goal of this plugin is to avoid the need to use npm or yarn clients to explicitly install your dependencies from the registry before you bundle. Instead of specifying your dependencies in package.json, you specify them in your source code as URLs in import statements. Then Rollup dynamically fetches and includes those dependencies when you bundle. For example, you could put the following in your rollup.config.js: import urlResolve from 'rollup-plugin-url-resolve'; export default { // ... plugins: [urlResolve()] }; Then, in your source files, you can do stuff like this: import * as d3 from ''; Run rollup, and you're done. No more npm install! :) Well, at least not for your app's dependencies. Currently, the following URL protocols are supported: https:and http: file: data: It might help to think about this plugin as an alternative to rollup-plugin-node-resolve, but for any URL, not just stuff you've already installed in node_modules. The urlResolve function accepts all the same options as make-fetch-happen. They are used when we need to fetch a module from a remote URL. One option that is particularly useful is cacheManager, which can be used to cache the results of fetch operations on disk. This can make your builds a lot faster if many of your URLs point to remote servers. import urlResolve from 'rollup-plugin-url-resolve'; export default { // ... plugins: [ urlResolve({ // Caches the results of all fetch operations // in a local directory named ".cache" cacheManager: '.cache' }) ] }; There are various other options as well, including support for retrying failed requests and proxy servers. Please see the list of options for more information. You could also try using a URL that returns CommonJS, though you won't get the benefit of tree-shaking that using JavaScript modules provides. Still, it can be a useful stopgap until a package you need starts publishing JavaScript modules. 
If you do this, you'll probably want to use rollup-plugin-commonjs on those URLs in your Rollup config, just like you would normally do for stuff in node_modules: import commonjs from 'rollup-plugin-commonjs'; import urlResolve from 'rollup-plugin-url-resolve'; export default { // ... plugins: [ urlResolve(), commonjs({ // Treat unpkg URLs as CommonJS include: /^https:\/\/unpkg\.com/, // ...except for unpkg ?module URLs exclude: /^https:\/\/unpkg\.com.*?\?.*?\bmodule\b/ }) ] };
https://openbase.com/js/rollup-plugin-url-resolve
CC-MAIN-2021-39
refinedweb
380
56.86
Why Should You Use Transform Class Properties Plugin November 27, 2017 In my previous post I used pretty interesting syntax to define class methods for my Popup component. I was able to use arrow functions to change the scope of this to the class level. Hmm, but it’s not actually Javascript, so how did I do that? First let’s refresh your memory; I’m talking about this code: import React, { Component } from 'react'; import Popup from './Popup'; import SubscriptionForm from './SubscriptionForm'; class App extends Component { constructor(props) { super(props); this.state = { isOpen: false }; } openPopup = () => { this.setState({ isOpen: true }); } closePopup = () => { this.setState({ isOpen: false }); } render() { return ( <div className="App"> <button onClick={this.openPopup}> Click Me! </button> <Popup show={this.state.isOpen} onClose={this.closePopup}> <SubscriptionForm></SubscriptionForm> </Popup> </div> ); } } export default App; Look at openPopup, for example. That openPopup = is exactly what transform-class-properties allowed me to do. openPopup = () => { this.setState({ isOpen: true }); } It also allowed me to use an arrow function here. If not, this in that function would reference the global scope instead of the scope of the App class. Probably I would get an error like Uncaught TypeError: Property 'setState' of object [object Object] is not a function. But What Are The Alternatives A more traditional and verbose approach would be to bind this manually. You can do this inside the constructor method. constructor(props) { super(props); this.openPopup = this.openPopup.bind(this); this.closePopup = this.closePopup.bind(this); this.state = { isOpen: false }; } You have to do this for every function that will use this reference, and it’s very repetitive.
You Can Bind In Render Function For example by using bind(this): <button onClick={this.openPopup.bind(this)}> Or by using arrow functions: <button onClick={e => this.openPopup(e)}> Both of these require additional hassle, look ugly and have performance implications, as you basically reallocate the function on every render. Summary This is why you're better off using class-level properties. And by the way, there is a proposal about class fields for future JS versions and it’s already at Stage 3. That means that it’s very likely to become part of the language. If you are interested in learning new Javascript features (maybe even ones that are not included yet) – make sure to subscribe to my mailing list:
https://maksimivanov.com/posts/why-you-should-use-transform-class-properties-plugin/
CC-MAIN-2019-43
refinedweb
381
58.69
I tried running the latest code and went as far back as 0.6.07 and I keep getting the following error message when running duplicity. Any ideas how to fix it? Tell me I don't have to upgrade to Python 2.4. [root@sierra duplicity]# duplicity Traceback (most recent call last): File "/usr/bin/duplicity", line 42, in ? from duplicity import commandline File "/usr/lib64/python2.3/site-packages/duplicity/commandline.py", line 147, in ? class DupOption(optparse.Option): File "/usr/lib64/python2.3/site-packages/duplicity/commandline.py", line 157, in DupOption ALWAYS_TYPED_ACTIONS = optparse.Option.ALWAYS_TYPED_ACTIONS + ("extend",) AttributeError: class Option has no attribute 'ALWAYS_TYPED_ACTIONS' It looks like ALWAYS_TYPED_ACTIONS was added in either Python 2.4 or 2.5, so you'll have to go back to before duplicity used it. Going back to 0.5.x should be enough. You could report a bug to the duplicity maintainers, since their web page says that Python 2.3 is supported. However they may just fix the issue by amending their requirements to
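A defensive pattern here (a sketch, not duplicity's actual code) is to feature-test for the attribute before relying on it, so that old interpreters fail with a clear message instead of a bare AttributeError:

```python
# Check at startup that optparse exposes the hook this program extends.
import optparse
import sys

if not hasattr(optparse.Option, "ALWAYS_TYPED_ACTIONS"):
    sys.exit("this tool needs a newer Python "
             "(optparse.Option.ALWAYS_TYPED_ACTIONS is missing)")

# On any modern Python the attribute exists, so extending it is safe --
# this mirrors the line from the traceback above.
extended = optparse.Option.ALWAYS_TYPED_ACTIONS + ("extend",)
print("extend" in extended)  # True
```

On Python 2.3 the hasattr() check would be false and the program would exit with the message, which is a friendlier failure than the traceback shown in the question.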
http://serverfault.com/questions/172407/duplicity-fails-to-run-w-python-2-3
crawl-003
refinedweb
174
54.49
Letter Maker Objective This project will give you some practice with loops, static methods, and removing code duplication, and will introduce the notion of having multiple classes working together in a single project. Overview The project consists of several classes working together -- most of them are provided for you -- only one class (LetterMaker) will be written by you. The final product will draw large letters made from blocks; the letters 'C', 'U', 'H', 'O', 'I', 'N', and 'Z' will be drawn (scroll down to see some examples). import java.awt.Color; You do not have to actually create Color objects yourself (that is done for you in the code that is provided by us), but if you wanted to create your own Color object you could use a statement like the one below: Color myColor = Color.BLACK; // myColor now refers to the color "black" There are other ways to generate just about any color you could imagine, but the simple syntax shown above works for the following built-in colors: BLACK, BLUE, CYAN, DARK_GRAY, GRAY, GREEN, LIGHT_GRAY, MAGENTA, ORANGE, PINK, RED, WHITE, YELLOW. DrawingGrid Class The DrawingGrid class has been written for you. It is part of a package called "CMSC131GridTools". That means that any class that wants to use DrawingGrid objects must begin with the following import statement at the top of the file: import CMSC131GridTools.DrawingGrid; A DrawingGrid object is a window that you can draw on. In the center of the window there is a square grid. You can color in each region of the grid with any color you want.
Below is an example of an "empty" 11 by 9 DrawingGrid object: empty11.jpg Creating a DrawingGrid Object You don't actually have to create any DrawingGrids yourself for this project (they're created by code that has been provided for you), but if you wanted to create a new DrawingGrid object, you could use a statement like this one: DrawingGrid myGrid = new DrawingGrid(13); // myGrid will refer to a //13 by 13 drawing Grid Coloring the Squares on the Grid Once you have a DrawingGrid object, you can color in any of the squares you want using any color you want. To do this, you use the method of the DrawingGrid class called setColor. The only method that is provided colors in exactly one square of the grid – no more. The setColor method of the DrawingGrid class takes three parameters. The signature for the method appears below. public void setColor(int row, int col, Color color). DrawingGrid myGrid = new DrawingGrid(5); // creates the 5 by 5 DrawingGrid myGrid.setColor(3, 3, Color.RED); // colors in the square at row 3, col 3, using red myGrid.setColor(0, 2, Color.BLUE); // colors in the square at row 0, col 2, using blue twoDots.jpg If a particular square on the grid has already been colored, there is no harm in re-coloring that same square -- the previous color will simply be replaced with the new color. Determining the size of the DrawingGrid There is one other method of the DrawingGrid class that you will need. Suppose you have a variable called "myGrid" that refers to an existing DrawingGrid, and you want to know how big it is. You can use the method "getGridSize" to find out. The signature for this method appears below: public int getGridSize() // the return value is the size of the grid For example, if the variable "myGrid" refers to an existing 17 by 17 DrawingGrid object, then the following expression would be equal to the integer 17: myGrid.getGridSize() LetterMaker Class OK, this is where YOU come in! 
We have provided a skeleton for this class that you must complete. There is basically just one method that you MUST fill in. It is recommended that you create other methods in this class if you find it useful to do so. You may find that writing additional methods can help to eliminate duplicative code. Since we will not be creating any LetterMaker objects, any methods you choose to write in this class should be declared using the keyword "static". The method that you must write has the following signature: public static void drawLetter(DrawingGrid grid, String letter, Color color) Remember that the three parameters (grid, letter, and color) are provided to your method by the method that calls it. Those parameters contain the information that your method needs in order to do its job. The parameter "grid" will refer to an existing DrawingGrid object that has already been created by the driver that calls this method. Your method must not create a DrawingGrid; it is already there, and if you create another it won't display or test correctly! Your method will color squares on the grid that is passed in via this parameter. You may assume that the grid starts off empty (all white squares). The parameter "letter" will be equal to one of the following characters: "C", "U", "H", "O", "I", "N", "Z" or "error". The "error" indicates that the user did not type in a valid letter. This should display as a dot in the bottom right corner as shown below. The parameter "color" will indicate the color the user has selected, or black if the user did not indicate a valid color option. Using the Color specified by the parameter "color", your method will draw a big block-letter on the grid (see the section below for examples), depicting the letter specified by the "letter" parameter. The style for this block letter must conform exactly to the examples below. Note that the size of the grid can be determined by calling the "getGridSize" method of the DrawingGrid class.
The size of the grid dictates how big of a letter you are supposed to draw (the letter must fill up the entire grid, as in the examples shown below.)
https://www.daniweb.com/programming/software-development/threads/263992/i-am-new-with-java-i-don-t-know-where-to-start-i-can-even-write-my-own-method
CC-MAIN-2017-26
refinedweb
979
59.74
This post is the result of some head-scratching and note taking I did for a reporting project I undertook recently. It’s not a complete rundown of Python date manipulation, but hopefully the post (and hopefully the comments) will help you and maybe me too 🙂 The head-scratching is related to the fact that there are several different time-related objects, spread out over a few different time-related modules in Python, and I have found myself in plenty of instances where I needed to mix and match various methods and objects from different modules to get what I needed (which I thought was pretty simple at first glance). Here are a few nits to get started with: - strftime/strptime can generate the “day of week” where Sunday is 0, but there’s no way to tell any of the conversion functions like gmtime() that you want your week to start on Sunday as far as I know. I’m happy to be wrong, so leave comments if I am. It seems odd that you can do a sort of conversion like this when you output, but not within the calculation logic. - If you have a struct_time object in localtime format and want to convert it to an epoch date, time.mktime() works, but if your struct_time object is in UTC format, you have to use calendar.timegm() — this is lame and needs to go away. Just add timegm() to the time module (possibly renamed?). - time.ctime() will convert an epoch date into nicely formatted local time, but there’s no function to provide the equivalent output for UTC time. There are too many methods and modules for dealing with date manipulation in Python, such that performing fairly common tasks requires importing and using a few different modules, different object types and methods from each. I’d love this to be cleaned up. I’d love it more if I were qualified to do it. More learning probably needs to happen for that. Anyway, just my $.02. Mission 1: Calculating Week Start/End Dates Where Week Starts on Sunday My mission: Pull epoch dates from a database. 
They were generated on a machine whose time does not use UTC, but rather local time (GMT-4). Given the epoch date, find the start and end of the previous week, where the first day of the week is Sunday, and the last day of the week is Saturday. So, I need to be able to get a week start/end range, from Sunday at 00:00 through Saturday at 23:59:59. My initial plan of attack was to calculate midnight of the current day, and then base my calculations for Sunday 00:00 on that, using simple timedelta(days=x) manipulations. Then I could do something like calculate the next Sunday and subtract a second to get Saturday at 23:59:59. Nothing but ‘time’ In this iteration, I’ll try to accomplish my mission using only the ‘time’ module and some epoch math. Seems like you should be able to easily get the epoch value for midnight of the current epoch date, and display it easily with time.ctime(). This isn’t quite true, however. See here: >>> etime = int(time.time()) >>> time.ctime(etime) 'Thu May 20 15:26:40 2010' >>> etime_midnight = etime - (etime % 86400) >>> time.ctime(etime_midnight) 'Wed May 19 20:00:00 2010' >>> The reason this doesn’t do what you might expect is that time.ctime() in this case outputs the local time, which in this case is UTC-4 (I live near NY, USA, and we’re currently in DST. The timezone is EDT now, and EST in winter). So when you do math on the raw epoch timestamp (etime), you’re working with a bare integer that has no idea about time zones. Therefore, you have to account for that. Let’s try again: >>> etime = int(time.time()) >>> etime 1274384049 >>> etime_midnight = (etime - (etime % 86400)) + time.altzone >>> time.ctime(etime_midnight) 'Thu May 20 00:00:00 2010' >>> So, why is this necessary? 
It might be clearer if we throw in a call to gmtime() and also make the math bits more transparent:

>>> etime
1274384049
>>> time.ctime(etime)
'Thu May 20 15:34:09 2010'
>>> etime % 86400
70449
>>> (etime % 86400) / 3600
19
>>> time.gmtime(etime)
time.struct_time(tm_year=2010, tm_mon=5, tm_mday=20, tm_hour=19, tm_min=34, tm_sec=9, tm_wday=3, tm_yday=140, tm_isdst=0)
>>> midnight = etime - (etime % 86400)
>>> time.gmtime(midnight)
time.struct_time(tm_year=2010, tm_mon=5, tm_mday=20, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=140, tm_isdst=0)
>>> time.ctime(midnight)
'Wed May 19 20:00:00 2010'
>>> time.altzone
14400
>>> time.altzone / 3600
4
>>> midnight = (etime - (etime % 86400)) + time.altzone
>>> time.gmtime(midnight)
time.struct_time(tm_year=2010, tm_mon=5, tm_mday=20, tm_hour=4, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=140, tm_isdst=0)
>>> time.ctime(midnight)
'Thu May 20 00:00:00 2010'
>>>

What's that now? You want what? You want the epoch timestamp for the previous Sunday at midnight? Well, let's see. The time module in Python doesn't do deltas per se. You can calculate things out using the epoch bits and some math if you wish. The only bit that's really missing is the day of the week our current epoch timestamp lives on.

>>> time.ctime(midnight)
'Thu May 20 00:00:00 2010'
>>> struct_midnight = time.localtime(midnight)
>>> struct_midnight
time.struct_time(tm_year=2010, tm_mon=5, tm_mday=20, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=140, tm_isdst=1)
>>> dow = struct_midnight.tm_wday
>>> dow
3
>>> midnight_sunday = midnight - ((dow + 1) * 86400)
>>> time.ctime(midnight_sunday)
'Sun May 16 00:00:00 2010'

You can do this going forward in time from the epoch time as well.
Remember, we also want to grab 23:59:59 on the Saturday after the epoch timestamp you now have:

>>> saturday_night = midnight + ((5 - dow + 1) * 86400) - 1
>>> time.ctime(saturday_night)
'Sat May 22 23:59:59 2010'
>>>

And that's how you do date manipulation using *only* the time module. Elegant, no? No. Not really. Unfortunately, the alternatives also aren't the most elegant in the world, imho. So let's try doing this all another way, using the datetime module and timedelta objects.

Now with datetime!

The documentation for the datetime module says: "While date and time arithmetic is supported, the focus of the implementation is on efficient member extraction for output formatting and manipulation."

Hm. Sounds a lot like what the time module functions do. Some conversion here or there, but no real arithmetic support. We had to pretty much do it ourselves mucking about with epoch integer values. So what's this buy us over the time module? Let's try to do our original task using the datetime module. We're going to start with an epoch timestamp, and calculate the values for the previous Sunday at midnight, and the following Saturday at 23:59:59.

The first thing I had a hard time finding was a way to deal with the notion of a "week". I thought I'd found it in 'date.timetuple()', which help(date.timetuple) says is "compatible with time.localtime()". I guess they must mean that the output is the same as time.localtime(), because I can't find any other way in which it is similar. Running time.localtime() with no arguments returns a struct_time object for the current time. date.timetuple() requires arguments or it'll throw an error, and to make you extra frustrated, the arguments it takes aren't in the docs or the help() output. So maybe they mean it takes the same arguments as time.localtime(), eh? Not so much — time.localtime() takes an int representing an epoch timestamp. Trying to feed an int to date.timetuple throws an error saying it requires a 'date' object.
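The REPL steps above can be consolidated into one function. This sketch deviates from the post in one respect: it uses gmtime() instead of localtime()/altzone, so the result is computed in UTC and doesn't depend on the timezone of the machine running it — that substitution is mine, not the post's:

```python
import time

def last_whole_week_utc(etime):
    """Given an epoch timestamp, return (sunday_midnight, saturday_23_59_59)
    as epoch values, computed in UTC. The post's version uses localtime()
    and time.altzone instead, which ties the result to the local timezone."""
    midnight = etime - (etime % 86400)      # midnight UTC of the same day
    dow = time.gmtime(midnight).tm_wday     # Monday is 0, Sunday is 6
    midnight_sunday = midnight - ((dow + 1) * 86400)
    saturday_night = midnight + ((5 - dow + 1) * 86400) - 1
    return midnight_sunday, saturday_night

# The timestamp from the post: Thu May 20 2010, 19:34:09 UTC
start, end = last_whole_week_utc(1274384049)
print(time.gmtime(start)[:6], time.gmtime(end)[:6])
```

Same arithmetic, but all the timezone bookkeeping disappears because everything stays in one frame of reference.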
So, the definition of "compatible" is a little unclear to me in this context.

So here I've set about finding today, then "last saturday", and then "the sunday before the last saturday":

def get_last_whole_week(today=None):
    # a date object
    date_today = today or datetime.date.today()
    # day 0 is Monday. Sunday is 6.
    dow_today = date_today.weekday()
    if dow_today == 6:
        days_ago_saturday = 1
    else:
        # If day between 0-5, to get last saturday, we need to go to day 0 (Monday), then two more days.
        days_ago_saturday = dow_today + 2
    # Make a timedelta object so we can do date arithmetic.
    delta_saturday = datetime.timedelta(days=days_ago_saturday)
    # saturday is now a date object representing last saturday
    saturday = date_today - delta_saturday
    # timedelta object representing '6 days'...
    delta_prevsunday = datetime.timedelta(days=6)
    # Making a date object. Subtract the days from saturday to get "the Sunday before that".
    prev_sunday = saturday - delta_prevsunday

This gets me date objects representing the start and end time of my reporting range… sort of. I need them in epoch format, and I need to specifically start at midnight on Sunday and end on 23:59:59 on Saturday night. Sunday at midnight is no problem: timetuple() sets time elements to 0 anyway. For Saturday night, in epoch format, I should probably just calculate a date object for two Sundays a week apart, and subtract one second from one of them to get the last second of the previous Saturday.

Here's the above function rewritten to return a tuple containing the start and end dates of the previous week. It can optionally be returned in epoch format, but the default is to return date objects.

def get_last_whole_week(today=None, epoch=False):
    # a date object
    date_today = today or datetime.date.today()
    print "date_today: ", date_today
    # By default day 0 is Monday. Sunday is 6.
    dow_today = date_today.weekday()
    print "dow_today: ", dow_today
    if dow_today == 6:
        days_ago_saturday = 1
    else:
        # If day between 0-5, to get last saturday, we need to go to day 0 (Monday), then two more days.
        days_ago_saturday = dow_today + 2
    print "days_ago_saturday: ", days_ago_saturday
    # Make a timedelta object so we can do date arithmetic.
    delta_saturday = datetime.timedelta(days=days_ago_saturday)
    print "delta_saturday: ", delta_saturday
    # saturday is now a date object representing last saturday
    saturday = date_today - delta_saturday
    print "saturday: ", saturday
    # timedelta object representing '6 days'...
    delta_prevsunday = datetime.timedelta(days=6)
    # Making a date object. Subtract the 6 days from saturday to get "the Sunday before that".
    prev_sunday = saturday - delta_prevsunday
    # we need to return a range starting with midnight on a Sunday, and ending w/ 23:59:59 on the
    # following Saturday... optionally in epoch format.
    if epoch:
        # saturday is date obj = 'midnight saturday'. We want the last second of the day, not the first.
        saturday_epoch = time.mktime(saturday.timetuple()) + 86399
        prev_sunday_epoch = time.mktime(prev_sunday.timetuple())
        last_week = (prev_sunday_epoch, saturday_epoch)
    else:
        saturday_str = saturday.strftime('%Y-%m-%d')
        prev_sunday_str = prev_sunday.strftime('%Y-%m-%d')
        last_week = (prev_sunday_str, saturday_str)
    return last_week

It would be easier to just have some attribute for datetime objects that lets you set the first day of the week to be Sunday instead of Monday. It wouldn't completely alleviate every conceivable issue with calculating dates, but it would be a help. The calendar module has a setfirstweekday() method that lets you set the first weekday to whatever you want. I gather this is mostly for formatting output of matrix calendars, but it would be useful if it could be used in date calculations as well. Perhaps I've missed something? Clues welcome.

Mission 2: Calculate the Prior Month's Start and End Dates

This should be easy.
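To sanity-check the logic against a known date, here is a condensed copy of the function above (prints removed, Python 3 syntax, string output only) run against the date used throughout this post — May 20, 2010, a Thursday:

```python
import datetime

def get_last_whole_week(today=None):
    # Condensed from the post's function; returns (sunday_str, saturday_str).
    date_today = today or datetime.date.today()
    dow_today = date_today.weekday()  # Monday is 0, Sunday is 6
    days_ago_saturday = 1 if dow_today == 6 else dow_today + 2
    saturday = date_today - datetime.timedelta(days=days_ago_saturday)
    prev_sunday = saturday - datetime.timedelta(days=6)
    return prev_sunday.strftime('%Y-%m-%d'), saturday.strftime('%Y-%m-%d')

print(get_last_whole_week(datetime.date(2010, 5, 20)))
# → ('2010-05-09', '2010-05-15')
```

Thursday has weekday() 3, so last Saturday is 5 days back (May 15), and the Sunday before that is May 9 — the last whole Sunday-through-Saturday week, as intended.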
What I hoped would happen is I'd be able to get today's date, and then create a timedelta object for '1 month', and subtract, having Python take care of things like changing the year when the current month is January. Calculating this yourself is a little messy: you can't just use "30 days" or "31 days" as the length of a month, because:

- "January 31" – "30 days" = "January 1" — not the previous month.
- "March 1" – "31 days" = "January 30" — also not the previous month.

Instead, what I did was this:

- create a datetime object for the first day of the current month (hard coding the 'day' argument)
- used a timedelta object to subtract a day, which gives me a datetime object for the last day of the prior month (with year changed for me if needed),
- used that object to create a datetime object for the first day of the prior month (again hardcoding the 'day' argument)

Here's some code:

today = datetime.datetime.today()
first_day_current = datetime.datetime(today.year, today.month, 1)
last_day_previous = first_day_current - datetime.timedelta(days=1)
first_day_previous = datetime.datetime(last_day_previous.year, last_day_previous.month, 1)
print 'Today: ', today
print 'First day of this month: ', first_day_current
print 'Last day of last month: ', last_day_previous
print 'First day of last month: ', first_day_previous

This outputs:

Today: 2010-07-06 09:57:33.066446
First day of this month: 2010-07-01 00:00:00
Last day of last month: 2010-06-30 00:00:00
First day of last month: 2010-06-01 00:00:00

Not nearly as onerous as the week start/end range calculations, but I kind of thought that between all of these modules we have that one of them would be able to find me the start and end of the previous month. The raw material for creating this is, I suspect, buried somewhere in the source code for the calendar module, which can tell you the start and end dates for a month, but can't do any date calculations to give you the previous month.
The datetime module can do calculations, but it can't tell you the start and end dates for a month. The datetime.timedelta object's largest granularity is 'week' if memory serves, so you can't just do 'timedelta(months=1)', because the deltas are all converted internally to a fixed number of days, seconds, or microseconds, and a month isn't a fixed number of any of them.

Converge!

While I could probably go ahead and use dateutil, which is really darn flexible, I'd rather be able to do this without a third-party module. Also, dateutil's flexibility is not without its complexity, either. It's not an insurmountable task to learn, but it's not like you can directly transfer your experience with the built-in modules to using dateutil.

I don't think merging all of the time-related modules in Python would be necessary or even desirable, really, but I haven't thought deeply about it. Perhaps a single module could provide a superclass for the various time-related objects currently spread across three modules, and they could share some base-level functionality. Hard to conceive of a timedelta object not floating alone in space in that context, but alas, I'm thinking out loud. Perhaps a dive into the code is in order.

What have you had trouble doing with dates and times in Python? What docs have I missed? What features are completely missing from Python in terms of time manipulation that would actually be useful enough to warrant inclusion in the collection of included batteries? Let me know your thoughts.
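For what it's worth, the calendar module can be pressed into service here: calendar.monthrange(year, month) returns the weekday of the first day and the number of days in the month, which gives the previous month's start and end without any timedelta subtraction at all. A small sketch (the helper name is mine):

```python
import calendar
import datetime

def previous_month_range(today):
    """Return (first_day, last_day) date objects for the month before `today`."""
    year, month = today.year, today.month
    # Step back one month, rolling the year over in January.
    if month == 1:
        year, month = year - 1, 12
    else:
        month -= 1
    # monthrange() returns (weekday_of_first_day, number_of_days_in_month).
    _, num_days = calendar.monthrange(year, month)
    return datetime.date(year, month, 1), datetime.date(year, month, num_days)

print(previous_month_range(datetime.date(2010, 7, 6)))
# → (datetime.date(2010, 6, 1), datetime.date(2010, 6, 30))
```

It still takes an explicit year rollover, though, which is exactly the bookkeeping a timedelta(months=1) would have hidden.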
SOLVED Access current glyph collection view externally

Is there a way to capture the current doodleGlyphCollectionView without an observer like glyphCollectionDraw? I'd like to manipulate the glyph collection with a button somewhere. Is there something like CurrentGlyphCollectionView? Thanks!

multiple ways to access it:

from mojo.UI import CurrentFontWindow
v = CurrentFontWindow().fontOverview.getGlyphCollection()
print(v)
# or
v = CurrentFont().document().getGlyphCollection()
print(v)

Perfect, thanks Frederik! Is there an easy way to insert a button at the bottom of the glyph collection? I've tried some things, but wondering if it would require a fontOverviewWillOpen or something of the like. Perhaps addSubview_()?

hi @ryan have a look at this example (not sure if this is what you mean)

Gah, of course I would completely forget that I've asked this before. Apologies for wasting time!

no time wasted :) I will add this example to the docs, so it's easier to find next time. thanks!

Thanks! Last question on this now that I see the old thread — anyway to solve for Single Window Mode? Would love for the button to be just below the Font Overview, but subscribing to fontWindowWillOpen seems to anchor it to the whole window. This probably works well for the font overview in Multi-window Mode, but in SWM it's at the very bottom left (see on top of the layout option buttons). The only observer I see firing when Font Overview is opened is glyphCollectionDraw. In my mind, even after I capture the window corresponding to the view:

CurrentFontWindow().fontOverview.getGlyphCollection().getGlyphCellView()

...I'd need to have an observer for when the Font Overview panel is hidden, so I could then hide the button. Hope this makes sense.

A fontOverview has a statusBar which is a vanilla group. But be careful: RoboFont posts messages in the statusbar, like glyph selection count, ...
This is all exactly what I needed, thanks for your patience @frederik @gferreira

Just a quick piggyback (sorry), is there a way to get CurrentFontWindow().fontOverview.statusBar.show(False) to work? I would love to add that to this gist: My small laptop screen thanks you in advance.

This does most of the job, but leaves those two bottom left buttons, which I can't track down.

from mojo.UI import CurrentFontWindow

fo = CurrentFontWindow().fontOverview
# gradient layer on bottom bar and any subviews
fo.statusBar.show(0)
# line of text showing what glyphs were selected
fo.views.selectionStatus.show(0)
# glyph cell size slider
fo.views.sizeSlider.show(0)
# remove bottom bar (negative space) by changing glyph collection area to full size
fo.getGlyphCollection().setPosSize((0,0,0,0))

It was a dig but I got it, in case anyone is following (and sorry for abusing the RoboFont UI so much, Frederik):
Preface: Part 1 covered the sketch the Arduino will run for this example, and part 2 covered the resources and other people's code I used to make sure everything works as expected. In part 3 I'm going to go through a small program that does exactly what I want: read serial data from the Arduino. Part 4 is here.

Part 3: Reading

I've had a tough time writing this part. One of the hardest lessons I've had to learn over the past few years of learning C is that there is never one way to do something, and of the multiple ways to do something there is often never one clear better way to do it. My Arduino is sending "Hello World" once every second to my PC via a serial connection, and I want to read this and print it out on the PC. Should I write my program to wait until 12 characters are read? Should I wait until one second goes by and write whatever was in the buffer? Should I open the port to be blocking or non-blocking? Canonical or non-canonical? There is no right answer, so I went with what I could make work in a sensible way.

Since my data is always going to be 12 characters long, and it's always going to come in at once per second, I decided to stick with something similar to Tod Kurt's example, with the exception that instead of looking for the newline character, I'll use the VMIN control to get exactly 12 (or however many) characters from the serial buffer.

IMPORTANT: This means that the read() function will block (pause) until all 12 characters are received!
Here's the program:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>

/* My Arduino is on /dev/ttyACM0 */
char *portname = "/dev/ttyACM0";
char buf[256];

int main(int argc, char *argv[])
{
    int fd;

    /* Open the file descriptor (blocking mode, no controlling tty) */
    fd = open(portname, O_RDWR | O_NOCTTY);

    /* Set up the control structure */
    struct termios toptions;

    /* Get currently set options for the tty */
    tcgetattr(fd, &toptions);

    /* Set custom options */

    /* 9600 baud */
    cfsetispeed(&toptions, B9600);
    cfsetospeed(&toptions, B9600);
    /* 8 bits, no parity, no stop bits */
    toptions.c_cflag &= ~PARENB;
    toptions.c_cflag &= ~CSTOPB;
    toptions.c_cflag &= ~CSIZE;
    toptions.c_cflag |= CS8;
    /* no hardware flow control */
    toptions.c_cflag &= ~CRTSCTS;
    /* enable receiver, ignore status lines */
    toptions.c_cflag |= CREAD | CLOCAL;
    /* disable input/output flow control, disable restart chars */
    toptions.c_iflag &= ~(IXON | IXOFF | IXANY);
    /* disable canonical input, disable echo, disable visually
       erase chars, disable terminal-generated signals */
    toptions.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG);
    /* disable output processing */
    toptions.c_oflag &= ~OPOST;

    /* wait for 12 characters to come in before read returns */
    /* WARNING! THIS CAUSES THE read() TO BLOCK UNTIL ALL */
    /* CHARACTERS HAVE COME IN! */
    toptions.c_cc[VMIN] = 12;
    /* no minimum time to wait before read returns */
    toptions.c_cc[VTIME] = 0;

    /* commit the options */
    tcsetattr(fd, TCSANOW, &toptions);

    /* Wait for the Arduino to reset */
    usleep(1000*1000);
    /* Flush anything already in the serial buffer */
    tcflush(fd, TCIFLUSH);
    /* read up to 128 bytes from the fd */
    int n = read(fd, buf, 128);

    /* print how many bytes read */
    printf("%i bytes got read...\n", n);
    /* print what's in the buffer */
    printf("Buffer contains...\n%s\n", buf);

    return 0;
}

It's not that interesting to look at.
I'll try to explain it best I can – there are several key things that I'm shaky on that require further experimentation.

First up is declaring the headers, and a few constants, like the port name and the buffer I'll be reading into. I picked 256 as the buffer size for no particular reason other than it's bigger than I need.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>

/* My Arduino is on /dev/ttyACM0 */
char *portname = "/dev/ttyACM0";
char buf[256];

In main() I declare an integer 'fd' to be the file descriptor that open() returns on the next line.

int fd;

/* Open the file descriptor */
fd = open(portname, O_RDWR | O_NOCTTY);

I mentioned before that some things were still unknown to me. In Tod Kurt's example he uses open() with one more option – O_NDELAY – which opens the port in non-blocking mode. I had some complications with this, so I removed it and magically my complications went away. Several iterations of this program ago I found that using non-blocking mode meant that read() wouldn't wait until data was in the buffer to return, but instead of read() returning 0 it was returning -1. This wound up being because my Arduino was busy rebooting (which it does when you open the port) while read() was running. I added some delay and it worked fine, but then I couldn't reconcile the implications of a non-blocking port and non-canonical input. I've been oscillating between thinking I had a grip on it and being completely confused, so I'll tackle it another day.

Next is setting up the serial communications to our particular way of doing things. Terminal options are held in a termios structure, and it's typical after declaring the structure (although not entirely necessary, I think) to then set it to the currently set options of the port.
/* Set up the control structure */
struct termios toptions;

/* Get currently set options for the tty */
tcgetattr(fd, &toptions);

So now we have the structure toptions set to what currently is set on the tty port. The terminal command stty can show you what a port is set to, so I assume tcgetattr() does something similar and plugs it into the toptions structure.

The next part here is basically the same as Tod Kurt's example, but it's all pretty typical – the Unix programming book I'm referencing has something similar in the examples where you want the data coming at you in non-canonical raw mode. The comments explain what each line does. Some flags are grouped together for brevity – they don't need to be like that (and they COULD all be lumped together).

/* 9600 baud */
cfsetispeed(&toptions, B9600);
cfsetospeed(&toptions, B9600);
/* 8 bits, no parity, no stop bits */
toptions.c_cflag &= ~PARENB;
toptions.c_cflag &= ~CSTOPB;
toptions.c_cflag &= ~CSIZE;
toptions.c_cflag |= CS8;
/* no hardware flow control */
toptions.c_cflag &= ~CRTSCTS;
/* enable receiver, ignore status lines */
toptions.c_cflag |= CREAD | CLOCAL;
/* disable input/output flow control, disable restart chars */
toptions.c_iflag &= ~(IXON | IXOFF | IXANY);
/* disable canonical input, disable echo, disable visually
   erase chars, disable terminal-generated signals */
toptions.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG);
/* disable output processing */
toptions.c_oflag &= ~OPOST;

This was the first time I'd really had to set/unset options using bitwise operators (in a non-tutorial setting). It's important to remember that it's there for your convenience, and not meant to annoy.
An alternative approach would be to have the termios structure have every individual option exist as a variable set to 0 or 1 in the structure, but for the sake of size and brevity (and because C doesn't have a true bool type, perhaps) it was done such that in the termios structure c_iflag, c_oflag, c_cflag, and c_lflag are all unsigned integers, and each of the 16 bits of the unsigned int represents a different option that can be set. Wikipedia helps explain what the &, |, and ~ do for us here.

VMIN and VTIME are important in non-canonical processing of serial data. There's a fantastic explanation of how to best utilize them here, but we can take it in this example to mean that read() will wait for 12 characters to come in before returning.

/* wait for 12 characters to come in before read returns */
toptions.c_cc[VMIN] = 12;
/* no minimum time to wait before read returns */
toptions.c_cc[VTIME] = 0;

So at this point the bits are set, but the serial driver doesn't know it, so call tcsetattr().

/* commit the options */
tcsetattr(fd, TCSANOW, &toptions);

All that setting happens nearly instantly after open() is called. It's not very obvious, but when open is called a signal is sent via the serial port that the Arduino interprets as "reboot now". There is a way around it on the Arduino end, but it's easy enough to simply use usleep() to wait a bit before calling read().

/* Wait for the Arduino to reset */
usleep(1000*1000);
/* Flush anything already in the serial buffer */
tcflush(fd, TCIFLUSH);
/* read up to 128 bytes from the fd */
int n = read(fd, buf, 128);

/* print how many bytes read */
printf("%i bytes got read...\n", n);
/* print what's in the buffer */
printf("Buffer contains...\n%s\n", buf);

The last bits of code do the waiting for reboot, flush what's still in the serial buffer, read into the buffer declared (obeying the VMIN rules already defined), and print out what was received.
If VMIN is changed from 12 to 24 you get "Hello World" twice. This makes sense, so all is well. So there is a minimal but functional example of reading serial data from an Arduino.

Here are my remaining questions I want to answer someday.

1) Is tcgetattr() really necessary? What if I memset() the structure to 0 and only set what I want?
2) How much of what I set is really necessary? I'm only reading data, so should I care about ECHO, ECHOE, or ISIG?
3) Why does Tod's example work when using a non-blocking port, but mine doesn't?
4) What's the point of using a non-blocking port with non-canonical communication?

Some of these questions likely have very complicated answers. I know that there are functions for intelligently knowing when a serial port is ready (which would help with questions 3 and 4). Questions 1 and 2 simply require experimenting (which requires time).

Part 4 will take longer to come. I want to use what I've learned here to do something interesting with serial communication. I have a good project in mind, and I'm shooting for results and a writeup by mid-July. In the meantime I'm going to start learning about socket programming.

EDIT: Part 4 is here.

nice article. Thank you!

Thank you! I finally managed to get the communication working! You had exactly the same problems with the communications that I did! 🙂 My take on question 1 is that it's not needed. I guess it could be system specific, but I have no need to reinstate the previous settings on the device (/dev/ttyACM?) after I'm done (i.e. the program terminates). I'm assuming that's what it is used for.

Your article gave me a great start for writing my own code! However, I was having a problem where I could only receive data from my Arduino after I used a serial port monitor like GNU screen or the monitor that comes with the Arduino IDE. I finally solved this problem by adding cfmakeraw(&toptions) after tcgetattr(fd, &toptions). I'll admit, I don't really know what cfmakeraw does, but it seems to work.
cfmakeraw() looks like a sort of helper function that sets some parts of the termios structure. Take a look here: and here: Instead of setting a handful of flags yourself, cfmakeraw() does it for you.

This was very helpful, except for one particular command:

/* wait for 12 characters to come in before read returns */
toptions.c_cc[VMIN] = 12;

If I have understood this correctly, the read() call doesn't return until 12 characters have been read? If so, this is a cumbersome approach, and it got me stuck for hours trying to figure out why I couldn't read my 7-something-character message from my Arduino. Why not read repeatedly until you reach an end-of-stream character (I use '\n')? Then your message would be read, regardless of whether it is 10 or 14 characters. Considering you only read 12 characters at a time, why is your buffer 128 bytes? Sorry if I misunderstood what this function does, but after a couple of frustrating hours I finally removed that line and now it works as I want. I honestly think you should remove it too, as it will ONLY work for 12-character messages.

I appreciate the feedback. In the paragraphs above the code I explained my reasoning behind setting VMIN to 12 – sorry it caused problems. Using '\n' as a delimiter is (if I recall correctly) when you'd use canonical mode, instead of non-canonical, where you depend on a character limit or time limit (blocking until the limit is met). I almost never use canonical mode day-to-day, so I'm not speaking from authority there.

Woof, it's been a long time since I've looked at this. The whole serial communication series is due for an overhaul – I wrote it over three years ago! I'll edit that part of the code to mention it will block until the set number of characters are met.

Chris,

The code was very helpful for a project I'm working on, but I've run into a problem I was hoping you could help with. I have my VMIN set to 56, so I assume that means I should be able to read in 56 bytes.
My Arduino loop, shown below, writes "Hello World" to the serial port once every 2 seconds for 30 iterations and then prints "ducks". I'm planning on using "ducks" as a trigger value; eventually I want the code to search through an input string and trigger on a certain string or sequence of characters. With the carriage return and the newline character, "Hello World" equates to 13 bytes. Yet, when I run your code it only reads the data on the port once before closing out the connection and returning me to the command line. Am I doing something wrong? Thanks in advance.

void loop()
{
    int i = 0;
    while (i < 30) {
        Serial.println("Hello World");
        delay(2000);
        i++;
    }
    Serial.println("ducks");
}

My output from the code is:

13 bytes got read...
Buffer contains...
Hello World

Chris,

With regard to my post a few minutes ago: I discovered what the problem was. I read through part 4 again and saw the part about your typo regarding "c_iflag to ICANON instead of c_lflag". As soon as I changed it the code worked correctly. Thanks again.

I'm glad you got it working! I'm going to fix that right now so that it doesn't throw anyone else off.

I cannot understand the difference between "blocking / non-blocking" vs "canonical / non-canonical". Could you please explain a little about them or put up some link for further reading?

The answer to this SO question goes into it a bit: In short, canonical mode blocks until a whole line is received; non-canonical mode blocks until a minimum number of characters has been received or a maximum amount of time has elapsed.

Careful, because the code is badly represented. Where you have tcgetattr(fd, &toptions); it appears as tcsetattr(fd, TCSANOW, &toptions); And some people have to put the sys/ prefix in the #include to get the actual library. Apart from that, it's awesome that you took your time to do this walkthrough.

Thanks for the feedback. A while back this post, and a few other posts with code samples, got really garbled up.
The WordPress code formatter is wonky at best. I’ll see about fixing it.
The QObjectCleanupHandler class watches the lifetime of multiple QObjects. More...

#include <QObjectCleanupHandler>

Inherits QObject.

The QObjectCleanupHandler class watches the lifetime of multiple QObjects.

See also QPointer.

Constructs an empty QObjectCleanupHandler.

Destroys the cleanup handler. All objects in this cleanup handler will be deleted.

See also clear().

Adds object to this cleanup handler and returns the pointer to the object.

See also remove().

Deletes all objects in this cleanup handler. The cleanup handler becomes empty.

See also isEmpty().

Returns true if this cleanup handler is empty or if all objects in this cleanup handler have been destroyed; otherwise returns false.

See also add(), remove(), and clear().

Removes the object from this cleanup handler. The object will not be destroyed.

See also add().
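A short sketch of how the class is typically used, based on the member descriptions above (the Qt 4 API: add(), clear(), isEmpty()). This is an illustration, not code from the reference page:

```cpp
#include <QObject>
#include <QObjectCleanupHandler>

void example()
{
    QObjectCleanupHandler cleanup;

    QObject *a = new QObject;
    QObject *b = new QObject;
    cleanup.add(a);   // add() returns the pointer, so it can wrap a new-expression
    cleanup.add(b);

    delete a;         // the handler watches lifetimes, so it notices this
    // cleanup.isEmpty() is still false here: b is alive and tracked

    cleanup.clear();  // deletes b; the handler becomes empty
    // Anything still tracked when `cleanup` itself is destroyed
    // would have been deleted by its destructor instead.
}
```

The point of the class over a plain container of pointers is exactly the lifetime watching: objects deleted elsewhere are dropped from the set automatically, so clear() and the destructor never double-delete.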
#include <ggi/gg.h>

struct gg_task {
        gg_task_callback_fn *cb;   /* Function to call to run task */
        void *hook;                /* Task data can be hung here */
        int pticks;                /* Run once every pticks ticks. */
        int ncalls;                /* Run ncalls times (0 = infinite) */
        int lasttick;              /* last tick run (read-only) */

        /* Other members present but are for internal use only. */
};

typedef int (gg_task_callback_fn)(struct gg_task *task);

GG_SCHED_TICKS2USECS(uint32_t ticks);
GG_SCHED_USECS2TICKS(uint32_t usecs);

uint32_t ggTimeBase(void);

int ggAddTask(struct gg_task *task);
int ggDelTask(struct gg_task *task);

The LibGG task scheduler uses a unit of time called a "tick", which may vary between architectures. The tick is guaranteed to be no more than one second; however, most environments will support at least 60 ticks per second. By default LibGG will select 60 ticks per second if it is supported; see below for instructions on modifying this behavior. The function ggTimeBase is used to find out the size of a tick. GG_SCHED_TICKS2USECS and GG_SCHED_USECS2TICKS are convenient macros that simplify conversion between ticks and microseconds and vice versa.

The maximum rate at which a periodic task may run is once per tick. The maximum period (minimum rate) of a LibGG task is the value of the macro GG_SCHED_TICK_WRAP minus one, and is also measured in ticks.

ggAddTask will examine the values in the offered task control structure task. Before calling ggAddTask the task control structure must be initialized by filling it with zeros, including the internal-use-only area. The task control structure should be further initialized by providing at least a pointer to a callback handler function in the member cb, and initializing the pticks member to contain the number of ticks between each call to the handler function.
The ncalls member may be left at zero, in which case the task remains scheduled to run once every pticks ticks until explicitly deleted, or it may be set to a positive integer to indicate that the task should be automatically deleted after the handler has been called ncalls times. The int return type on the callback hook is only there for possible future expansion; for now, callbacks should always return 0. Other values are undefined. The task control structure must only be used for one task; however, a task handler may be called by multiple tasks. The hook member in the task control structure is provided for the application's use, as a means to easily transport task-local data to the handler.

If a tick arrives during a call to ggAddTask, the handler may be invoked before ggAddTask returns; a memory barrier is included in ggAddTask which ensures that all values in the task control structure are up to date on multiprocessor systems even in this case. The task control structure should not be altered while the task is scheduled, except by a task handler as noted below.

ggDelTask will remove a task from the scheduler. The task may be called after ggDelTask is called, but is guaranteed not to be called after ggDelTask has returned, until such a point as it is added again with ggAddTask.

A task can be put to sleep for a certain amount of time in microseconds by altering the period of the task to the correct number of ticks; that task can then reset its own period, based on a value in its private hook, when it next runs. A task can wait for another task to finish either by polling the other task's flags, or by having the latter task invoke a callback when it is done that reschedules a list of waiting tasks. How a task terminates is entirely up to the author.

Each scheduled task is guaranteed never to be reentered by the scheduler.
That is, only one call to a task handler for a given task control structure will be run at a time, though a single handler function that handles more than one task control structure may be entered simultaneously, once per structure.

When a task executes, the handler is invoked and the parameter task given to the handler contains the same pointer value as was given to ggAddTask. The ncalls member will be updated to contain the number of calls, including the current call, which remain before the task is automatically deleted (or zero if the task will never be automatically deleted). Thus it is safe to call ggAddTask again to reuse the task control structure once the handler has returned with ncalls equal to 1. The lasttick member will contain the number of the LibGG scheduler tick being executed, which should increase monotonically unless a problem occurs as noted below, wrapping around modulo the value GG_SCHED_TICK_WRAP.

ggAddTask and ggDelTask may not be called from within a task handler; however, the task handler is free to alter the pticks and ncalls members in the task control structure task in order to change its period, or increase or decrease the number of calls before auto-deletion. For example, to cancel itself, a task need only set ncalls to 1 before returning. The task handler may also change its callback function or data hook members. A write memory barrier is included in the scheduler to prevent old values from being seen by other processors on SMP systems.

LibGG ticks are measured in real (wall clock) time, and LibGG makes every effort to ensure that drift due to runtime factors is kept to a minimum. When a process is suspended, however, LibGG ticks stop and resume where they left off. Likewise, when system utilization is very high or tasks are misused, the LibGG scheduler may fail to count ticks. The ggCurTime(3) function will still be accurate in these cases and can be used to detect such situations.
All scheduled LibGG tasks may in the worst case have to be run serialized, and may be postponed slightly while a call to ggAddTask or ggDelTask is in progress, so there may be some delay between the start of a LibGG tick and the actual execution of the task. This can be minimized by limiting the duties of task handlers to very short, quick operations. When utilization is high or tasks misbehave, the scheduler may elect simply not to call a task handler even though it is scheduled to be called on a given tick. This may happen either to all tasks or to select individual tasks. The "lasttick" member of the task control structure can be safely read from within a task handler in order to detect such a circumstance (it will always contain the current tick, but can be compared to a previously stored value).

Since LibGG tasks may be called in a signal handler or other non-interruptible context, they should not call ggLock(3) on any locks that may already be locked. In addition, there may be limits imposed on the functions which are safe to use inside task handlers (that is, only reentrant functions may be safe). More detailed information on using locks inside LibGG task handlers is contained in the manpage for ggLock(3).

Scheduled tasks will be canceled, in a somewhat precarious fashion, by a normal call to ggExit(3). As such, it is considered best practice to use ggDelTask to cancel tasks when gracefully deinitializing LibGG or a library that uses LibGG.

ggDelTask returns GGI_OK on success or:

ggTimeBase returns an integer between 1 and 1000000, inclusive, which represents the number of microseconds between each tick of the LibGG scheduler.

If the "-signum=n" option is present in the GG_OPTS environment variable when ggInit is first called, and LibGG is not compiled with threads support, the UNIX signal used by the scheduler may be selected. If n is not a valid signal for this purpose, the results are undefined, but should not be unsafe for SUID processes.
The default signal used is usually SIGPROF, but may be chosen differently based on the needs of the package maintainer for any particular LibGG distribution. Applications using LibGG are forbidden from using this signal for other purposes, whether or not tasks are used. If the "-schedthreads=numthreads" option is present in the GG_OPTS environment variable when ggInit is first called, and LibGG is compiled with threading support, the scheduler will create numthreads additional threads to call task handlers. The default is one additional thread. If numthreads is not valid or causes resource allocation problems, the results are undefined, but should not be unsafe for SUID (or other elevated privilege) processes.
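Putting the setup steps described above together, a minimal task registration might look like the following sketch (not from the original page; it assumes a LibGG development environment, and the tick counts are illustrative only):

```c
#include <string.h>
#include <ggi/gg.h>

/* Handler: keep it short and quick, as the manpage advises. */
static int my_handler(struct gg_task *task)
{
    int *counter = (int *)task->hook;   /* task-local data via the hook */
    (*counter)++;
    return 0;                           /* callbacks should always return 0 */
}

int schedule_counter(struct gg_task *task, int *counter)
{
    /* Zero the whole structure first, including the internal-use area. */
    memset(task, 0, sizeof(*task));

    task->cb     = my_handler;
    task->hook   = counter;
    task->pticks = 60;    /* e.g. once per second at 60 ticks per second */
    task->ncalls = 10;    /* auto-delete after the handler runs 10 times */

    return ggAddTask(task);
}
```

The structure must stay alive and untouched (except from within the handler) until the task auto-deletes or is removed with ggDelTask.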
http://www.makelinux.net/man/3/G/GG_SCHED_USECS2TICKS
With the announcement of the new .NET Framework, several new features have been introduced in the new version of C#. One of these is the concept of using a static class as a namespace via the using keyword. Typically, when we have static classes with methods, we call a method using the convention StaticClassName.MethodName. But with the new features in C# 6.0, we can call static methods directly, as if they were part of the same class, instead of using the StaticClassName.MethodName convention. Let's see how to do this. For our example, let's create a new console application and use the old convention of calling the WriteLine method, i.e. Console.WriteLine. Now in C# 6.0, we can call the WriteLine method directly. To do this, we first need to add the static class, in our case Console, to our usings. We can use the same feature with our own classes as well. Run the code and see the results. Happy coding.
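The code snippets the article refers to are missing from this copy; a sketch of what they would look like in C# 6.0 (class and method names assumed) is:

```csharp
// C# 6.0: import the members of a static class with "using static".
using static System.Console;

class Program
{
    static void Main()
    {
        // Old convention: System.Console.WriteLine("Hello, world!");
        // With "using static System.Console" in scope, WriteLine
        // can be called directly:
        WriteLine("Hello, world!");
    }
}
```

The same directive works for your own static classes: `using static MyApp.MathHelpers;` lets you call their methods unqualified.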
http://www.c-sharpcorner.com/UploadFile/b1df45/static-as-namespace-in-C-Sharp-6-0/
I called it cordova-3.1.x because it's about the 3.1.x version of Cordova, the CadVer, not the CLI's own version numbering. We can always make a new branch and drop the old one, if we like. Braden On Tue, Oct 1, 2013 at 3:38 PM, Steven Gill <stevengill97@gmail.com> wrote: > the branch for both CLI and Plugman is called cordova-3.1.x (don't know why > we didn't call it 3.1.x instead) > > > On Tue, Oct 1, 2013 at 12:04 PM, Jeffrey Heifetz <jheifetz@blackberry.com > >wrote: > > > It seems as though there is no cordova-cli 3.1.x branch, does this mean > we > > always release off of master? > > > > There is a bug I found where element tree needs to be bumped to the same > > version as plugman to support namespace xml elements and I'd like to know > > where to push this. > > > > Thanks, > > > > Jeff > > > > On 13-10-01 12:17 PM, "Steven Gill" <stevengill97@gmail.com> wrote: > > > > >Firefoxos is broken on master after the refactor. It works fine on the > > >cordova-3.1.x branch though. > > > > > >I am testing jesse's pull requests for CLI + plugman today for > > >cordova-3.1.x branch. If they look good I will merge them in. We should > > >release CLI + Plugman at the same time as 3.1.0. The release should be > > >based off cordova-3.1.x branch and not master. > > >On Oct 1, 2013 7:41 AM, "Andrew Grieve" <agrieve@chromium.org> wrote: > > > > > >> Braden's out doing intern interviews yesterday & today, so it's > unlikely > > >> he'll see this email until tomorrow if not Thursday. > > >> > > >> We could still do a tools release, but just don't update the > > >>platforms.js > > >> file to point at 3.1.0. That said, I think I saw Firefox was broken on > > >> HEAD? Think we'll want to fix that before doing so. 
> > >> > > >> In terms of testing the RC - as Steven said - using cordova@3.0.10 > > >>should > > >> do the trick > > >> > > >> > > >> On Tue, Oct 1, 2013 at 2:22 AM, Jesse <purplecabbage@gmail.com> > wrote: > > >> > > >> > I have an open pull request I would like someone else to look at. > [1] > > >> > This addresses the issue that Carlos brought up during plugin remove > > >>[2] > > >> > It would be great if this could be part of the release, or at least > > >>the > > >> > commit cherry picked into it. > > >> > > > >> > Cheers, > > >> > Jesse > > >> > > > >> > > > >> > > > >> > [1] > > >> > > > >> > [2] > > >> > > > >> > > > >> > > > >> > > > >> > > > >> > @purplecabbage > > >> > risingj.com > > >> > > > >> > > > >> > On Mon, Sep 30, 2013 at 4:12 PM, Steven Gill < > stevengill97@gmail.com> > > >> > wrote: > > >> > > > >> > > I believe Braden did the same type of release for plugman that he > > >>did > > >> for > > >> > > cordova cli. He updated both packages on npm but set the latest > tag > > >>to > > >> > > point to the previous version until we were ready to do our full > > >> release. > > >> > > > > >> > > If you install the CLI RC > > >> > > sudo npm install -g cordova@3.0.10, you actually get version > 0.12.0 > > >>of > > >> > > plugman as a dependency. > > >> > > > > >> > > I will wait for Braden to chime in. I figure the CLI and Plugman > > >>should > > >> > > both be released on the same day we release 3.1.0. > > >> > > > > >> > > > > >> > > > > >> > > On Mon, Sep 30, 2013 at 1:39 PM, David Kemp <drkemp@google.com> > > >>wrote: > > >> > > > > >> > > > +1 to a plugman npm release. > > >> > > > > > >> > > > David Kemp > > >> > > > > > >> > > > > > >> > > > > > >> > > > On Mon, Sep 30, 2013 at 4:34 PM, Steven Gill > > >><stevengill97@gmail.com > > >> > > > >> > > > wrote: > > >> > > > > > >> > > > > Anyone see any issue with me doing a npm plugman release > today? 
> > >> > Testing > > >> > > > the > > >> > > > > CLI RC is kind of weird when the plugman dependency is > > >> incompatible. > > >> > > > > > > >> > > > > -Ste. > > > > >
http://mail-archives.apache.org/mod_mbox/cordova-dev/201310.mbox/%3CCAMk5LLn+oVdpuDVGFviBX2KmpA0gYPWokX7dtWv274purWOPvA@mail.gmail.com%3E
My main target was to use the information about songs currently played in WinAmp with other programs, to show other people the songs that I listen to, or simply to see the title of a song that I do not remember. It is not easy, because I use WinAmp in a "stealth" mode: it's minimized in the system tray, and hidden among all the other icons (I love this WinXP feature). So basically, since I use hotkeys to run through songs, I had no way to easily display a song's name. I already knew I needed some DDE. The fact is that I was too lazy to code a plugin, so I just took one already made to be used with mIRC, which simply sends a message to it when a song is played. Given that, it is impossible to ask the plugin for information of any kind; rather, it's the plugin that communicates actively. And that wouldn't have been too much of a bother, if it weren't for streams. When a stream is played, the title can be updated, but this way the "active" DDE (which sends data only when a new song/stream is played) can't work. So, I threw away my laziness and started coding. I must admit it: I'm usually not a son of the "open source" philosophy. Not at all. Usually I keep my sources ultra tight to my notebooks, but this is an exception. And who knows, maybe just the beginning of a whole series of exceptions. But let's cut it short. Apart from the other things that made me release these sources (and my friend [SkiD] is certainly one of those reasons), there is the fact that I could find nowhere an easy overview of how to set up a DDE Server. I had to look through hundreds of pages, manual references, MSDN articles, to find the 10 lines of code that were needed. Amazing that nobody ever wrote a similar article before (or maybe I'm just too stupid to find it). Anyway, keeping aside the setup of the DDE Server, there are several other nice things in this package, and although some may turn out to be totally useless to most, they can still be handy to some. Let's get this party started...
For the ones of you who don't know this, DDE is an acronym for Dynamic Data Exchange. It can be used to let two applications communicate with each other. As in all connection-style protocols, there is a Server and a Client, and here we will analyze how to set up a Server. The main reason why there's no client setup here is because I never coded one. As simple as that. I have used an already VB-coded DDE Client that I created a long while ago. Off with the chitchats.

#include <ddeml.h>

Above all, I would like to ask you to excuse me if the lines are too long for your monitor resolution: I use a 1400x1050 resolution, and as already stated before this is my first article, so... have mercy on me...

With DdeInitialize we initialize a DDE instance which acts in a standard way (APPCLASS_STANDARD), accepts only XTYP_REQUEST and XTYP_POKE transactions (CBF_FAIL_ADVISES | CBF_FAIL_EXECUTES | CBF_FAIL_SELFCONNECTIONS) and refuses any kind of notification (CBF_SKIP_ALLNOTIFICATIONS). And not to be forgotten, we pass the callback function address in this initialization step.

In order to use the DDE APIs, we also need strings in a particular format, and to do so we call DdeCreateStringHandle, which simply returns a handle to an HSZ, a DDE-formatted string. We need these strings to register the server name and, later, to check the topics and such.

Last of all, when the strings are ready, we register the name. The name is needed to let the other programs communicate with the server: just like a server listening on IP:Port on the internet, a DDE server usually listens on Name|Topic. In this case, I placed the code in the DllMain function in answer to a DLL_PROCESS_ATTACH reason. This allows us to avoid the DLL being loaded if we are not able to start a DDE Server, so basically if the plugin would be useless.
This procedure can be used in any DLL: if DllMain returns false, then the system stops its loading.

The uninitialization is very simple:

void quit()
{
    //
    // Unregisters DDE Server and save configuration
    //
    DdeNameService(idInst, DDEServerName, 0, DNS_UNREGISTER);
    DdeFreeStringHandle(idInst, DDETopicEvaluate);
    DdeFreeStringHandle(idInst, DDEServerName);
    DdeUninitialize(idInst);
    [...]
}

The code talks by itself. Now we have got the initialization and the uninitialization, but what's missing is the Callback function:

HDDEDATA CALLBACK DdeCallback(UINT uType, UINT uFmt, HCONV hconv,
                              HSZ hszTopic, HSZ hszItem, HDDEDATA hdata,
                              DWORD dwData1, DWORD dwData2)
{
    HDDEDATA ret = 0;

    switch (uType)
    {
    //
    // Allow DDE connections
    //
    case XTYP_CONNECT:
        ret = (HDDEDATA)true;
        break;

    //
    // Answer the DDE Requests. If the Topic is correct, obviously :P
    //
    case XTYP_REQUEST:
        //
        // We use the DdeCmpStringHandles instead of the strcmp
        // for two reasons: 1st we already have the strings in HSZ
        // format, 2nd this way it's case insensitive as the DDE
        // protocol is supposed to be.
        //
        if (DdeCmpStringHandles(hszTopic, DDETopicEvaluate) == 0)
        {
            [...]
            ret = DdeCreateDataHandle(idInst, (BYTE*)"\1Unknown Error...",
                                      18, 0, hszItem, CF_TEXT, 0);
        }
        break;

    //
    // No pokes this time...
    //
    case XTYP_POKE:
        // Return FACK if poke message is processed
        // ret = DDE_FACK;
        ret = DDE_FNOTPROCESSED;
        break;
    }
    return ret;
}

Since I believe I never talked about what the difference between a XTYP_POKE and a XTYP_REQUEST could/should be, I will do that now. Usually (at least from what I have seen until now), REQUESTs are used to get information from the server. In fact, the return value of the DdeCallback should either be a handle to DDE data (HDDEDATA) created through DdeCreateDataHandle, or NULL. POKEs instead are used to "throw" commands, or at least tell the server to do a certain thing. If the server actually does that, it returns DDE_FACK, otherwise it has to return DDE_FNOTPROCESSED.
The thing I did not mention here is about XTYP_REQUEST: there is a particular condition in which you have to return a certain value, and that should be done when the calculation of the result requires some time. But I'd leave that case to you and MSDN.

The XTYP_POKE case is pretty self-explanatory, so I'll avoid commenting. Every time you receive a call, the first thing you have to do is check whether the topic is valid or not. In this example we have EVALUATE only, so the job is easy. A call to DdeCmpStringHandles is enough to see if the topic is valid, and if it is we can simply create a result data handle and return it. This is done by calling DdeCreateDataHandle and returning its result.

I guess this is all for the DDE.

... this is the question...

As I have stated in the Readme contained in the source package, I am using the latest Intel C++ compiler with the VS IDE. Given that, I do not know if the problems that I have with the inline name export technique are due to that: I just know I do have a problem. What's this problem, you might ask? Let's take a look at it:

__declspec( dllexport ) winampGeneralPurposePlugin*
winampGetGeneralPurposePlugin()
{
    return &plugin;
}

Easy, isn't it? It would be, if it worked fine. The function is actually exported in the DLL, but the problem is that while we need the name to be exactly winampGetGeneralPurposePlugin, the compiler decides to use the decorated function name, and the result is this:

?winampGetGeneralPurposePlugin@@YAPAUwinampGeneralPurposePlugin@@XZ

That does not really seem like winampGetGeneralPurposePlugin to me, does it? It is known that we have the possibility to use pragmas to do the job.
The new version of that would be:

winampGeneralPurposePlugin *winampGetGeneralPurposePlugin()
{
    return &plugin;
}

#pragma comment(linker, "/EXPORT:winampGetGeneralPurposePlugin=?winampGet[...]@@XZ")

I have cut the function name to make the line fully visible at lower resolutions, but you would need the full name, of course... The export works like this: "/EXPORT:ExportName=FunctionName". This allows us to export any function with any name. There are two things to underline: the first is that the /EXPORT: option is not as simple as it is used here, but as always I leave that to you to learn with MSDN if you are interested (in this case, take a look here); the other is that there are other methods to export names, as well as the always recommended usage of the .DEF file, but that is not my style. If you want to learn about the .DEF... well... go to MSDN again. This link may be of help to you.

If the world is not enough, go figure if a goto is so. I'll make it short and describe what my problem is like.

int main()
{
    bool error = true;

    if (error)
        goto Errore;

    int ret = 1;
    return ret;

Errore:
    return 0;
}

Now... although this snippet is pretty useless, it describes the problem. While compiling this, you are most likely to get an error, because the compiler sees that the goto might skip the initialization of the variable named ret. Even though this is correct, and the problem can be easily resolved by writing "int ret;" right after the "error" declaration, it's also true that in several cases (including a lot of my sources) the "politically correct" code might be a little bit harder to read and understand.

It might be a matter of taste, I perfectly know that, and I'm also aware of the fact that I have strange tastes. But anyway, in my code I like to do what I want to, and not what the compiler tells me to.
The problem can be fixed in the following way:

#define goto __asm jmp

With this, the compiler now sees the goto as an assembly jump and not a C statement, therefore it is not subject to compiler checks. With this trick you can easily avoid the troubles in the above snippet. For all of you who don't know, jmp is the assembly instruction equivalent to the C, Basic and Pascal goto. Nothing too hard to understand here.

This is as powerful as it is dangerous: when it comes to code mistakes, this can seriously make you waste a lot of time trying to hunt down a bug. But if you can manage to use it wisely, it can save your day. It happened to me once that I had troubles with an error like that. Some of you might say that for small things like that, even a try-throw-catch block could save the day. Little do you know that try blocks cost, and really do cost a lot in terms of CPU, when it comes to calls inside the block itself. I will not discuss that here, but I, as a damascene coder, always look for the quickest, smallest and definitely best way to solve a technical problem like this one. And so far, I have never seen anything better. Although I do not want anything else.

By the way, in the way described above you could change every single language operator. I do not recommend it, or at least I do not see any reason to do that, but it is possible. And after all, all I've been trying to do with this code is to show you different possible solutions to common problems. Or to what I consider common problems.

All in all, I hope that with this article you have learnt something new, or at least something useful in some way. There are also other things inside the package that I have not explained here, but I believe that the commented sources and the readmes in the source package are more than enough.
So, I guess that this is really the end of my first article for Code Project, and all in all I must say that I'm glad that, for the first time, I have contributed my knowledge to a community that on more than a couple of occasions has helped me a bit. These are my 5 cents... if you like this article, and found it useful or interesting, let me know, as well as if you find any problems with my code or just anything else. I'm considering starting to release part of my sources, and I believe that the greatest part of the game will be played by the feedback I get from this. Well... so long...

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here.

When he was 2 years old some friends of his mother made him play with arcade machines, and it was almost clear what his path would be. When he was 10 years old he was given a 80286 as a birthday present, and from that day on it's been a crescendo of programming techniques. He started with ms-dos batch files, which in a short while got interfaced with .com files created with debug.exe, and then gw-basic, turbo pascal, quick basic, assembly, vb, vc++, delphi, java, plus a bunch of scripting languages, php included.

His other main passion is music and art in general, and he has been awarded several times for his graphics and songs. He founded and still leads a demogroup named 3D, which stands for DBC Demo Division, and from time to time he takes the chance to attend demoparties and, eventually, submit artworks to some of the competitions offered by the different parties.

He's currently working as a sales agent. Nothing to do with computers, but still he has to earn some money to buy some food...
DWORD DdeGetData(
    HDDEDATA hData,
    LPBYTE pDst,
    DWORD cbMax,
    DWORD cbOff
);

char buffer[1024];
DdeGetData( hData, &buffer[0], 1024, 0 );

Private Sub DDE_KeyDown(KeyCode As Integer, Shift As Integer)
    If KeyCode = 116 And Shift = 0 Then
        On Local Error Resume Next
        Query.Text = ""
        Query.LinkTopic = "WinAmp|EVALUATE"
        Query.LinkMode = 2
        Query.LinkItem = "File: %FILE%" & vbCrLf & "Title: %TITLE%"
        Err.Clear
        Query.LinkRequest
        If Err Then
            DDE = "Error #" & Err & ":" & vbCrLf & Err.Description
        Else
            If Len(Query.Text) <> 0 Then
                If Asc(Left(Query.Text, 1)) = 1 Then
                    If Query.Text = Chr(1) + "[ NOTHiNG ]" Then
                        DDE.Text = "No files currently played in WinAmp"
                    Else
                        DDE.Text = "Server error:" & vbCrLf & """" & Mid(Query.Text, 2) & """"
                    End If
                Else
                    DDE.Text = Query.Text
                End If
            End If
        End If
    End If
End Sub

BOOL APIENTRY DllMain( HANDLE hModule,
                       DWORD ul_reason_for_call,
                       LPVOID lpReserved)
{
    //
    // You should check the "WinAmp DDE.h" to understand this
    // particular usage of the "goto", since it's not the
    // standard one, being overridden.
    //
    MessageBox(NULL,"Inside DLL","WinAmp DDE ",MB_ICONSTOP);
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        MessageBox(NULL,"Inside DLL_PROCESS_ATTACH","WinAmp DDE ",MB_ICONSTOP);

extern "C" __declspec( dllexport ) winampGeneralPurposePlugin*
winampGetGeneralPurposePlugin()
{
    return &plugin;
}
http://www.codeproject.com/Articles/10522/WinAmp-DDE?msg=2474286
RedmineWikiFormatting » History » Version 235

Version 235/248 (diff) - Current version
Jeremy Tang, 08/11/2011 04:18 PM

RedmineWikiFormatting¶

Table of Contents¶

The table of contents below was typed by hand using Bulleted Lists and Anchors. The yellow table of contents to the right was automatically generated using the {{toc}} tag (technically the {{>toc}} tag, which aligns the table to the right), but it only makes links to Headers and not to Anchors.

- Table of contents
- RedmineWikiFormatting
- Basic: Text Formatting
- Intermediate: Wiki Formatting
- Advanced: Formatting with p-tags
- Advanced: Formatting with Styles
- Lists
- Tables
- Other Things

Basic: Text Formatting¶

Bold: Put double-asterisks around the text you want to bold. For example, **text** results in text.

Italics: Put double-underscores around the text you want italicized. For example, __text__ results in text.

Underline: Put plus signs around the text you want underlined. For example, +text+ results in text.

Unfortunately, there is a limit as to how many format tags can be applied at a time; attempting to make superscripted italicized bolded underlined strikethroughed text results in ****.

Intermediate: Wiki Formatting¶

Headers: Start any header you want with h1. h2. or h3. For example, "h2. Intermediate: Wiki Formatting" resulted in the header above. Use of headers h4. h5. and h6. is discouraged, as they don't generate Anchors and therefore do not show up in an automatically generated Table of Contents. Also, their font is way too small.

To resize an image to be 100px wide and 100px tall, add {width:100px;height:100px} after the first ! but before the image URL. For example, !{width:100px;height:100px}! results in

To align an image to the right, add a > after the first ! but before the image URL. For example, !>! results in the image to the right.

Tooltipped Hyperlinked Images: !
results in:

Attachments: At the bottom of this page is an attachments section. To link to an attachment in that section, simply type attachment:fermilab_logo.jpg, where fermilab_logo.jpg is the name of the attachment. In case you were wondering, here's what the attachment link looks like: fermilab_logo.jpg. Redmine does not natively support linking to attachments from other wiki pages.

Image Attachments: Embedding an image attachment is just like embedding an image from a URL; just put the name of the attached image between exclamation points. For example, !fermilab_logo.jpg! results in this:

Redmine does not natively support embedding image attachments from other wiki pages. For more information about images in general, see Images.

Linking to Other Wiki Pages: Typing [[Overview]] results in Overview. That's all it takes to create a quick hyperlink to the Fermilab Redmine Overview wiki page. In the event that you want to link to the Overview wiki page but want the link text to be "OV", type [[Overview|OV]], which results in OV.

Anchors: Creating a Header automatically creates an anchor, where the anchor tag is the same as the header text except with the spaces replaced with hyphens and punctuation removed. Typing [[RedmineWikiFormatting#Table-of-Contents|top of this page]] will create a link that goes all the way back to the table of contents at the top. When a link such as [[RedmineWikiFormatting#AnchorLink|AnchorLink]] is typed, it results in this AnchorLink.

AnchorLink¶

Advanced: Formatting with p-tags¶

Lines separated only by a single line break are considered to be part of the same "paragraph". There is an indentation p-tag in front of the first line of this paragraph, and that tag indents the whole paragraph. p((. Attempting to put another p-tag, such as a double-indent, on a different line of the same paragraph doesn't work. Lines separated by two line breaks are considered to be separate paragraphs. Here, putting a double-indent p((. in front of this separate paragraph works.

Advanced: Formatting with Styles¶

What are Styles? Styles are a set of formatting tags that are enclosed in {curly braces}.
They can be inserted into p-tags, but they can also be inserted into span tags. Span tags only format a word or phrase from a paragraph, while the p-tag formats the entire paragraph. Styles can also be found in Tables and even in Images.

Spans: Spans are capable of applying attributes such as color and font size to individual words or phrases within a paragraph. The span tag itself is nothing more than a % symbol. A boring span can be made by surrounding a phrase with two % symbols. Nothing to it. The fun comes when styles come into play. For example, typing %({color:white;background:black})awesomeness% results in awesomeness.

Colored Text & Highlighting: Everyone loves colored text. This text was made blue by putting this tag in front of the first line of this paragraph: p{color:blue}. Similarly, this text was made this funky color by putting p{color:#CBA321} in front of this paragraph. This text was highlighted by putting the p{background:yellow}. tag in front of it. This is large green text aligned to the right with two indents at the left and four indents at the right. Note the semicolon between the color:green and the font-size:150%.

Lists¶

Bulleted Lists¶

Tables¶

Basic Tables: Basic tables are relatively easy. They are created using the vertical line symbol. For example, |Unit 1|Unit 2|Unit 3| results in:

For multidimensional tables, just add a line break.

|Apple|Balloon|
|Cat|Dog|

results in:

Table Titles: Putting _. in front of a table entry will turn the entry into a title. Which, admittedly, is the same thing as bolding that particular entry (see Bold).

Putting the {background:...} tag inside a cell of a table colors the background of that single cell. For example,

|{background:red}. red|white|
|white|{background:cyan}. blue|

results in

Table Borders: This tag changes the thickness, type, and color of the border of the table. {border:5px dotted orange}.

|has|border|
|no|border|

results in

Attributes for the type of border include solid, dashed, and dotted.
Text Alignment in Tables|<. align left | |>. align right| |=. center | |<>. justify | |^. align top | |~. align bottom | results in: Table Cell Size Putting the \#. tag in front of an entry of a table cell makes that particular table cell # times as wide as a normal cell. This table: |\2. Two cells wide| |column 1|column 2| results in this: Similarly, putting the /#. tag in front of an entry of a table cell makes that particular table cell # times as tall as a normal cell. This table: |3|4| |5|6| |7|8| results in this: Combining Table Tags By combining different table tags, some crazy tables can be constructed. This table, |={background:gray;border:dashed silver}. X|\3. *Columns*| |/4. *Rows*|1|2|3| |2|4|6| |3|6|9| |4|8|12| results in this: Lists in Tables? Unfortunately, it is not possible to put a list inside a table (see Lists) Other Things¶ Notes. Redmine Source Code For This WikiPage If you want the source code for this WikiPage for any reason, feel free to download it from the attachments below, or by clicking this link: RedmineWikiFormatting.redmine -- Jeremy Tang - Summer 2011
https://cdcvs.fnal.gov/redmine/projects/fermi-redmine/wiki/RedmineWikiFormatting/235
As I'm wont to do on a Sunday evening, I had a chance to play around with a few Amazon services. It started with a general question/challenge: "Can I do OCR on a non-text PDF with Amazon Rekognition using the detect_text function?" The short answer is yes. The long answer is sort of... Rekognition only returns 50 WORD or LINE text elements per image. So at this point in time, if you need a cloud-based OCR service that can handle dense text (like you would have in most PDF documents), Google Vision is probably a better choice.

For this exercise you will need:
- An AWS developer account
- A spare EC2 instance
- An IAM role assigned to your EC2 instance allowing S3 and Rekognition services
- An S3 bucket we can write to
- Python 3.x
- ImageMagick (OSS suite for image manipulation/conversion)
- Wand (ImageMagick wrapper for Python)
- Boto3 (AWS SDK for Python)

For this experiment I used the standard Amazon Linux AMI (CentOS).

Start by updating yum:

sudo yum update -y

Next install Python 3.x (we'll use Python 3.6 here):

sudo yum install python36 -y

Next install ImageMagick:

sudo yum install ImageMagick-devel

Then install wand and boto3:

sudo pip-3.6 install wand boto3

IAM Policy

Now let's make sure the IAM policy is set up correctly and attached to your EC2 instance: In EC2, select your running instance and choose Actions->Instance Settings->Attach/Replace IAM Role.

Since you probably don't have a specific role just for S3 and Rekognition, go ahead and choose 'Create New IAM Role'. Then follow these simple steps:
- Click 'Create Role'
- Choose 'EC2', then click the 'EC2' use case and click the 'Next: Permissions' button
- Search for 'AmazonS3FullAccess' and 'AmazonRekognitionFullAccess' and select their checkboxes. Then click the 'Next: Review' button.
- Finally, give your IAM role a name and click 'Create Role'

Now back at the 'Attach/Replace IAM Role' screen, hit the refresh button and choose the newly created role in the dropdown. Your EC2 instance can now make trusted calls to S3/Rekognition!
And here's the quick and dirty code (this was a 20-30 minute hack for proof of concept – yes, we would be catching errors, etc. if we were going to stand this up or roll it into an application):

import boto3
import json
from wand.image import Image
from wand.color import Color

fname = "./sample.pdf"

# Take apart PDF into individual image for each page
page_images = []
with Image(filename=fname, resolution=300) as pdf:
    for i, page in enumerate(pdf.sequence):
        with Image(page) as img:
            img_name = "image_{}.png".format(i+1)
            img.format = 'png'
            img.alpha_channel = False  # Set to false to keep white background
            img.save(filename=img_name)
            page_images.append(img_name)

# Connect to S3 and upload images to OCR
s3 = boto3.resource('s3')
bucket = '<S3_BUCKET_NAME>'
for image in page_images:
    print("Uploading {} to S3...".format('./' + image))
    s3.meta.client.upload_file('./' + image, bucket, image)

# Call Rekognition to detect_text
rekog = boto3.client('rekognition', '<EC2_REGION_NAME>')
img_responses = {}
for image in page_images:
    response = rekog.detect_text(
        Image={
            "S3Object": {
                "Bucket": bucket,
                "Name": image,
            }
        }
    )
    # Write JSON response from Rekognition text detection out to file
    with open(image + ".json", 'w') as outfile:
        json.dump(response, outfile)
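Rekognition's detect_text response is a dict with a TextDetections list, where each element is either a WORD or a LINE (and, as noted above, the service caps the number of elements it returns per image). Once the per-page JSON files are written, reassembling readable page text is a matter of filtering for LINE detections and sorting them by vertical position. A minimal sketch — note the sample response dict here is a hand-made stand-in for illustration, not real API output (the real response carries more fields, such as Confidence and full Geometry polygons):

```python
# A trimmed stand-in for a DetectText response (illustrative only).
response = {
    "TextDetections": [
        {"DetectedText": "Hello world", "Type": "LINE",
         "Geometry": {"BoundingBox": {"Top": 0.10, "Left": 0.05}}},
        {"DetectedText": "Hello", "Type": "WORD",
         "Geometry": {"BoundingBox": {"Top": 0.10, "Left": 0.05}}},
        {"DetectedText": "Second line", "Type": "LINE",
         "Geometry": {"BoundingBox": {"Top": 0.25, "Left": 0.05}}},
    ]
}

def lines_from_response(resp):
    """Pull only LINE detections and order them top-to-bottom on the page."""
    lines = [d for d in resp["TextDetections"] if d["Type"] == "LINE"]
    lines.sort(key=lambda d: d["Geometry"]["BoundingBox"]["Top"])
    return [d["DetectedText"] for d in lines]

page_text = "\n".join(lines_from_response(response))
print(page_text)
```

In a real run you would load each image's .json file written by the script above and feed the parsed dict to lines_from_response.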
http://outofmyhead.olssonandjones.com/2018/02/25/python-driven-ocr-text-detection-with-amazon-rekognition/
Single Layer Neural Network : Adaptive Linear Neuron using linear (identity) activation function with batch gradient method

In this tutorial, we'll learn another type of single-layer neural network (still a perceptron) called Adaline (Adaptive Linear Neuron), also known as the Widrow-Hoff rule. The key difference between the Adaline rule and Rosenblatt's perceptron is that the weights are updated based on a linear activation function rather than a unit step function as in the perceptron model.

(Diagrams: Perceptron; Adaptive linear neuron)

The difference is that we're going to use the continuous valued output from the linear activation function to compute the model error and update the weights, rather than the binary class labels.

The perceptron algorithm enables the model to automatically learn the optimal weight coefficients that are then multiplied with the input features in order to make the decision of whether a neuron fires or not. In supervised learning and classification, such an algorithm could then be used to predict if a sample belongs to one class or the other. In a binary classification task, we refer to our two classes as either 1 (positive class) or -1 (negative class). We can then define an activation function $\phi(z)$ that takes a linear combination of certain input values $x$ and a corresponding weight vector $w$, where $z$ is the so-called net input ($z = w_1x_1 + ... + w_mx_m$):

$$w=\begin{bmatrix} w_1 \\ \vdots \\ w_m \end{bmatrix}, \quad x=\begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix}$$

For our case (binary), we can have:

$$\text{if} \quad \sum_{i=1}^{m}w_ix_i > \theta \quad \text{then} \quad \phi = 1, \qquad \text{otherwise} \quad \phi = -1$$

If we set $x_0=1$ and $w_0=-\theta$, we can have a more compact form:

$$\text{if} \quad \sum_{i=0}^{m}w_ix_i > 0 \quad \text{then} \quad \phi(z) = 1, \qquad \text{otherwise} \quad \phi(z) = -1$$

where $$z=\sum_{i=0}^{m}w_ix_i$$

Or, in more compact form:

- Heaviside step activation function:
$$\phi(z) = \begin{cases}1 & z > 0 \\ -1 & \text{otherwise} \end{cases}$$
- Linear activation function:
$$\phi(z) = z$$

Update of each weight $w_j$ in the weight vector $w$ can be written as:
$$w_j := w_j + \Delta w_j$$

The value of $\Delta w_j$, which is used to update the weight $w_j$, is calculated as follows:

- Heaviside step activation function:
$$\Delta w_j=\eta(y^{(i)}-\hat y^{(i)})x_j^{(i)}$$
- Linear activation function:
$$\Delta w_j=-\eta \frac{\partial J}{\partial w_j}$$

where $\eta$ is the learning rate ($0.0 < \eta < 1.0$), $y^{(i)}$ is the true class label of the i-th training sample, and $\hat y^{(i)}$ is the predicted class label.

One of the most critical tasks in supervised machine learning algorithms is to minimize the cost function. In the case of the adaptive linear neuron, we can define the cost function $J$ to learn the weights as the Sum of Squared Errors (SSE) between the calculated outcome and the true class label:

$$J(w)=\frac{1}{2}\sum_{i}(y^{(i)}-\phi(z^{(i)}))^2$$

Compared with the unit step function, the advantages of this continuous linear activation function are:

- The cost function is differentiable.
- Because it is convex, we can use a simple and powerful optimization algorithm called gradient descent to find the weights that minimize our cost function to classify the samples in the Iris dataset.

The partial derivative of the SSE cost function with respect to the j-th weight is:
$$\frac{\partial J}{\partial w_j}=-\sum_{i}(y^{(i)}-\phi(z^{(i)}))x_{j}^{(i)}$$

where $i$ is the sample # and $j$ is the feature # (dimension of a dataset).

Although the adaptive linear learning rule looks identical to the perceptron rule, the $\phi(z^{(i)})$ with $z^{(i)}=w^Tx^{(i)}$ is a real number and not an integer class label. Also, the weight update is calculated based on all samples in the training set, instead of updating the weights incrementally after each sample: we minimize the cost function by taking a step in the opposite direction of a gradient that is calculated from the whole training set, and this is why this approach is also called batch gradient descent.

Since the perceptron rule and the adaptive linear neuron are very similar, we can take the perceptron implementation that we defined earlier and change the fit method so that the weights are updated by minimizing the cost function via gradient descent. Here is the source code:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv('', header=None)
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
X = df.iloc[0:100, [0, 2]].values

fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))

# learning rate = 0.01
aln1 = AdaptiveLinearNeuron(0.01, 10).fit(X,y)
ax[0].plot(range(1, len(aln1.cost) + 1), np.log10(aln1.cost), marker='o')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('log(Sum-squared-error)')
ax[0].set_title('Adaptive Linear Neuron - Learning rate 0.01')

# learning rate = 0.0001
aln2 = AdaptiveLinearNeuron(0.0001, 10).fit(X,y)
ax[1].plot(range(1, len(aln2.cost) + 1), aln2.cost, marker='o')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('Sum-squared-error')
ax[1].set_title('Adaptive Linear Neuron - Learning rate 0.0001')

plt.show()

As we can see in the resulting cost function plots below, we have two different types of issues.
The left one shows what could happen if we choose a learning rate that is too large. Instead of minimizing the cost function, the error becomes larger in every epoch because we overshoot the global minimum. On the other hand, we can see that the cost decreases for the plot on the right side. That's because the learning rate we chose, $\eta=0.0001$, is so small that the algorithm would require a very large number of epochs to converge.

The following figure demonstrates how we change the value of a particular weight parameter to minimize the cost function $J$ (left). The figure on the right illustrates what happens if we choose a learning rate that is too large: we overshoot the global minimum. (Picture from "Python Machine Learning" by Sebastian Raschka, 2015.)

Feature scaling is a method used to standardize the range of independent variables or features of data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step. Gradient descent is one of the many algorithms that benefit from feature scaling. Here, we will use a feature scaling method called standardization, which gives our data the property of a standard normal distribution.

In machine learning, we can handle various types of data, e.g. audio signals and pixel values for image data, and this data can include multiple dimensions. Feature standardization makes the values of each feature in the data have zero mean (by subtracting the mean in the numerator) and unit variance. This method is widely used for normalization in many machine learning algorithms (e.g., support vector machines, logistic regression, and neural networks). This is typically done by calculating standard scores. The general method of calculation is to determine the distribution mean and standard deviation for each feature. Next we subtract the mean from each feature. Then we divide the values (mean is already subtracted) of each feature by its standard deviation.
- from Feature scaling

So, to standardize the $j$-th feature, we just need to subtract the sample mean $\mu_j$ from every training sample and divide it by its standard deviation $\sigma_j$:

$$ x_j^\prime=\frac {x_j-\mu_j}{\sigma_j}$$

where $x_j$ is a vector consisting of the $j$-th feature values of all training samples $n$. We can standardize by using the NumPy methods mean and std:

X_std = np.copy(X)
X_std[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
X_std[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()

After the standardization, we will train the Adaline model again using the not-so-small learning rate of $\eta = 0.01$. Here is our new code for the two pictures above:

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib.colors import ListedColormap

# learning rate = 0.01
aln = AdaptiveLinearNeuron(0.01, 10)
aln.fit(X_std,y)

# decision region plot
plot_decision_regions(X_std, y, classifier=aln)
plt.title('Adaptive Linear Neuron - Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc='upper left')
plt.show()

plt.plot(range(1, len(aln.cost) + 1), aln.cost, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Sum-squared-error')
plt.show()

Continued to Single Layer Neural Network : Adaptive Linear Neuron using linear (identity) activation function with stochastic gradient descent (SGD).
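Two pieces that the scripts above call appear to have been dropped from this copy of the page: the AdaptiveLinearNeuron class itself and the plot_decision_regions helper. The sketches below are reconstructions consistent with the equations and the calls in the text — a constructor taking the learning rate and epoch count, a cost list holding the SSE per epoch (as the plotting code expects), and a decision-region plot in the style of Raschka's book — not the author's original code:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

class AdaptiveLinearNeuron:
    """Adaline classifier trained with batch gradient descent."""
    def __init__(self, rate=0.01, niter=10):
        self.rate = rate      # learning rate eta
        self.niter = niter    # number of epochs

    def fit(self, X, y):
        self.weight = np.zeros(1 + X.shape[1])   # weight[0] is the bias unit
        self.cost = []
        for _ in range(self.niter):
            output = self.net_input(X)           # linear activation: phi(z) = z
            errors = y - output
            # batch update: computed from the whole training set at once
            self.weight[1:] += self.rate * X.T.dot(errors)
            self.weight[0] += self.rate * errors.sum()
            self.cost.append((errors ** 2).sum() / 2.0)  # SSE cost J(w)
        return self

    def net_input(self, X):
        return np.dot(X, self.weight[1:]) + self.weight[0]

    def predict(self, X):
        return np.where(self.net_input(X) >= 0.0, 1, -1)

def plot_decision_regions(X, y, classifier, resolution=0.02):
    """Evaluate the classifier on a mesh over the two features and draw it."""
    markers = ('s', 'x', 'o', '^', 'v')
    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
    cmap = ListedColormap(colors[:len(np.unique(y))])
    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    plt.contourf(xx1, xx2, Z.reshape(xx1.shape), alpha=0.4, cmap=cmap)
    plt.xlim(xx1.min(), xx1.max())
    plt.ylim(xx2.min(), xx2.max())
    for idx, cl in enumerate(np.unique(y)):  # overlay the training samples
        plt.scatter(X[y == cl, 0], X[y == cl, 1], alpha=0.8,
                    c=colors[idx], marker=markers[idx], label=cl)
```

With these definitions in scope, the driver scripts above run as written; on the tiny AND-style dataset used below for a sanity check, the per-epoch SSE in model.cost decreases monotonically for a small learning rate.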
http://www.bogotobogo.com/python/scikit-learn/Single-Layer-Neural-Network-Adaptive-Linear-Neuron.php
It has been just over a year since Windows Phone 7 (WP7) was released. While the WP7 application ecosystem has been expanding quickly (in comparison to its market share), some developers have concerns about the future of the platform, particularly with regards to the upcoming Windows 8. The good news for WP7 developers is that working with native Windows 8 applications (i.e., those built with the Metro UI and the WinRT API) is extremely similar to writing WP7 applications. I'll walk you through the process. In a nutshell, the WinRT API has the majority of the calls that are available in the WP7 Silverlight APIs; there are some differences, though not many. For example, the graphics items, especially XNA, are not available. Push notifications are specific to WP7 and will not work in Windows 8, and there are some differences in Live Tiles. Read the Microsoft documentation to get a better idea of the differences. The first thing to do is to get a copy of the Windows 8 Developer Preview up and running, and a copy of the Visual Studio 11 (VS11) Developer Preview (an Express version comes with the Windows 8 Developer Preview) installed. For my work here, I am using a full version of VS11 installed directly within Windows 8; I don't believe it's a great idea to mix and match preview copies of Visual Studio with my main production work machines, but that's just personal preference. I pushed a copy of an existing application to the Windows 8 machine. The first hitch I found was that VS11 did not like the solution from my WP7 project. Trying to create a new solution in VS11 showed that there was no WP7 solution types installed. No worries, I just went ahead and created a new solution. I tried to add the existing project to the solution, but no dice. What did work was doing an "add existing items" to the project, and then picking all of the class files, graphics, XAML, and codebehind files from the project I wanted to convert. 
I had to close the default MainPage.xaml that was open so it would overwrite. I suppose that if my previous project had direct changes to App.xaml or App.xaml.cs I would need to cautiously merge them. Then I did a find/replace to change the references to my old project's namespace to the new project's namespace. I tried to do a Rebuild, but it failed (as expected) due to all of the places that I was using references to the WP7 APIs instead of the Windows 8 APIs. Not a problem. Using the migration documentation and the compilation errors, I replaced the bad references with the right ones in the codebehind files. In the XAML files, a bit more work was needed. I replaced the <phone> tag with <UserControl> (and updated the inheritance on the codebehind file). I also discovered that the references to namespaces in the XAML are a bit different. The next pain point was in the UI. The biggie was that the Pivot control does not exist in Windows 8. In fact, the entire UI paradigm is different enough that it may be easier to make a new page entirely, and then try to cram the controls from the old page into the new one, or just start from scratch entirely on the UI side. Ugh. This is when you really value the separation of UI from code. Another oddity is the lack of the InputScope attribute on the TextBox control. This is a valuable item, especially on the tablets that Windows 8 hopes to capture. Outside of these changes, the biggest kick in the pants was regarding IsolatedStorage. I had to switch to Windows.Storage.ApplicationData.LocalSettings, which is backed by the registry. Instead of the simplified settings system, now you need to create ApplicationDataContainers, and instead of directly accessing them by name, you need to burrow into the tree of containers and pull the "Values" property and index your settings within that. Yes, it's more capable, but it's a big pain in the neck compared to the simple settings in WP7.
Overall, the conversion is not quite as rosy as the MSDN documentation suggests, essentially because the documentation omits any of the issues around the UI. Once you get past the UI end of things, the conversion is not bad — it's mostly tedium, to the point where compiler directives and XSLT can handle much of it. The changes to the UI controls are not fun. That said, Windows 8's form factors are different enough from WP7's that you would want to change the UI anyway, and if you are using the MVVM pattern (like you probably are for any application of even moderate complexity), the issues are much less painful.

J.Ja

Justin James is the Lead Architect for Conigent.
http://www.techrepublic.com/blog/software-engineer/porting-a-wp7-app-to-windows-8/?count=50&view=collapsed
Windows 2000 & Windows NT 4 Source Code Leaks 2764 PeterHammer writes "Neowin.net is reporting that Windows 2000 and Windows NT source code has been leaked to the internet. More on this as we hear it." This is an unauthorized cybernetic announcement. Open Source (Score:5, Funny) Full Article Text (Score:0, Funny) The server is too busy at the moment. Please try again later. What now? (Score:5, Funny) "We fix bugs in 24 to 40 hours, much faster than OSS." Hmmm... (Score:4, Funny) Seriously, this should be pretty interesting. I wonder how many bugs are ACTUALLY in the NT kernels... What's the big deal? (Score:5, Funny) Instead of the sky falling... (Score:2, Funny) Maybe this will be positive for all of mankind! Or maybe I'm crazy. New Licensing Model (Score:5, Funny) Simpsons mode equals one (Score:3, Funny) Leaky (Score:2, Funny) Oh wait a sec...8-) Error message (Score:3, Funny) The server is too busy at the moment. Please try again later. Yep, looks like an error. Must be real Windows code then... Maybe they will rethink Open Source... (Score:5, Funny) -S Damn.... (Score:1, Funny) Oh, say...If the code to IE is REALLY in there, can we have some smart, talented hacker PLEASE fix all those stupid security holes (and Oh yeah..That's right..It's called Firefox, eh? One a related note (Score:5, Funny) Seriously, the previous article [slashdot.org] lambasting open source for being vulnerable is nothing when compared to eyes backed with malicious intent poring over Windows source code for new exploits. So much for security through ignorance. Oh boy... (Score:2, Funny) Oh boy... Re:Torrent? (Score:5, Funny) emerge win2000 Fortune (Score:5, Funny) "Never trust an operating system you don't have sources for. 
-- Unknown source" Code (Score:5, Funny) The Internet, however, being a polite sort of fellow and completely undesirous of the undoubtedly horrible ramifications of having such a beastie running around loose, gently replaced the source code and gave Windows a friendly pat on the head. Re:Open Source (Score:5, Funny) error.h (Score:5, Funny) So, what does it say? Article +1 Ironic (Score:4, Funny) Re:Torrent? (Score:1, Funny) chroot Re:it's true (Score:5, Funny) So Windows is now fertile ground for foul play? (Score:4, Funny) Open Source is Dangerous??!? (Score:2, Funny) How about Forcibly Opened source? How...surprising! Look! A really good reason to move from previous versions to MS's new DRM enforced versions. Re:Server problems ALREADY... (Score:1, Funny) What the line says... What I saw on first glance... Re:site was /.ed before story went live (Score:2, Funny) How is this to the benefit of the IT community? Re:Mirror With Comments (Score:5, Funny) Re:Mirror of article (Score:2, Funny) In other news... (Score:5, Funny) #1 news item reported after analysis: (Score:3, Funny) Microsoft Windows 2000 was written with GNU/Emacs! Here's the source (Score:5, Funny) If I was big into conspiracy theories... (Score:3, Funny) ... I might think Microsoft leaked it on purpose, so the OSS community would find the bugs, point them out publically, and even describe how to fix the problems. Of course, I'm not the suspicious typeJ ... :-) Wow (Score:3, Funny) Re:Server problems ALREADY... (Score:0, Funny) Critical systems ? I mean Windows is OK for games and office apps, but who the hell would run a critical system with Microsoft rubbish ?. SCO going after Microsoft? (Score:3, Funny) Imagine that! Now we just have to wait for SCO to have a leak and everyone's dirty laundry is out in the open. Re:About bloody time. (Score:4, Funny) Or perhaps you meant /.ed? this is some powerful stuff going on (Score:2, Funny)." Why ofcourse! 
(Score:5, Funny) Easy to spot packages (Score:4, Funny) How to easily find the Windows source code packages in your daily P2P incoming directory: rosco@dipstick:~/emule/incoming$ ls -l --sort=size -r total %@*@&^23462&^% bytes -rw-r--r-- 1 rosco rosco 645124103 Feb 12 22:49 starwars.zip -rw-r--r-- 1 rosco rosco 658124896 Feb 12 22:50 nt.zip -rw-r--r-- 1 rosco rosco 660100457 Feb 12 22:49 goodbadugly.zip -rw-r--r-- 1 rosco rosco 705012756 Feb 12 22:49 dasboot.zip -rw-r--r-- 1 rosco rosco 706107014 Feb 12 22:56 daftpunk.zip -rw-r--r-- 1 rosco rosco 710127685 Feb 12 22:58 chembros.zip -rw-r--r-- 1 rosco rosco 9874520782^45 Feb 12 22:59 2ksrc.zip -rw-r--r-- 1 rosco rosco 4578924574^37 Feb 12 23:12 ntsrc.zip Segmentation fault. Core dumped.) Re:The shit will hit the fan + Mirror (Score:3, Funny) What if we just use the parts that MS lifted from BSD? Re:Hmmm... (Score:1, Funny) Based on these windows screenshots [freedomware.org], I would say - A Lot. Re:Open Source (Score:5, Funny) Re:it's true (Score:4, Funny) Speaking of torrents, anybody got one? Re:site was /.ed before story went live (Score:2, Funny) P.S. This is my first attempt at writing a funny comment on Slashdot, so please don't be too harsh Re:Close you eyes! (Score:2, Funny) Raider's of the Lost Ark Eww.. melty eye balls. Re:Open Source (Score:3, Funny) The list of files has none other than: win2k/private/inet/mshtml/tools/include/errno.h Linux/GPL code in Windows (Score:1, Funny) Re:Server problems ALREADY... (Score:4, Funny) MOD PARENT DOWN, IT'S NOT FUNNY... SCO Code in Win2000 (Score:5, Funny) Re:For those that need more proof (Score:3, Funny) Bill did it! (Score:1, Funny) "a deluge of worms/virri" - how will we know? (Score:1, Funny) And the sign for this will be???? Re:For those that need more proof (Score:2, Funny) Re:Not good (Score:4, Funny) Here's some of it.... (Score:5, Funny) The server is currently slashdotted, but I managed to download the first few lines of the Windows 2000 codebase. 
Here they are: Re:How to Build? (Score:2, Funny) Re:Server problems ALREADY... (Score:5, Funny) Argh! Trying to get rid of images of naked NeoWin people thinking about ramifications.... What, no GPFL? (Score:5, Funny) Pffft... (Score:5, Funny) Re:Torrent? (Score:5, Funny) ACCEPT_KEYWORDS="~x86" emerge win2000 Post the damn link! (Score:2, Funny) Re:Just don't use the code (Score:1, Funny) proves rather negative, can we sue them ? Re:So much for security through obscurity (Score:5, Funny) Re:It's a TRAP!!! /Adm. Ackbar (Score:4, Funny) Re:it's true (Score:2, Funny) SOURCE CODE SAMPLE (file #32 of the ~200 MB TAR)!! (Score:0, Funny) (); } write_something(anything); display_copyright_message(); do_nothing_loop(); do_some_stuff(); if (still_not_crashed) { display_copyright_message(); do_nothing_loop(); basically_run_windows_31();:5, Funny) 15 fw calum $ grep -ir " fuck" 40 fw calum $ grep -ir " crap" 98 Should I have been doing this on the company firewall? Probably not. Re:I'll believe it when I see it. (Score:4, Funny) No wonder, with half a meg of memory [usenix.org] Re:it's true (Score:5, Funny). Re:For those that need more proof (Score:5, Funny) AT LAST! The secret to beating Solitaire... This could perhaps be the most significant event of our times! Re:The odds of getting the full source: experience (Score:2, Funny) Re:That is a MYTH (Score:5, Funny) It was only a matter of time before people started saying this.... -Derek my eyes must be getting old (Score:5, Funny) Re:error.h (Score:5, Funny) Re:Here's some of it.... (Score:2, Funny) Re:That is a MYTH (Score:1, Funny) Re:it's true (Score:5, Funny) If it's later demonstrated that you had access to the W2K source and contributed vaguely similar code (even by accident) to a project, it could have severe repercussions for that project. I seriously doubt that having looked at that crappy code, anyone would want to duplicate it in even a vague way. 
At best it would provide an example of what not to do Re:it's true (Score:5, Funny) However, if someone should glance upon the evil known as win2k source, I hear that are some mystical perl monks who can cleanse your soul. The Iraqi Information Minister (Score:5, Funny) Re:Download it HERE (Score:4, Funny) File headers (Score:3, Funny) Re:What if it were discovered that ... (Score:3, Funny) Re:Torrent? (Score:3, Funny) Ok, probably wasting three karma here, but ++parent Re:The Iraqi Information Minister (Score:1, Funny) Re:Torrent? (Score:5, Funny) TAR!? BZ2?! What the hell? That's not ZIP!!!! Re:it's true (Score:5, Funny) [from drivers/usb/spca50x.c, a usb camera driver] * Function compares two strings. * Return offset in pussy where prick ends if "prick" may penetrate * int "pussy" like prick into pussy, -1 otherwise. */ static inline int match(const char* prick, const char* pussy, int len2) { int len1 = strlen(prick); int i; const char* tmp; for (i = 0; i len2) return -1; if (!strncmp(prick, tmp, len1)) return i + len1; return -1; } To get around stupid slashdot filter: #:So much for security through obscurity (Score:5, Funny) Re:There is no evidence listed (Score:3, Funny) Re:It's a TRAP!!! /Adm. Ackbar (Score:5, Funny) We like Linux as it is. Reliable, stable, and fast. Copying Microsoft code in would jeopardize that. Never mind the IP issues. . . Microsoft leaks the code!! (Score:0, Funny) 2. Open source coders switch from Linux to Window, eliminating bugs 3. Profit!! Re:it's true (Score:5, Funny) Rakshasa Re:backups (Score:5, Funny) This is probably old hat now, but.... Real men don't do backups, they just pack their files into windows_2000_source_code.zip and post them to their website.... with torrent links... Re:So much for security through obscurity (Score:5, Funny) Hrmph. (Score:5, Funny) Hrmph. I opened one of those files and all it said was: Re:So much for security through obscurity (Score:3, Funny) Naaaaa.... 
--anactofgod--- If code is criminal, only criminals will have code (Score:5, Funny) Now that was a very satisfying cliche re-use. I hope it was an original cliche re-use. BTW the server seems ve-wy slow to-day. I think we were just Farked. Re:Semi-slashdotted? Here's the text... (Score:5, Funny) "There seems to have been a slight problem with the database. Please try again by pressing the refresh button in your browser." Refresh, you say? Oh-kay... It's worse than that! (Score:5, Funny) Never mind the sourcecode... (Score:2, Funny) Or must we say in this case: backslashdotted ? News flash (Score:2, Funny) Inside sources also report that microsoft is also deliberating on firing all its employees and relying completely on the so-called underground community to maintain and develop new features of the Windows operating system. More on this as it comes. Re:So much for security through obscurity (Score:5, Funny):it's true (Score:3, Funny) kots of kuck to you! Life is good. (Score:5, Funny) And I have 5 Moderator points. Today -- today, life is good. Re:it's true (Score:5, Funny) People were milling about in the room, I finally took the dive and made a couple of prank calls for pizza. Some other guys managed to get the US up to def con 4. I envied them because I managed to get only arrested. It seemed real. Very real. Someone had broken into the potting shed, stuffed a key to the nuke room under a bush and escaped with it. There was some small mention about it on the Drudge too but I couldn't find it right now. It seems the government was able to really sweep that one under the carpet. I wonder how. There are people around with the phone number still, trust me. I envy them. I would gladly make the call to nuke France. Even though it would be a HUGE task. So the now Brittany Spear's leaked cell number is mostly just boring and obsolete. 
Re:Don't even LOOK at the code (Score:2, Funny) Beware the One tarball from the dark lord - it is highly corruptible and any OS coder who gazes at it is forever damned Some snippets of code (Score:5, Funny) if (app.exename="NETSCAPE.EXE") system.sluggify(); And this one provides for the future... if (site.url="") { browser.renderer.togglebuggyrenderer(); browser.fakepopup(""); } I can't say anything about this one though: if (user.status==PISSED_OFF) prick.annoyingpopup("Hello, I noticed you are writing a letter") Seriously, given the denounces of delayed APIs for Navigator, I wouldn't doubt the first one... could someone with the codes please grep for netscape.exe? Re:So much for security through obscurity (Score:4, Funny) Re:IAAL??? (Score:5, Funny) My god, this is simply not possible - man, this is Well, I believe the latter must be the case. Be more careful on your next post, OK? Re:Mirror With Comments (Score:2, Funny) Well, it's not that hard when you think about it. His comments are largely thus: (some code here) (more code) (yet more code) Re:I know that... (Score:3, Funny) Amongst their other technological feats, Microsoft have now invented the time machine and have succeeded in travelling to the future, getting hold of the Samba source code and travelling back to the early development days of Windows 2000 to incorporate future Samba source code within Windows 2000. So now that the source code to Windows 2000 is released, MS can now sue the Samba team for copying their code. Fiendish... i found it! (Score:1, Funny) Re:The odds of getting the full source: experience (Score:3, Funny) Microsoft source code leak? Pfft, that's nothin... (Score:5, Funny) Re:Oh, no! I Looked! (Score:4, Funny) 200 GOSUB 38000 ; * Profit When you find them.... (Score:5, Funny) Re:It's not a problem. (Score:5, Funny) OK, it just HAD to be said.. Re:tin foil hat (Score:3, Funny) Onan Source developers are ALREADY in trouble for their leaks. 
They need to be taught that just because a tool is available, does not make it right to use it however they see fit. Re:Compilation and Windows source code (Score:3, Funny) And Bob Barker always claimed it took a good 24 hours to restart the Plinko machine after a contestant stopped it, but that wasn't necessarily true either... Re:That is a MYTH (Score:2, Funny) found a security hole! (Score:3, Funny) Re:It's a TRAP!!! /Adm. Ackbar (Score:4, Funny),) Re:Life is good. (Score:5, Funny) Here it is... (Score:2, Funny) Its a little long but here it is: -------- #include "win31.h" #include "win95.h" #include "win98.h" #include "workst~1.h" #include "evenmore.h" #include "oldstuff.h" #include "billrulz.h" #include "monopoly.h" #define INSTALL = HARD char make_prog_look_big[160000]; void main() { while(!CRASHED) { display_copyright_message(); display_bill_rules_message(); do_nothing_loop(); if (first_time_installation) { make_50_megabyte_swapfile(); do_nothing_loop(); totally_screw_up_HPFS_file_system(); search_and_destroy_the_rest_of_OS/2(); make_futile_attempt_to_damage_Linux(); disable_Netscape(); disable_RealPlayer(); disable_Lotus_Products();:2, Funny) Ok guys, just kidding... Really... Don't flame m... Ouch! Re:Life is good. (Score:5, Funny) Toxic leak (Score:3, Funny) Emergencies crews are working around the clock to clean up the most toxic leak since Exxon Valdez! SCO Action (Score:1, Funny) I'll send him the hardcopy by UPS. In a related story, Wine annnounces (Score:5, Funny) "Don't ask us how we did it!!!" Re::: prediction :: (Score:5, Funny) IANAL, but from what I've read on slashdot... This is good stuff Re:Life is good. (Score:5, Funny) What, and ruin a perfect day? You slashdotted forbes (Score:3, Funny) Someone got into Mac OS X's source and posted it 2 (Score:5, Funny) I didn't point you to it Funny how different two companies feel about source code. 
Apple has somewhat embraced the open source model, contributing to KHTML and using many other open source projects, while Microsoft has shunned them all.

Re:IAAL??? (Score:5, Funny)

Re:So much for security through obscurity (Score:3, Funny)

Improve Wine!!! (Score:2, Funny)
*That* would be something to make people start using Linux as a desktop!

Re:You're missing the point (Score:3, Funny)
Dude, where have you been for the past three years? Oh, I know... government IT. How'd I guess?

Re:So much for security through obscurity (Score:5, Funny)
Is that true? Can you prove it? For years after Windows 95 came out, there were more Windows 3.1 systems than there were Windows 95 systems. Why is this? It's probably for the same reason that there are more dead people than live people.

Re:The real question is, of course - (Score:5, Funny)
A)
1. look at the linux source
2. find a mistake
3. send a patch to the maintainer.
4. PROFIT!!
B)
1. look at the windows source
2. find a mistake
3. ???
4. write a worm
5. get caught
6. JAIL=tEH_SuXX0rZZ!!!1!! lolomgrofl

Win95 User Right Here (Score:0, Funny)

Re:So much for security through obscurity (Score:5, Funny)
what my first thought was: Because every idiot skr1pt k1dd13 and their lam0r grandmother can code winDOZE viriii, but only 1337 H4XX0rZ can ownzor teh LiNuX and MaC BoXxEn!!!1!!
how it should be phrased: Successfully designing, implementing and deploying a worm/virus targetting the aforementioned "alternative" platforms Linux and/or Apple - although being a much more complex undertaking and promising less quantifiable success (for example, infected hosts) than targetting the Microsoft Windows platform - could strengthen the Programmer's social status amongst his peers.
how it should be phrased on slashdot: Frist psot!

Oh no.......... (Score:2, Funny)
Does it say something about me that I'm more interested and excited about this than any news story that I've read in the last year? (Janet's tit included.)
$geek++;

Re:So much for security through obscurity (Score:5, Funny)

Re:So much for security through obscurity (Score:5, Funny)

SHORT THE STOCK? (Score:5, Funny)
Why do I predict that? Simple: The Stock Market's reality is the exact opposite of Slashdot's reality. Proof? One word: SCO

WINE development moves at an exponential rate! (Score:2, Funny)

Re:SHORT THE STOCK? (Score:1, Funny)
maybe its that thing, atm 23 seeders, 239 downloading and it was created on 2/12/2004 11:16:13 PM, so looks good so far

What a waste of bandwidth. I don't even want the binary on my computer. Why would I want that massive blob of repeatedly patched DOS 3.0/Win 3.1 source code contaminating my disk? If I need a laugh, I'll just turn on the comedy channel.

So here's what you do (Score:5, Funny)
2. Reproduce windows bugs.
3. Fix bugs faster than MS does.
[...]
6. Profit!

Finnaly de-lurked (Score:2, Funny)

Re:it's true (Score:2, Funny)
So where can i find this budum-chink!

Re:It's a TRAP!!! /Adm. Ackbar (Score:3, Funny)

Re:It's a TRAP!!! /Adm. Ackbar (Score:5, Funny)
Wait a minute....

Re:it's true (Score:5, Funny)
grep -ir " shit" windows_2000_source_code/*
private/inet/wininet/urlcache/conman.cxx:// BUGBUG - DON'T DO THIS SHIT.
private/shell/ext/netplwiz/mnddlg.cpp:
private/shell/win16/commctrl/ctl3d.c:
private/windows/media/avi/avicap/capdib.c:
private/windows/media/avi/avicap.16/capdib
private/windows/media/avi/avicap.io/capdib
private/windows/media/avi/msrle/rle.c:

Re:Microsoft source code leak? Pfft, that's nothin (Score:3, Funny)

Re:In a related story, Wine annnounces (Score:4, Funny)

DEAR SLASHDOT SUCKERS (Score:2, Funny)

Re:Life is good. (Score:5, Funny)
So your girlfriend reads

I got it (Score:2, Funny)

Re:The real source is 300GB (Score:3, Funny)

Re:Life is good. (Score:5, Funny)
Guy 1: "It's midnight, the windows source is leaked, we have 5 moderator points and our sunglasses on..."
Guy 2: "hit it"
Sorry, that image just popped into my head

it wasnt leaked!!!
(Score:5, Funny)

Re:it's true (Score:2, Funny)
I should have seen that coming a mile away

Re:Life is good. (Score:5, Funny)

Re:Life is good. (Score:2, Funny)
>What, and ruin a perfect day?
Fsck, no! Wait till after dark, and then blow her up!

Bogus Bogus Bogus (Score:2, Funny)

Re:So much for security through obscurity (Score:3, Funny)
Windows source code posted!! [soundspectrum.com]

Re:It's a TRAP!!! /Adm. Ackbar (Score:3, Funny)
Father: Where did you learn to do this? Tell me, where?!
Kid: I learned it from you, dad! I learned it from you!

Coincidence? I think not... (Score:4, Funny)
[click]
A fatal exception OE has occured at 0028:C001539A. The current application will be terminated.
"...what the hell?"
( meanwhile, deep inside Windows... )
if( sourceLeaked == true && url = "slashdot.org") {
    BSOD();
    SendEmail( "bgates@microsoft.com", "IP of teh L1n|_|x haxx0r: "+userIP );
}

Re:So much for security through obscurity (Score:5, Funny)
I have noticed some viruses for linux. One was just a script and it recommended that the individual chmod a+x and then run it. The other one you had to type gcc -o virus virus.c and then run the resulting binary in order to get it to work. And then there was that one where it wanted to load a module but it couldn't because modules weren't supported on that kernel, although it did try for
Then there was that one that installed an irc backdoor:
JOIN #ddos# vrfx
MODE lamer +i
MODE #ddos# +nts
23:14 < lamer HTTP server listining on poort: 999 root dir: c:\ Address
Oh, wait. that last one was a Windows thing. But those other ones. Look out. They'll do some nasty things. I mean, it takes a bit of work to get them running. But once you do. Look out. They're dangerous!

Ssshhh (Score:1, Funny)

Open Source Community Compared to Car Bombers.. (Score:2, Funny)
." Interesting choice of comparisons, if you ask me.
Re:The real question is, of course - (Score:2, Funny)
but i doubt a sourcecode leak is all that dangerous, surely security can't be that bad, can it?

instances of "fuck" (Score:5, Funny)
bsc/.glimpse_index:fuck?sMP
bsc/.glimpse_inde
bsc/.glimpse_index:fucked?sM`
bsc/.g
private/shell/applets

Re:Easy to spot packages (Score:2, Funny)

Clippy? (Score:2, Funny)

Announce: New OS Project Starting (Score:1, Funny)

Re:It's a TRAP!!! /Adm. Ackbar (Score:3, Funny)
After all, why else would they shift the VMS letters forward one to get WNT (Windows NT)?
-Bill

Re:Microsoft source code leak? Pfft, that's nothin (Score:3, Funny)

Nimda infection (Score:3, Funny)

Empty

Re:It's a TRAP!!! /Adm. Ackbar (Score:5, Funny)

Patch submission (Score:1, Funny)

Re:News (Score:3, Funny)

Re:It's a TRAP!!! /Adm. Ackbar (Score:2, Funny)

Re:The real question is, of course - (Score:2, Funny)

Re:It's a TRAP!!! /Adm. Ackbar (Score:5, Funny)
Viruses are well supported by their authors, their program code is fast, compact and efficient and they tend to become more sophisticated as they mature. So, Windows is not a virus.

how things suck (Score:1, Funny)
or printf("Ha! There is no verbose mode, sucker. Try again\n");
and so on

Is this an ear? (Score:1, Funny)

Re:Microsoft called me... (Score:3, Funny)

SCO's new target (Score:1, Funny)

os/2 is in there! (Score:1, Funny)
for a laugh

Re:it's true (Score:3, Funny)
That's kinda sad. I've written a lot of code, and I've never felt the need to use profanity (no matter how frustrated I might have been). Programs should be written as professionally as any other document--there's room for humor, but words like fuck really shouldn't have a place in them, IMO.

Re:Life is good. (Score:5, Funny)
Do you have any idea how much that costs around this time of year?

Re:Life is good. (Score:1, Funny)
But my wife does!!!

Evidence (Score:4, Funny)
This file is the absolute strong evidence that Microsoft did increase the security in the Windows kernel.

Re:It's a TRAP!!!
/Adm. Ackbar (Score:3, Funny)
while(1)
    fork();
to moderate efficiency. I'm now blind (a la "don't look in the Ark of the Covenant")...and of course running from both SCO and MS.

Solitaire! (Score:2, Funny)

Some goodies from the source code comments (Score:1, Funny)
shell/win16/commctrl/ctl3d.c:
inet/controls/framewrk/ctlview.cpp:
inet/mshtml/src/site/text/linesrv.hxx:// basically an oversized v-table. C sucks.
inet/urlmon/search/b4hook.cxx:// SUPER HACK FUNCTION because InternetCrackUrl sucks.
shell/browseui/iaccess.cpp:
shell/ext/webcheck/throttle.cpp:
shell/ext/cscui/dll/filelist.h:// fnl.AddFile(TEXT("\\\\performance\\poor"), TEXT("sucks.doc"));
shell/ext/ftp/priv.h: extracted in such a way that we hit the net. This figgen sucks!!!
shell/ext/msident/multiusr.cpp:
shell/ext/msnspa/proxy.c:
windows/media/avi/drawdib/drawdibi.h:#defi
windows/media/avi/mciwnd/mciwnd.c:
inet/wininet/ftp/test/multfind/multfind.c
inet/wininet/http/headers.cxx:
ntos/w32/ntcon/server/output.c: * ICK!!!!!! Convert to chars. This sucks. We know
inet/mshtml/src/site/text/lscomplx.cxx:
windows/shell/shole/shole.c:
shell/ext/docprop/propdlg.c:
inet/mshtml/src/site/text/onerun.cxx:
inet/mshtml/tried/triedit/lexer.cpp:
windows/media/avi/compman/icm.c:
sdktools/vctools/rcdll/p0io.c:
NV_DECLARE_TEAROFF_METHOD( DoTheDarnPasteHTML, dothedarnpastehtml, (IMarkupPointer*, IMarkupPointer*, HGLOBAL ));
shell/lib/util.cpp:// _SHPrettyMenu -- make this menu look darn purty
shell/comctl32/cutils.c:// Don't freak out about this code. It will do nothing on NT, nothing yet
inet/mshtml/src/site/text/linesrv.cxx:

putting a $ in "Micro$oft" (Score:3, Funny)
Oh, no question, the use of the dollar sign is a cheap shot. But, hey, at least a quarter of why I hang out at
Maybe my serious stuff would be read more if I were to adopt a more "proper" tone but after too many years in jacket and tie (or even suit-bound - blech!) in fluorescent-lit office buildings, I just can't be bothered.
I mean, criminy, I've been in self-imposed exile from the land of corporate jobs and "serious" business prose for over three years now and have just come home from the mushiest, sappiest, flat out cutest Valentine's Day dinner of my life, part of which was spent discussing the implications of my swiftly growing business and my swiftly improving finances. So doggone it, the silly letter usages stay. The world will just have to survive the trauma of it all. Down with propriety! Hail giggling and ditzy cheap shots! Rustin
https://slashdot.org/story/04/02/12/2114228/windows-2000-windows-nt-4-source-code-leaks/funny-comments
Hi Glenn, Adam,

On Wed, Dec 08, 1999 at 01:14:02AM -0500, Adam Di Carlo wrote:
> Glenn McGrath <Glenn.McGrath@jcu.edu.au> writes:
> >.

I think what I'm doing will tie in quite nicely with this. I have fdisk_reread() trying devices from the following list (copied straight from my new code):

"/dev/sd{a-h}",
"/dev/hd{a-h}",
"/dev/md{a-h}",
"/dev/ida/c{0-7}d{0-15}",
"/dev/rd/c{0-7}d{0-31}",
#if #cpu (i386)
"/dev/ed{a-d}",
#elif #cpu (m68k)
"/dev/ad{a-h}",
#endif

Which means MD devices should be given to select_not_mounted() through fdisk_find_partition_by_type(). I've been holding back a little because I've been trying to make sure I don't include tape drives and CD-ROMs - Erik has been helping me out here.

I'd probably be the one Adam is thinking of, grubbing around in /proc. :-) However, I think I should send in what I have now and add to it later - I'll make it a little more respectable (comments, etc.) and post it to the list in a couple of hours.

Regards,
Mark.
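The brace patterns in that device list ("/dev/sd{a-h}", "/dev/ida/c{0-7}d{0-15}", and so on) are shorthand for whole ranges of candidate device names. A quick Python sketch of that expansion follows; the helper is purely illustrative and is not the actual installer code:

```python
import re
import string

def expand_devices(pattern):
    """Expand patterns like '/dev/sd{a-h}' or '/dev/ida/c{0-7}d{0-15}'
    into the concrete device names they stand for. Illustrative only."""
    match = re.search(r"\{(\w+)-(\w+)\}", pattern)
    if match is None:
        return [pattern]
    lo, hi = match.group(1), match.group(2)
    if lo.isdigit():
        values = [str(n) for n in range(int(lo), int(hi) + 1)]
    else:
        letters = string.ascii_lowercase
        values = list(letters[letters.index(lo):letters.index(hi) + 1])
    expanded = []
    for value in values:
        # Recurse so patterns with two ranges ('c{0-7}d{0-15}') expand fully.
        head = pattern[:match.start()] + value + pattern[match.end():]
        expanded.extend(expand_devices(head))
    return expanded
```

So "/dev/sd{a-h}" yields /dev/sda through /dev/sdh, and "/dev/ida/c{0-7}d{0-15}" yields all 128 controller/disk combinations.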
http://lists.debian.org/debian-boot/1999/12/msg00125.html
01-11-2013 02:49 PM

I'm loading XML into a ListView and no matter what, it's coming in in the reverse order of which it is listed. Here's my code (with personal website removed)... anyone have any ideas how to change the order back to the way it's originally listed?

import bb.cascades 1.0
import bb.data 1.0

Page {
    content: ListView {
        id: myListView
        dataModel: dataModel
        listItemComponents: [
            ListItemComponent {
                type: "item"
                StandardListItem {
                    title: ListItemData.test
                }
            }
        ]
    }
    attachedObjects: [
        GroupDataModel {
            id: dataModel
        },
        DataSource {
            id: dataSource
            source: ""
            query: "root/test"
            onDataLoaded: {
                dataModel.insertList(data);
            }
        }
    ]
    onCreationCompleted: {
        dataSource.load();
    }
}

I can't use sortingKeys because I need the XML to come in the way it's listed, which isn't sortable. I tried using sortedAscending: false/true and it didn't change the order. Thank you!

01-25-2013 02:45 AM

Hi Loomist, I am also facing the same issue right now. Did you find a solution? Can you please share it? Thanks in advance.

01-29-2013 09:53 PM

Likewise I have the exact same problem described by Loomist.

01-29-2013 10:06 PM - edited 01-29-2013 10:06 PM

I still haven't found a solution to this problem. I have actually set aside the app that is requiring this, because it's just too big of an issue and I won't submit an app with such a glaring problem. I'm guessing it might be a bug in the XML handling if others are having this issue.

01-29-2013 10:08 PM - edited 01-29-2013 10:09 PM

I'm getting the issue using JSON data. The problem is probably internal to GroupDataModel (unless we're using it incorrectly).

02-11-2013 10:08 PM

Hi all,

Try using an ArrayDataModel instead of a GroupDataModel. A GroupDataModel is designed to sort your data automatically, while an ArrayDataModel allows you to arrange the data in the order you want (including to leave it as is). Also, use the append() function instead of the insert().
For more information on the ArrayDataModel class, check this link:
Samar Abdelsayed - Application Development Consultant - BlackBerry
Did this answer your question? Please accept post as solution.
Please refrain from posting new questions in solved threads.
Found a bug? Report it using the Issue Tracker

02-12-2013 06:02 AM

Hi Samar,

I tried your solution and it works fine. When trying to delete items, it does so from the listview and xml when there is more than one item in the list. But when there exists only a single item in the list, deleting it removes the data from the xml but doesn't reflect the change in the listview, even if I have given datasource.load(); to make the changes effective. Please help me out to find a solution
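The behavioral difference Samar describes — GroupDataModel sorts items as they are inserted, while ArrayDataModel's append() preserves document order — can be illustrated with plain Python stand-ins. These toy classes are not the Cascades models; they only mimic that one ordering aspect:

```python
# Toy illustration of why insertList() into a sorting model reorders
# items while append() into an array model keeps document order.
import bisect

class ToyGroupDataModel:
    """Keeps items sorted as they are inserted (like GroupDataModel)."""
    def __init__(self):
        self.items = []

    def insert(self, item):
        bisect.insort(self.items, item)

class ToyArrayDataModel:
    """Keeps items in insertion order (like ArrayDataModel.append)."""
    def __init__(self):
        self.items = []

    def append(self, item):
        self.items.append(item)

xml_order = ["third", "first", "second"]  # order as listed in the XML

grouped = ToyGroupDataModel()
arrayed = ToyArrayDataModel()
for entry in xml_order:
    grouped.insert(entry)
    arrayed.append(entry)
```

Only the array-style model reproduces the XML's original ordering, which is why switching models resolves the thread's problem.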
http://supportforums.blackberry.com/t5/Cascades-Development/XML-DataSource-is-sorting-in-reverse-unable-to-change-sort-order/m-p/2117883
t20130917: sieve3 horizontal subdivision slashes the size of Freebase
Showing 1-1 of 1 messages
t20130917: sieve3 horizontal subdivision slashes the size of Freebase
Paul Houle 9/17/13 1:37 PM

There is now a first draft of 'sieve3', which splits up an RDF data set into mutually exclusive parts. There is a list of rules that apply to the triples; matching a rule diverts a triple to a particular output, and triples that fail to match any pattern fall into the 'other' output.

Here are the segments of the horizontal subdivision:

'a' -- rdfs:type
'description'
'key' -- keys represented as expanded strings
'keyNs' -- keys represented in the key namespace
'label' -- rdfs:label
'name' -- type.object.name entries that are probably duplicative of rdfs:label
'text' -- additional large text blobs
'web' -- links to external web sites
'links' -- all other triples where the ?o is a URI
'other' -- all other triples where the ?o is not a Literal

Overall this segmentation isn't all that different from how DBpedia is broken down. Last night I downloaded 4.5 GB worth of data from 'links' and 'other' out of the 20 GB dump supplied by Freebase, and I expect to be able to write interesting SPARQL queries against this. This process is fast, completing in about 0.5 hrs with a smallAwsCluster. I think all of these data sets could be of interest to people who are working with triple stores and with Hadoop, since the physical separation can speed most operations up considerably.

The future plan for firming up sieve3 is to get Spring configuration working inside Hadoop (I probably won't put Spring in charge of Hadoop at first) so that it will be easy to create new rule sets by writing either Java or XML.

This data can be downloaded from the requester-pays bucket s3n://basekb-lime/freebase-rdf-2013-09-15-00/sieved/
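The rule-based routing described above — the first matching rule diverts the triple to its segment, and non-matching triples fall through to 'other' — can be sketched in a few lines of Python. The predicates below are invented stand-ins for illustration, not sieve3's actual rule set:

```python
# Minimal sketch of sieve3-style horizontal subdivision:
# each rule is (segment_name, predicate); the first match wins,
# and unmatched triples fall through to the 'other' segment.

def is_uri(term):
    # Stand-in literal/URI test: N-Triples URIs are written in angle brackets.
    return term.startswith("<") and term.endswith(">")

RULES = [
    ("a",     lambda s, p, o: p == "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"),
    ("label", lambda s, p, o: p == "<http://www.w3.org/2000/01/rdf-schema#label>"),
    ("links", lambda s, p, o: is_uri(o)),
]

def route(triple):
    s, p, o = triple
    for segment, matches in RULES:
        if matches(s, p, o):
            return segment
    return "other"

def sieve(triples):
    segments = {}
    for triple in triples:
        segments.setdefault(route(triple), []).append(triple)
    return segments
```

Because every triple lands in exactly one segment, the outputs are mutually exclusive and can be processed (or skipped) independently, which is the point of the subdivision.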
https://groups.google.com/forum/?_escaped_fragment_=topic/infovore-basekb/vtFso8nWHvg
Created on 2015-04-24.11:54:51 by eaaltonen, last changed 2015-11-10.16:30:18 by zyasoft.

My patched version of IPython 0.10.2 crashed on startup with jython 2.7rc3. The reason was simply a mismatch of attribute name.

98c98
< _console.startup_hook = function
---
> _console.startupHook = function

With this change, the old IPython did start. No tab completion, sadly.

Let's see if we can get the current version of IPython to work in 2.7.1. Please note that Jython's default console now supports tab completion.

There's a setStartupHook method; one should probably use that one.

As a side note, I'm trying this branch, and with the change in readline.py I have IPython 3.0 running. Tab completion does work, but is a bit wonky. This is the diff against current trunk:

diff -r bb6cababa5bd Lib/readline.py
--- a/Lib/readline.py Sun May 17 09:10:22 2015 +0100
+++ b/Lib/readline.py Wed Jun 24 18:02:31 2015 +0200
@@ -95,7 +95,7 @@
     _reader.redrawLine()

 def set_startup_hook(function=None):
-    _console.startup_hook = function
+    _console.setStartupHook(function)

 def set_pre_input_hook(function=None):
     warn("set_pre_input_hook %s" % (function,), NotImplementedWarning, stacklevel=2)

This is an easy fix, but hard to test with something like pexpect. In part, this is because JLine2 does not have completely compatible support compared to readline for this functionality, especially under a pty. I also suspect that this may vary from platform to platform (OS X vs Linux vs Windows), but something we need to look into more.

Fixed as of
http://bugs.jython.org/issue2338
Parameters

Closure parameters are listed before the -> token, like so:

def printSum = { a, b -> print a+b }
printSum( 5, 7 ) //prints "12"

A Closure may also refer to variables defined in its enclosing scope:

def myConst = 5
def incByConst = { num -> num + myConst }
println incByConst(10) // => 15

Or another example:

def localMethod() {
  def localVariable = new java.util.Date()
  return { println localVariable }
}
def clos = localMethod()
println "Executing the Closure:"
clos() //prints the date when "localVariable" was defined

Implicit variables

Within a Groovy Closure, several variables are defined that have special meaning:

it
If you have a Closure that takes a single argument, you may omit the parameter definition of the Closure, like so:

def clos = { print it }
clos( "hi there" ) //prints "hi there"

this, owner, delegate
The following example illustrates these implicit variables:

class Class1 {
  def closure = {
    println this.class.name
    println delegate.class.name
    def nestedClos = {
      println owner.class.name
    }
    nestedClos()
  }
}

def clos = new Class1().closure
clos.delegate = this
clos()
/* prints:
Class1
Script1
Class1$_closure1
*/

Closures as Method Arguments

When a method takes a Closure as the last parameter, you can define the Closure inline, like so:

def list = ['a','b','c','d']
def newList = []
list.collect( newList ) { it.toUpperCase() }
println newList // ["A", "B", "C", "D"]

In the above example, the collect method accepts a List and a Closure argument. The same could be accomplished like so (although it is more verbose):

def list = ['a','b','c','d']
def newList = []
def clos = { it.toUpperCase() }
list.collect( newList, clos )
assert newList == ["A", "B", "C", "D"]

More Information

Groovy extends java.lang.Object and many of the Collection and Map classes with a number of methods that accept Closures as arguments. See GDK Extensions to Object for practical uses of Groovy's Closures.
http://groovy.codehaus.org/closures
Setting up a WebAPI 2.0 project using Visual Studio 2012

Step 1: We start our project with a Visual Studio 2012 ASP.NET Project. Note that we use .NET Framework 4.5. This is because Web API 2.0 takes a dependency on .NET Framework 4.5.

Open the NuGet Package Manager Console and fire away the following install commands:

1. Install WebAPI OData first thing. This installs the required WebAPI dependencies as well.

PM> install-package Microsoft.AspNet.WebApi.OData -pre

2. We will use WebAPI's help pages to create dummy data. To do this, we need to uninstall the Microsoft.AspNet.Mvc.FixedDisplayModes package first. Once uninstalled, we install the Microsoft.AspNet.WebApi.HelpPage pre-release package.

PM> uninstall-package Microsoft.AspNet.Mvc.FixedDisplayModes
PM> install-package Microsoft.AspNet.WebApi.HelpPage -pre
PM> install-package WebApiTestClient

3. The TestClient package also uses the jQuery, jQuery UI and KnockoutJS packages. So we install those next.

PM> install-package jquery
PM> install-package jquery.ui.combined
PM> install-package knockoutjs

One final piece of hookup is necessary to show the Test Client in the Help Pages. Open Api.cshtml in the path Areas/HelpPage/Views/Help. After the last <div> ends, add the following line to include the Test Client UI:

@Html.DisplayForModel("TestClientDialogs")

In the same page, in the 'Scripts' section, add the following:

@Html.DisplayForModel("TestClientReferences")

4. Finally, we install the EntityFramework package to save and load data from our database.

PM> install-package EntityFramework

This completes the project setup and we can now go ahead and build our API Controller and backend.

The data model and scaffolding the controllers

Next we set up our Model. We'll have three classes: Customer, Order and OrderDetails. A Customer may have zero or more Orders and each Order will have one or more Order Details.
public class OrderDetails
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Amount { get; set; }
    public decimal Price { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public DateTime PurchaseDate { get; set; }
    public string BillingAddress { get; set; }
    public virtual IList<OrderDetails> OrderItems { get; set; }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual IList<Order> Orders { get; set; }
}

Before we scaffold the API Controllers, build the project.

Scaffolding the Customers Controller

To scaffold the Customers controller, we right click on the Controllers folder and select 'Add->New Controller' to add a new Api controller for the Customer Model class, and call it CustomersController.

Making it Queryable via OData

Once the code generation completes, open the Customers Controller and, on the GetCustomers() action method, add the attribute called Queryable as follows:

[Queryable(MaxExpansionDepth=2)]
public IEnumerable<Customer> GetCustomers()
{
    return db.Customers.AsEnumerable<Customer>();
}

The Queryable attribute enables OData querying. However, the attribute has a bunch of parameters to enable fine grained control over the data being returned. Above, we have set the MaxExpansionDepth parameter to 2. This enables us to query up to two levels deep from Customers, which in our case is Orders and OrderItems.

This is all that we have to do to enable OData querying. Let's run the application and add some data.

Adding Data using TestClient

We run our application and navigate to the /Help page. We'll see the Web API Help Page generate the following page for us.

To add data, we click on the 'POST api/Customer' link. At the bottom of the page, we'll see the 'Test API' button. Click on the button to bring up the following dialog. It actually comes up with sample data. I've modified the sample data slightly.
You can see the data in the SampleData.Json.txt file in the code repository. In the popup, click Send to POST the data. We'll get the following response indicating success:

This adds one Customer, three Orders and three Order Items per order. Now let's see how we can query the data.

Querying Using $select and $expand

1. The $select keyword enables us to filter down to the specific fields we would like. So let's do a $select=Name. This should return only the Customer Name. As we can see, the downloaded JSON has only the Name. We could use /Customers?$select=Name,Id to get both Name and Id.

2. The $expand query allows us to specify which collections in the current Entity to load. For example, $expand=Orders will return the Customer and all their orders.

3. We can specify a hierarchy of collections to load using the / separator. For example, $expand=Orders/OrderItems will get the Orders and OrderItems for each order.

4. We can use both $expand and $select to retrieve selected fields in the entire hierarchy. For example, the following query will get the Name, PurchaseDate and Amount from the hierarchy:

$expand=Orders/OrderItems&$select=Name,Orders/PurchaseDate,Orders/OrderItems/Amount

Pretty neat! The $select and $expand OData keywords thus add a lot of flexibility for us to fine tune the data we want to retrieve.

Note: All the JSON data shown in the above screenshots has been formatted manually for better readability.

Conclusion

Web API 2.0 brings two new keywords, $select and $expand, that help us fine tune the data returned by our OData queries over Web API. Additionally, we also saw how to set up a Web API 2 project using Visual Studio 2012 and how to use the Web API Help Page and the Test Client packages.

Download the entire source code of this article (Github)
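To see concretely what a combined $select/$expand query does to the response, here is a language-neutral Python sketch of the projection. This is not the OData implementation — just the shape of the transformation the server applies to the nested JSON:

```python
# Toy illustration of what an OData projection like
# $select=Name,Orders/PurchaseDate,Orders/OrderItems/Amount does to the
# JSON the server returns. Slash-separated paths name the fields to keep.

def project(entity, paths):
    """Keep only the fields named by slash-separated paths."""
    result = {}
    for path in paths:
        head, _, rest = path.partition("/")
        if head not in entity:
            continue
        value = entity[head]
        if not rest:
            result[head] = value
        elif isinstance(value, list):
            # Expanded collections: project each child entity in place.
            projected = result.setdefault(head, [{} for _ in value])
            for child, out in zip(value, projected):
                out.update(project(child, [rest]))
        else:
            result.setdefault(head, {}).update(project(value, [rest]))
    return result
```

Feeding a full Customer (with Orders and OrderItems) through project(customer, ["Name", "Orders/PurchaseDate", "Orders/OrderItems/Amount"]) leaves only the Name plus the PurchaseDate and Amount at each level, mirroring the trimmed JSON shown in the screenshots.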
https://www.devcurry.com/2013/07/aspnet-web-api-20-and-new-odata-keywords.html