Using objc_util and ctypes to convert Python code to Obj-C code

I want to program a game and then run it through a converter that converts the Python code into Objective-C. Example:

Library/example.py:

    print 'hello world'

You know what that does when it is run.

Library/converter.py:

    # This is the code where I need help

When run, converter.py prints this:

    >>> What file should I convert? example.py
    >>> Converting...
    >>> Parsing...
    >>>
    >>> # Converted output in Objective-C

Can anybody help me with this? Maybe some example code?

Webmaster4o: Why? Translating code between languages doesn't always work out too well. Have you looked at the app template?

I looked at the template, and I like it. Just for future reference, I want to know how to convert. I often use a Windows computer; I can't use Xcode there. Is there any way?

As @Webmaster4o said, automatic code converters produce horrible code most of the time. How horrible the code is depends on how different the languages are. For example, there is lib2to3, Python's standard Python 2 to Python 3 converter. The two versions of Python have many differences, but in the end they are still very similar, so lib2to3's output is generally quite readable. (Keep in mind that lib2to3 does not attempt to convert every bit of code. Anything that it can't handle is simply ignored.)

A more serious example: there's a Java-to-Python converter that I came across a while back (I don't remember the name off the top of my head, but it's on PyPI somewhere). Java and Python have some major differences beyond syntax, most importantly the standard library. This converter actually included the (most important) standard Java classes rewritten as Python modules, so that all method calls remained the same. As you can probably guess, this produces code that works (if you're lucky), but nothing that looks like normal Python code.
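For what it's worth, here is a toy sketch of what the asker's converter.py could look like for the single trivial case in the question. This is an illustration only, not a real translator: the NSLog mapping and the print-only handling are assumptions, and it expects Python 3 syntax (print as a function).

```python
import ast

def convert_to_objc(source: str) -> str:
    """Translate a tiny Python subset (print calls with one string
    literal) into Objective-C. Anything else is rejected."""
    tree = ast.parse(source)
    lines = []
    for node in tree.body:
        # Only handle the exact shape: print("...")
        if (isinstance(node, ast.Expr)
                and isinstance(node.value, ast.Call)
                and isinstance(node.value.func, ast.Name)
                and node.value.func.id == "print"
                and len(node.value.args) == 1
                and isinstance(node.value.args[0], ast.Constant)
                and isinstance(node.value.args[0].value, str)):
            text = node.value.args[0].value
            lines.append(f'NSLog(@"{text}");')
        else:
            raise ValueError("can't translate this construct")
    return "\n".join(lines)

print(convert_to_objc("print('hello world')"))
# NSLog(@"hello world");
```

Everything outside that one pattern is rejected, which is really the point of the thread: every additional construct needs its own hand-written mapping.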
There are also some Java language constructs that cannot be translated directly to Python, such as assignments or increment/decrement operators inside expressions.

Converting between Python and C is even worse than that. The two languages have few things in common besides basic constructs like if/else. They also have very different ideas of what counts as a "basic language feature". For example, this is fairly basic Python code:

    mylist = []
    mylist.append("hi")
    mylist.append(-42)
    print(mylist[-1])

However, this would be quite hard to convert, as C has no standard equivalent to Python's list. There are arrays, but they are fixed-size and their contents must all be of the same type.

The other way around (C to Python) isn't much easier, and is sometimes basically impossible. For example, this is valid C:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        printf("%s\n", argv[argc]);
        return 0;
    }

You should never write this in a real program - what this does is read the entry at the very end of the argv array, pretend that it's a valid char *, and print the text data that it points to. How would you translate this to Python? (Actually, I think this falls under "undefined behavior", which means that if you give this to a C-to-Python translator, it would be allowed to crash, format your hard drive, and/or summon Richard Stallman armed with a katana.)

Point is, writing a code translator is hard, and there will always be cases where it does not work perfectly. If you want to run Python code from C, you should embed CPython. That way you have a full Python environment that is guaranteed to work.

There's a converter on GitHub Trending Python these days. It focuses more on porting algorithms rather than porting an entire script.

I figured that because I heard that Python compiles into C and then into Assembly when run, somebody had invented a program that could print your Python code in C, halfway compiled.

No, Python is not compiled into C.
Python source code is internally compiled into Python bytecode, which is then run by the Python runtime. The runtime (CPython) is written in C, but that doesn't mean that Python code is compiled to C.

There is a project called Cython (not to be confused with CPython, the "standard" Python implementation) which is based on Python's syntax, but has some extra features for working with C libraries. Cython source code is "compiled" to C, but only so it can be compiled to native code by a normal C compiler. Most Python code is also valid Cython, so you can probably use that to compile Python scripts to native code.
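The compilation step described above is easy to observe with the standard dis module - a quick aside, not part of the original thread:

```python
import dis

# Compile a tiny script to a code object (this is the "internal
# compilation" step: source -> bytecode, not source -> C).
code = compile("print('hi')", "<example>", "exec")

# Inspect the bytecode instructions the runtime will execute.
for instr in dis.get_instructions(code):
    print(instr.opname, instr.argval)

# The name 'print' is looked up at runtime - nothing here is C code.
assert any(i.argval == "print" for i in dis.get_instructions(code))
```

The exact opcodes differ between CPython versions, which is another reminder that bytecode is an implementation detail of the runtime, not a portable intermediate language.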
https://forum.omz-software.com/topic/2932/using-objc_util-and-ctypes-to-convert-python-code-to-obj-c-code
Red Hat Bugzilla - Bug 177634: AIM7 File Server Performance -15% relative to U2
Last modified: 2007-11-30 17:07:22 EST

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050921 Red Hat/1.7.12-1.4.1

Description of problem:
The initial data show some regression from U2 to U3 beta: -15% for the AIM7 fileserver workload. The delta correlates with the amount of I/O performed in the workload. System is a 16-cpu IPF HP rx8620, 64 GB memory w/ 12 Fibre Channel controllers and 144 ext3 filesystems spread over 144 LUNs over 6 MSA1000 FC storage arrays.

Version-Release number of selected component (if applicable):
kernel-2.6.9-27.ELsmp

How reproducible: Always

Steps to Reproduce:
1. Configure 16-cpu, 64 GB, 144 LUNs on 12 FC w/ 6 MSA1000.
2. Run AIM7 fileserver mix.

Actual Results: HP does regression testing since GA on RHEL3 and RHEL4; the threshold for reporting is -3%.

Expected Results: Within 3-5% unless understood by kernel changes.

Additional info: See attached performance graph with many RHEL4 regression data points.

Created attachment 123124 [details] AIM7 regression tests

I marked this bug as a regression.

Created attachment 123175 [details] Oprofile data on U3 - AIM7 fserver point run @ 2000
Created attachment 123176 [details] Oprofile data on U2 - AIM7 fserver point run @ 2000
Created attachment 123177 [details] Lockstat data on U3 - AIM7 fserver point run @ 2000
Created attachment 123178 [details] Lockstat data on U2 - AIM7 fserver point run @ 2000

We did point runs of the AIM7 fserver workload at 2000 jobs both on RHEL4U2 and RHEL4U3 and ran oprofile. We also built lockstat-enabled kernels and gathered lockstats on point runs - we see no major changes in lock contention. See attachments. We also tried a run with the U2 qla2xxx driver in place of the U3 one. Preliminary results show no major difference (maybe 1.5%).

Also, any idea if this regression is ia64 specific?

No idea.
The testing so far has only been on an ia64, but we believe that the regression is not ia64-specific. The qla2xxx driver is still our #1 suspect (notwithstanding the preliminary results I mentioned in comment #9 - we are planning to test the driver more carefully in the next day or two).

Yes, I am back. Attached are some IOzone and lmbench results measured on HP gear, RHEL4 U3 vs U2 ... clearly we are not able to reproduce your regressions to 1, 2 or 4 file systems.

IOzone:

                   R4_U2 EXT3  R4_U3_EXT3   %Diff
  Writer                70347       70234   99.8%
  Re-writer             88214       88213  100.0%
  Reader                85973       85506   99.5%
  Re-reader             89678       90016  100.4%
  Random Read           87647       87740  100.1%
  Random Write          99174       98984   99.8%
  Backward Read         83299       83587  100.3%
  Record Rewrite       103825      104428  100.6%
  Stride Read           87594       87792  100.2%
  Overall GeoMean       87956       88028  100.1%

                   R4_U2 EXT3  R4_U3_EXT3   %Diff
  Fwrite               155545      155225   99.8%
  Re-fwrite            368702      368955  100.1%
  Fread                859550      877280  102.1%
  Re-fread             946345      965794  102.1%
  Overall GeoMean      464743      469342  101.0%

                 L M B E N C H  3 . 0   S U M M A R Y
                 ------------------------------------
                 (Alpha software, do not distribute)

Processor, Processes - times in microseconds - smaller is better
------------------------------------------------------------------------------
Host       OS            Mhz  null null open slct sig  sig  fork exec sh
                              call I/O  stat clos TCP  inst hndl proc proc proc
---------  ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
perf4.lab  Linux 2.6.9-2 1831 0.36 0.49 5.38 6.68 26.2 0.56 5.72 300. 994. 2912
r4_u2      Linux 2.6.9-2 1831 0.36 0.49 5.35 6.69 26.1 0.56 5.72 303. 994. 2995
r4_u2_hug  Linux 2.6.9-2 1831 0.36 0.49 5.41 6.64 26.1 0.56 5.70 299. 985. 2986

Basic integer operations - times in nanoseconds - smaller is better
-------------------------------------------------------------------
Host       OS             intgr  intgr  intgr  intgr  intgr
                            bit    add    mul    div    mod
---------  ------------- ------ ------ ------ ------ ------
perf4.lab  Linux 2.6.9-2 0.5400 0.5400 5.4000   33.6   46.0
r4_u2      Linux 2.6.9-2 0.5400 0.5500 5.4100   34.0   45.4
r4_u2_hug  Linux 2.6.9-2 0.5400 0.5400 5.4000   33.5   45.3

Basic float operations - times in nanoseconds - smaller is better
-----------------------------------------------------------------
Host       OS             float  float  float  float
                            add    mul    div   bogo
---------  ------------- ------ ------ ------ ------
perf4.lab  Linux 2.6.9-2 2.7400 3.7900   24.8   17.6
r4_u2      Linux 2.6.9-2 2.7100 3.8300   24.5   17.4
r4_u2_hug  Linux 2.6.9-2 2.7000 3.7800   24.5   17.3

Basic double operations - times in nanoseconds - smaller is better
------------------------------------------------------------------
Host       OS             double double double double
                             add    mul    div   bogo
---------  ------------- ------ ------ ------ ------
perf4.lab  Linux 2.6.9-2 2.7000 3.8300   28.9   31.4
r4_u2      Linux 2.6.9-2 2.7400 3.7900   29.2   31.8
r4_u2_hug  Linux 2.6.9-2 2.7000 3.7900   28.8   31.4

Context switching - times in microseconds - smaller is better
-------------------------------------------------------------------------
Host       OS             2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
                          ctxsw  ctxsw  ctxsw  ctxsw  ctxsw   ctxsw   ctxsw
---------  ------------- ------ ------ ------ ------ ------ ------- -------
perf4.lab  Linux 2.6.9-2 8.6700 8.8500 9.1300 8.8900 9.9100 8.87000    10.0
r4_u2      Linux 2.6.9-2 9.4500   11.1 9.7000 9.3100   10.5    10.4    10.9
r4_u2_hug  Linux 2.6.9-2 9.0700   11.3   10.3   10.2   10.6 9.61000    13.6

*Local* Communication latencies in microseconds - smaller is better
---------------------------------------------------------------------
Host       OS            2p/0K  Pipe AF    UDP   RPC/  TCP   RPC/  TCP
                         ctxsw       UNIX        UDP         TCP   conn
---------  ------------- ----- ----- ---- ----- ----- ----- ----- ----
perf4.lab  Linux 2.6.9-2 8.670  24.5 27.7  38.3  44.5  44.2  58.2  65.
r4_u2      Linux 2.6.9-2 9.450  25.4 29.4  39.4  50.4  45.3  59.8  65.
r4_u2_hug  Linux 2.6.9-2 9.070  25.0 28.2  34.3  50.5  45.3  52.7  69.

File & VM system latencies in microseconds - smaller is better
-------------------------------------------------------------------------------
Host       OS            0K File       10K File      Mmap    Prot  Page  100fd
                         Create Delete Create Delete Latency Fault Fault selct
---------  ------------- ------ ------ ------ ------ ------- ----- ----- -----
perf4.lab  Linux 2.6.9-2   24.9   24.8   79.0   50.0   42.6K       1.666  21.2
r4_u2      Linux 2.6.9-2   25.5   25.2   82.1   50.7   45.2K       1.689  21.1
r4_u2_hug  Linux 2.6.9-2   24.7   25.1   80.8   50.4   42.6K       1.663  21.2

*Local* Communication bandwidths in MB/s - bigger is better
-----------------------------------------------------------------------------
Host       OS            Pipe  AF   TCP  File   Mmap   Bcopy  Bcopy  Mem  Mem
                               UNIX      reread reread (libc) (hand) read write
---------  ------------- ---- ---- ---- ------ ------ ------ ------ ---- -----
perf4.lab  Linux 2.6.9-2 213. 876. 476. 1301.5 1849.0  730.9  716.3 1781 1027.
r4_u2      Linux 2.6.9-2 203. 877. 486. 1293.5 1847.0  737.8  716.1 1780 1028.
r4_u2_hug  Linux 2.6.9-2 203. 873. 480. 1307.7 1849.2  738.5  723.6 1783 1026.

Memory latencies in nanoseconds - smaller is better
(WARNING - may not be correct, check graphs)
------------------------------------------------------------------------------
Host       OS            Mhz  L1 $   L2 $  Main mem  Rand mem  Guesses
---------  ------------- ---- ------ ----- --------- --------- -------
perf4.lab  Linux 2.6.9-2 1831 2.1610  15.1      95.4     409.4
r4_u2      Linux 2.6.9-2 1831 2.1910  15.1      95.4     409.9
r4_u2_hug  Linux 2.6.9-2 1831 2.1670  15.1      95.5     408.9

A couple of things:

o We did a much more careful test of replacing the QLA driver with the U2 version and came up empty: there is less than 1% difference. So the preliminary result held up, and our primary suspect seems to have a good alibi.
o Shak suggested checking whether audit is turned on: it is not.
o We checked whether the elevator was switched somehow: it was not - both use CFQ.
o In trying to characterize the differences, we ran "iostat -x 30" during a point run of AIM7 fserver at a load of 2000.
There are significant differences between U2 and U3. The following table shows the calculated means for the U2 and U3 runs for each iostat variable shown:

  Mean        U2     U3
  wrqm/s     375    483
  avgrq-sz    63    104
  avgqu-sz   0.8   1.06
  await      8.9   11.3
  svctm     0.68   0.82

So U3 is merging more write requests per second, it is doing larger I/Os, the queues are longer, it is taking longer to get them out of the queue, and it is taking longer to service them (the latter presumably because they are larger?).

Does this 16-way have 16 physical CPUs? Or is it dual core and/or hyperthreaded? An output of /proc/cpuinfo would resolve the question. Thanks.

This is a Madison IPF system, so it's a single-core, full 16-cpu IA64 kernel.

OK. Can you try backing out linux-2.6.13-ia64-multi-core.patch? This is just a guess, if you're looking for things to try. This will also disable the CONFIG_SCHED_SMT .config option, which was added during U3. All you need is something like:

  --- kernel-2.6.spec   13 Jan 2006 21:47:45 -0000  1.1357
  +++ kernel-2.6.spec   23 Jan 2006 22:17:08 -0000
  @@ -1624,7 +1624,7 @@
   %patch429 -p1
   %patch430 -p1
   %patch431 -p1
  -%patch432 -p1
  +#%patch432 -p1
   %patch433 -p1
   %patch434 -p1
   %patch435 -p1

That was a (small) step in the right direction: the point run @ 2000 attained a throughput of 17235, roughly a 3% increase.

Hmm. Well, the ext3 changes, qlogic changes, and scheduler changes give us back about 7%, which would explain about half of this performance issue. But certainly there could be interaction between those patches or with other patches that would increase or decrease that figure.

We systematically applied all the patches that were new with U3. The ones that seem to account for all of the regression are patch 1997, the sched-pin-inline patch, and patch 1458, the kprobes scalability patch. Before those two were applied, the throughput @ 2000 was 18654 (and had stayed at 18700 +/- 150 with *all* the other patches). Adding patch 1997 brought the throughput down to 17240.
Adding patch 1458 on top of that brought it down to 16953. Applying just patch 1458 on top of everything except patch 1997 brought the throughput down to 18349. We are now doing experiments at the other end: starting with *no* patches applied (except 249 and 2554, otherwise rpmbuild barfs), we are applying just the two above to see if there are interactions with other patches. Also, patch 1997 contains three patches that seem independent of each other, so we'll try splitting it apart and applying the pieces.

Not sure if this patch will improve performance, but it packs the task_struct_aux structure better and could potentially help with bouncing cachelines:

  --- linux-2.6.9/include/linux/sched.h.bak  2006-02-01 13:10:29.000000000 -0500
  +++ linux-2.6.9/include/linux/sched.h      2006-02-01 13:11:29.000000000 -0500
  @@ -468,10 +468,10 @@ struct task_struct_aux {
    struct key *thread_keyring;   /* keyring private to this thread */
    unsigned char jit_keyring;    /* default keyring to attach requested keys to */
   #ifndef __GENKSYMS__
  - struct key *request_key_auth; /* assumed request_key authority */
   #if defined(CONFIG_SMP)
    int last_waker_cpu;           /* CPU that last woke this task up */
   #endif
  + struct key *request_key_auth; /* assumed request_key authority */
   #endif
   };

The problem seems to be caused by the third part of the patch:

  --- linux-2.6.9/kernel/sched.c.orig  2005-11-17 01:56:18.000000000 -0500
  +++ linux-2.6.9/kernel/sched.c       2005-11-17 02:28:16.000000000 -0500
  @@ -1150,6 +1150,9 @@ static int try_to_wake_up(task_t * p, un
    new_cpu = cpu;

  + if (task_aux(p)->last_waker_cpu != this_cpu)
  +         goto out_set_cpu;
  +
    if (cpu == this_cpu || unlikely(!cpu_isset(this_cpu, p->cpus_allowed)))
            goto out_set_cpu;
  @@ -1224,6 +1227,8 @@ out_set_cpu:
            cpu = task_cpu(p);
    }

  + task_aux(p)->last_waker_cpu = this_cpu;
  +
  out_activate:
   #endif /* CONFIG_SMP */
    if (old_state == TASK_UNINTERRUPTIBLE) {
  @@ -1295,6 +1300,9 @@ void fastcall sched_fork(task_t *p)
   #ifdef CONFIG_SCHEDSTATS
    memset(&p->sched_info, 0, sizeof(p->sched_info));
   #endif
  +#if defined(CONFIG_SMP)
  + task_aux(p)->last_waker_cpu = smp_processor_id();
  +#endif
   #ifdef CONFIG_PREEMPT
    /*
     * During context-switch we hold precisely one spinlock, which

I applied just this patch with no other U3 patches (except the two that are necessary for rpmbuild not to barf) and I get the bulk of the regression accounted for:

  Tasks  jobs/min   jti  jobs/min/task    real       cpu
   2000  17189.90    88         8.5950  705.07  10153.04  Thu Feb  2 00:28:31 2006
   2300  16940.32    88         7.3654  822.77  11923.56  Thu Feb  2 00:42:26 2006
   2500  16801.41    89         6.7206  901.71  13097.94  Thu Feb  2 00:57:40 2006

The throughput @ 2000 without the patch is around 18740. I didn't have access to BZ#164444 to see if this part of the patch has anything to do with it: from the description, I get the (quite possibly incorrect) impression that 164444 was solved by the first part of the sched-pin-inline patch. The question is: are the two parts independent? Can we roll back just part 3?

One more point: I have not tried rearranging the task_struct_aux fields as Jason suggested, because there is no request_key_auth field at this point. It gets added by another U3 patch. I'll go back to the full U3 and do the rearrangement, but given the data above, I don't have high expectations of it doing much to improve the situation.

There was no improvement with the rearranged task_struct_aux patch on top of all the patches:

  Tasks  jobs/min   jti  jobs/min/task    real       cpu
   2000  16772.83    90         8.3864  722.60  10457.63  Thu Feb  2 14:30:35 2006
   2300  16509.95    89         7.1782  844.22  12270.38  Thu Feb  2 14:44:51 2006
   2500  16422.41    89         6.5690  922.52  13460.91  Thu Feb  2 15:00:26 2006

Created attachment 124111 [details] Removing handling of DIE_PAGE_FAULT in Kprobes

For every page fault, I saw that the Kprobes exception handler is getting called and is doing preempt disable and then re-enable. For now I am commenting this out - can you see if this brings performance back?
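As an aside, the percentages discussed in these comments can be re-derived from the quoted jobs/min figures - a throwaway Python check (the helper function is mine; the throughput numbers and their attribution are taken from the comments above):

```python
def regression_pct(baseline, patched):
    """Throughput drop relative to baseline, in percent."""
    return (baseline - patched) / baseline * 100

# jobs/min @ 2000 tasks, as quoted in the comment thread:
baseline  = 18654.0   # all U3 patches except 1997 and 1458
with_1997 = 17240.0   # plus the sched-pin-inline patch (1997)
with_both = 16953.0   # plus the kprobes scalability patch (1458) on top

print(f"patch 1997 alone: {regression_pct(baseline, with_1997):.1f}%")
print(f"1997 + 1458:      {regression_pct(baseline, with_both):.1f}%")
```

This puts roughly 7.6% of the drop on patch 1997 and about 9.1% on the two combined, consistent with the thread's conclusion that these two patches account for the bulk of the regression.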
Created attachment 124230 [details] fserver results with U3 beta1 *except* for patch1997-part3

The graph shows that eliminating part 3 of patch 1997 brings AIM7 fileserver performance to within 3-4% of U2 performance (with the patch, the performance drop is > 10%).

I applied Anil's patch (commenting out the DIE_PAGE_FAULT case in the kprobes code) on top of the above configuration (U3 except 1997-part3). I have not done a full run yet, but a few spot checks are very encouraging: I get 18640 @ 2000, which puts the performance drop (if any) in the noise region.

We are trying to reproduce the aim7 regression in our lab and I can't seem to be able to reproduce the result. How many disks does HP have in their setup? I'm using OSDL's aim7. Is that what the HP folks are using?

We are using the version from SourceForge. The configuration is a 16-cpu machine with 64 GB of memory and 12 MSA1000s with 144 disks. Each disk is 72 GB with a single ext3 filesystem.

Created attachment 124396 [details] original wake balance patch by Ingo

Nick, please experiment with patch id=124396. It is relative to kernel-2.6.9-29.EL.

Ken, I tried the patch relative to a beta1 (2.6.9-27.EL) kernel. The result is much worse: at 2000, the throughput was 14700, vs 16900 without the patch applied and 18700 with Update 2.

Created attachment 124421 [details] dynamically control wake balancing behavior

Thank you Nick. This experiment and your earlier experiments all point to the fact that for aim7, the best performance is achieved with load balancing in the wake-up path. The sequence of CPUs that execute try_to_wake_up() is a bit random with the aim7 workload, and on U3 beta it basically short-circuits some of the load balancing action, which hurts aim7. With patch id=124396 the effect is even more amplified, in that we only do load balancing if the waker CPU is idle - sort of the worst-case scenario. However, for other workloads like TPC, the requirement is exactly the opposite.
In the wake-up path, best performance is achieved with absolutely zero load balancing: we simply wake up the process on the CPU on which it previously ran. Worst performance is obtained when we do load balancing at wake-up. There isn't an easy way to auto-detect the workload characteristics. Ingo's earlier patch, which detects an idle CPU and decides whether to load balance or not, doesn't perform with aim7 since all CPUs are busy. What I'm proposing here is to add a sysctl variable to control the behavior of load balancing in the wake-up path, so the user can dynamically select the mode that best fits the workload environment, and the kernel can achieve best performance at the two extreme ends of incompatible workload environments. Patch attached (id 124421).

Ken, I ran with this latest patch on top of a U3-beta1 kernel and get the expected results: 18250 @ 2000. Anil's kprobes patch (id=124111) should get us an improvement of around 400, so provided that these two patches are adopted, we should be on par with U2 performance in the fserver benchmark.

Created attachment 124517 [details] AIM7 shared workload - U2, U3 beta1 and U3 beta1+patches

This attachment shows a graph of the results of the AIM7 shared workload in Update 2, Update 3 beta1, and Update 3 beta1 with the two patches: Ken's patch (attachment id=124421) and Anil's patch (attachment id=124111). The next attachment shows dbase results (partial: the run is not finished yet, but the region of interest is covered). Both of them show a small regression relative to Update 2 (about 0.5%) but a big improvement over the results with the unpatched Update 3 beta1. We would be happy with a release that incorporates these two patches.

Created attachment 124518 [details] AIM7 dbase workload results for U2, U3 beta1 and U3 beta1+patches

See comment in previous attachment 124517 [details].

I spot-checked (fserver @ loads between 2000 and 2500, and dbase @ loads between 10000 and 15000) the 2.6.9-31 kernel.
The results look good: any regression from Update 2 is certainly minor, less than 0.5% (and quite possibly non-existent):

fserver

  AIM Multiuser Benchmark - Suite VII Run Beginning

  Tasks  jobs/min   jti  jobs/min/task     real       cpu
   2000  18669.51    87         9.3348   649.19   9544.14  Mon Feb 13 19:55:07 2006
   2300  18251.55    87         7.9355   763.66  11222.36  Mon Feb 13 20:08:03 2006
   2500  18297.63    87         7.3191   827.98  12201.98  Mon Feb 13 20:22:04 2006

  AIM Multiuser Benchmark - Suite VII Testing over

dbase

  AIM Multiuser Benchmark - Suite VII Run Beginning

  Tasks  jobs/min   jti  jobs/min/task     real       cpu
  10000  59682.12    93         5.9682   995.27  15561.54  Mon Feb 13 21:19:56 2006
  12000  59753.44    94         4.9795  1192.90  18680.08  Mon Feb 13 21:39:56 2006
  14000  59133.60    93         4.2238  1406.31  21963.22  Mon Feb 13 22:03:30 2006
  15000  59071.80    95         3.9381  1508.33  23647.12  Mon Feb 13 22:28:46 2006

  AIM Multiuser Benchmark - Suite VII Testing.
https://bugzilla.redhat.com/show_bug.cgi?id=177634
On Fri, 2008-09-05 at 19:44 +0200, Ingo Molnar wrote:
> * Gary Hade <garyhade@us.ibm.com> wrote:
>
> > Add memory hotremove config option to x86_64
> >
> > Memory hotremove functionality can currently be configured into the
> > ia64, powerpc, and s390 kernels. This patch makes it possible to
> > configure the memory hotremove functionality into the x86_64 kernel as
> > well.
>
> hm, why is it for 64-bit only?
>
> > +++ linux-2.6.27-rc5/arch/x86/Kconfig	2008-09-03 13:34:55.000000000 -0700
> > @@ -1384,6 +1384,9 @@
> >  	def_bool y
> >  	depends on X86_64 || (X86_32 && HIGHMEM)
> >
> > +config ARCH_ENABLE_MEMORY_HOTREMOVE
> > +	def_bool y
>
> so this will break the build on 32-bit, if CONFIG_MEMORY_HOTREMOVE=y?
> mm/memory_hotplug.c assumes that remove_memory() is provided by the
> architecture.
>
> > +#ifdef CONFIG_MEMORY_HOTREMOVE
> > +int remove_memory(u64 start, u64 size)
> > +{
> > +	unsigned long start_pfn, end_pfn;
> > +	unsigned long timeout = 120 * HZ;
> > +	int ret;
> > +	start_pfn = start >> PAGE_SHIFT;
> > +	end_pfn = start_pfn + (size >> PAGE_SHIFT);
> > +	ret = offline_pages(start_pfn, end_pfn, timeout);
> > +	if (ret)
> > +		goto out;
> > +	/* Arch-specific calls go here */
> > +out:
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(remove_memory);
> > +#endif /* CONFIG_MEMORY_HOTREMOVE */
>
> hm, nothing appears to be arch-specific about this trivial wrapper
> around offline_pages().

Yes. All the archs (ppc64, ia64, s390, x86_64) have the exact same
function. No architecture has needed special handling so far (initial
versions of ppc64 needed extra handling, but I moved that code to a
different place). We can make this generic and kill all the
arch-specific ones. Initially, we didn't know if any arch would need
special handling, so we ended up having private functions for each
arch. I think it's time to merge them all.

> Shouldn't this be moved to the CONFIG_MEMORY_HOTREMOVE portion of
> mm/memory_hotplug.c instead, as a weak function? That way architectures
> only have to enable ARCH_ENABLE_MEMORY_HOTREMOVE - and architectures
> with different/special needs can override it.

Yes. We should do that. I will send out a patch.

Thanks,
Badari
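As a side note, the pfn arithmetic in the quoted remove_memory() wrapper can be sanity-checked in isolation - a Python re-derivation of the two C expressions, assuming the common 4 KiB page size (PAGE_SHIFT = 12); the helper and the example addresses are illustrative, not from the patch:

```python
PAGE_SHIFT = 12  # 4 KiB pages, the usual x86-64 configuration

def pfn_range(start, size):
    """Mirror of the C code:
       start_pfn = start >> PAGE_SHIFT;
       end_pfn   = start_pfn + (size >> PAGE_SHIFT);"""
    start_pfn = start >> PAGE_SHIFT
    end_pfn = start_pfn + (size >> PAGE_SHIFT)
    return start_pfn, end_pfn

# Example: offline 1 GiB starting at the 4 GiB physical mark.
start, size = 4 << 30, 1 << 30
print(pfn_range(start, size))  # (1048576, 1310720)
```

The range is half-open (end_pfn is one past the last frame), which is why offline_pages() takes it in exactly this form regardless of architecture - one reason the wrapper can be made generic.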
http://lkml.org/lkml/2008/9/5/276
2.2.3.154 Responses

The Responses element is an optional child element of the Collection element in Sync command responses that contains responses to operations that are processed by the server. Each response is wrapped in an element with the same name as the operation, such as the Add element and the Change element. The response contains a status code and other information, depending on the operation. All elements referenced in this section are defined in the AirSync namespace.

The Responses element appears only in responses that are sent from the server to the client. It is present only if the server has processed at least one operation from the client; it is omitted otherwise (for example, if the client requested server changes but had no changes to send to the server). If present, it MUST include at least one child element.

The server is not required to send an individual response for every operation that is sent by the client. The client only receives responses for successful additions, successful fetches, successful changes that include an attachment being added, and failed changes and deletions. When the client does not receive a response, the client MUST assume that the operation succeeded unless informed otherwise.
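The rules in the last paragraph amount to a small decision table. A sketch of how a client might encode them follows; the operation names and boolean flags are shorthand for illustration, not terms defined by the specification:

```python
def expects_response(op, succeeded, adds_attachment=False):
    """Per the text above: the client receives responses only for
    successful additions, successful fetches, successful changes that
    include an attachment being added, and failed changes/deletions."""
    if succeeded:
        if op in ("add", "fetch"):
            return True
        return op == "change" and adds_attachment
    # Failures are reported only for changes and deletions.
    return op in ("change", "delete")

# A successful plain change gets no response; the client must assume
# the operation succeeded.
print(expects_response("change", succeeded=True))   # False
print(expects_response("delete", succeeded=False))  # True
```

The interesting consequence is the default in the "no response" case: silence means success, so a client that fails to track this rule would wrongly treat unacknowledged operations as errors.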
https://docs.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-ascmd/f98e79f2-204f-4014-a7d7-66c37db8e04f
Often during the iterative development cycles of integration projects, a Web service provider will update the interface it supplies to its consumers by republishing the WSDL document describing the interface on the company's intranet or internet site. It's important for software vendors and integration specialists to be able to quickly adapt their deployed code to respond to these interface changes. Where wholesale functional code changes are not required (for example, where new message formats are being added, rather than existing formats being changed), the WebSphere Message Broker scenario presented in this article addresses this requirement using the following technologies:

- ESQL to convert an HTTPRequest node to carry out an HTTP GET
- ESQL to regulate versioning of a WSDL document
- A stylesheet for use by the XML transform node, to extract a schema from a WSDL
- A Java plug-in node to write the schema document to the file system
- A Java plug-in node to run Ant build files
- An Apache Ant build file that uses the Message Broker command line utilities for automated deployment (mqsicreatemsgdefs, mqsicreatebar and mqsideploy)

Outside of the main scenario, the plug-in nodes we've written will help developers get more value from Message Broker. One node writes files to the broker's local file system (for production use, you should also consider implementing the Message Broker File Extender product). The second node executes Ant scripts. Ant is a Java- and XML-based make tool developed by the Apache Foundation. By calling an Ant script, a message flow can cause a number of different processes to be carried out. For instance, with the right Ant extension tasks installed, an FTP transfer could be triggered, a log file could be written, or another operating system process could be executed. The Ant node thus extends the functionality of a broker deployment, and also presents some opportunities for simple automation.
The message flow shown in Figure 1 illustrates the stages involved in the end-to-end scenario provided in this article.

Figure 1. DynamicWSUpdate message flow

Install the FileUdn in the toolkit

- Unzip Scenario.zip, provided in the Download section of this article, to the root of your directory, such as C: on Windows™. All of the samples and scenarios are provided within a single directory, named Resources. Note: If you choose to unzip the files to a different location, you may have to reconfigure the Ant build file and the message flow properties later.
- Start the Message Broker V6 toolkit. When prompted, select the workspace C:\Resources\UdnDevWorkspace. This workspace provides all the source files for the message flow nodes and Update Manager Site projects. This directory structure is used by the Eclipse Install/Update Manager to find features and plug-ins to install in a workbench.
- When the toolkit starts, select Software Updates => Find and Install from the Help menu.
- Select Search for new features to install, then click Next.
- Click New Local Site.
- Select the FileUdn Update Site project at C:\Resources\UdnDevWorkspace\com.ibm.issw.fileudn.site.
- Expand the Update Site project and select FileUdnFeature, then click Next.
- Check com.ibm.issw.fileudn, then finish the installation.

Install the AntUdn in the toolkit

- Start the Message Broker toolkit. When prompted, select the workspace C:\Resources\UdnDevWorkspace.
- When the toolkit starts, select Software Updates => Find and Install from the Help menu.
- Select Search for new features to install, then click Next.
- Click New Local Site.
- Select the AntUdn Update Site project at C:\Resources\UdnDevWorkspace\com.ibm.issw.antudn.site.
- Expand the Update Site project and select AntUdnFeature, then click Next.
- Check com.ibm.issw.antudn, then finish the installation.

Install FileUdn and AntUdn in the runtime broker

- Copy the files ComIbmIsswAntUdn.par and ComIbmIsswFileUdn.par into the directory C:\IBM\MQSI\6.0\jplugin, or the equivalent location of the jplugin directory on your broker machine.
- Restart the Message Broker.
- Start the Message Broker toolkit. When prompted, select the workspace C:\Resources\ScenarioWorkspace. This workspace provides the message flow project for the scenario.
- When the toolkit starts, select Window => Open Perspective => Broker Administration Perspective.
- Open the message flow project named DynamicWSUpdate and explore the message flow.
- Deploy the broker archive file DynamicWSUpdate.bar to your runtime broker, and then close the toolkit before running messages through the message flow. The toolkit must be closed because the Ant build file executes Message Broker command utilities that update the scenario workspace. The workspace contains a lock file that stops these headless commands from executing if the toolkit is open. This avoids Eclipse project metadata becoming corrupted.

If you've unzipped the Scenario.zip file to a directory other than the root of your C drive, you'll need to reconfigure some of the message flow node properties before deploying the flow to your broker. If you unzipped the file to the root, you may skip to step 5 below.

- On the XML transform node ExtractXSD, set the property Stylesheet Directory.
- On the FileUdn node, the property FileName is set to Output.xsd and the property DirectoryName is set to C:\Resources\FileUdnOutput. The latter value should match the directory name in the createmsgdefs target of the Ant build file DynamicWSUpdate_AntBuild.xml, which is used in the scenario.
- On the AntUdn node, set the property AntBuildFile to the location of the build file DynamicWSUpdate_AntBuild.xml. By default this is C:\Resources\DynamicWSUpdate_AntBuild.xml.
Also on the AntUdn node, set the property AntLogFile to the location of the log file DynamicWSUpdate_AntBuild.xml. By default this is C:\Resources\Logs\AntLog.log. - In the Ant Build file, DynamicWSUpdate_AntBuild.xml, the property named "base" is set to the broker toolkit's base installation path, C:\IBM\MessageBrokersToolkit\6.0\. If your broker is installed in a different location, change this setting. - The last section of the Ant build file is concerned with automating the mqsideploy command. This command must be pointed at your domain connection file, so that the deploy operation can be successfully directed to your configuration manager. On import, the build file is expecting a project named Servers to contain the connection properties inside a file configdomain.configmgr. These parameters may have to be changed to match your Message Broker installation (Queue Manager name is set to QM1 and Listener port set to 1414). The build file is also set up to deploy to a broker named BK1 and its execution group named default. You may want to edit these settings. Technical description of the scenario This section provides a full description of the scenario: Section 1: IN => Convert => HTTPRequest The first three nodes in the message flow are concerned with contacting the internet URL where the WSDL document is published. The compute node, named Convert, consists of a single module whose purpose is to configure the LocalEnvironment tree in order to control the behavior of the following HTTPRequest node. The values are recognised by the HTTPRequest node, and cause it to send an HTTP GET to the internet URL rather than an HTTP POST, as it would in its standard mode of operation. So that changes to the LocalEnvironment are not lost after leaving the node, remember to alter the Compute Mode property of the compute node. Listing 1. Convert node ESQL The ESQL also sets the value of the RequestURL field to the value carried in the URL field of the input message. 
The test message supplied with this article provides a URL that, at the time of writing, points to an example WSDL document available on the internet. Obviously, although correct at the time of publishing, this address will be subject to change in the future.

Section 2: Standardize => ExtractXSD
WSDL documents provide a contract between a Web service consumer and a Web service provider. The information contained includes a full specification for the service's interface, location and bindings. Like all the best open standard strategies, this specification is designed to be independent of implementation. For this reason, an XML schema definition is used to specify the precise format of messages that will be exchanged in the Web service interaction.

Message Broker provides support to wrap existing legacy systems with new Web service interfaces. In order to provide this function, the message broker requires details specifying metadata for each message that will be received, parsed and validated. The Message Repository Manager (MRM) domain is used in circumstances where validation support is required. In these situations, a message dictionary must be deployed to a broker installation to facilitate parsing.

This article describes a Message Broker solution that allows changes to an internet-published WSDL interface to be dynamically extracted and then deployed to a Message Broker Web services client application. If the Web service provider changes the message formats used to communicate with its Web service requester, changes must be incorporated in the client code. In the case of Message Broker wrapper message flows, these changes require the redeployment of a message set dictionary. The first stage in this new deployment is the extraction of metadata from the online WSDL document. Listing 2 shows the basic anatomical structure of a WSDL document.

Listing 2. Anatomical structure of a WSDL document

Section 2 of the message flow extracts the schema from the WSDL document that is passed as part of the logical tree from the HTTP node. Following the stated aims of reusability and promotion of open standards, we've chosen to execute this stage of the scenario using the Message Broker's XML transform node, which alters the logical tree using an XSLT stylesheet.

Before invoking the stylesheet transformation, the compute node Standardise slightly alters the captured WSDL document. This code is only necessary so that the scenario can be used with a wide array of publicly published WSDL documents. When using the stylesheet transform node, the stylesheet is sensitive to the layout of XML comments between elements in the WSDL. The ESQL removes comments from the WSDL document in the levels above the beginning of the XSD file that is being extracted. With tighter control over the precise syntax of the WSDL being provided to the flow, the scenario's complexity could be significantly reduced.

The stylesheet starts by matching the root element of the document that it is passed, using the regular expression /*. The XML element that is found is stored in the variable named wibble, and then its local name is compared to definitions and its namespace is compared to the WSDL namespace. Note the use of XML entities within the comparison, in order to escape the quotation characters. The child elements of definitions are then searched using a variable named wobble and compared with the expected child called types. If the bitstream found by the HTTP request has successfully located a valid and well-formed WSDL document, then types will contain a child element named Schema, which resides in the XML Schema namespace. If the above checks result in a valid schema being found embedded in the WSDL at the correct level, the schema is copied to the output logical tree.

Listing 3. The XSLT stylesheet

Section 3: PrepareEnv => FileUdn
Section 3 takes the extracted schema file and writes it to the broker's local file system using a user-defined File output node. The Message Broker does not supply this form of output node among its primitive nodes (readers may be interested in finding out more about IBM's Message Broker File Extender product). The node, which is also supplied as a download with this article, has two basic mandatory properties: FileName and DirectoryName. These properties dictate the location where the data should be written in the file system. The buffered data that is written to the file includes the content of the child, named File, of the global environment section of the logical tree.

The PrepareEnv compute node simply takes the Binary Large OBject (BLOB) message body that is output by the XML transform node and copies it to the Environment.File section of the tree. This extra step could easily have been contained within the Java code of the File node, but for the purpose of this article, we chose to code both the plug-in nodes in as generic a manner as possible, in order to maximize their re-use outside of this particular scenario. The message is forwarded unchanged to the output terminal of the PrepareEnv compute node. From this point on in the flow, the propagation of a message body from one node to the next does not actually supply any data for the nodes' operation, but simply acts as a trigger for the execution of the Ant build file.

The final section of the scenario uses a user-defined Java node, coded to execute an Ant build file. The Apache Ant framework provides a Java and XML based framework for automating common build tasks. The Ant plug-in is a generic piece of coding that successfully runs any Ant build file that conforms to the Ant 1.6.5 build level.
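For reference, the anatomical structure that Listing 2 describes can be sketched as a minimal WSDL 1.1 skeleton. The element order and namespaces come from the WSDL 1.1 and XML Schema specifications; the names and placeholder contents are purely illustrative and are not taken from the scenario's actual WSDL document:

```xml
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/">
  <types>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <!-- the embedded XML schema that the ExtractXSD stylesheet pulls out -->
    </xsd:schema>
  </types>
  <message name="..."/>   <!-- abstract message definitions -->
  <portType name="..."/>  <!-- abstract operations -->
  <binding name="..."/>   <!-- concrete protocol bindings -->
  <service name="..."/>   <!-- endpoint locations -->
</definitions>
```

The stylesheet's checks described above correspond directly to this layout: the root must be definitions in the WSDL namespace, and its types child must contain a schema element in the XML Schema namespace.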
The build file supplied with the scenario imports the XML schema document that was written to the file system by the preceding node, and creates a Message Broker message definition file inside a message set project. Having imported the schema, the build file then uses the two Message Broker command line utilities: mqsicreatebar and mqsideploy. mqsicreatebar generates a Broker Archive File that contains a compiled message set dictionary. mqsideploy deploys the archive.

The scenario took a WSDL document from a public internet page and updated the online broker's deployed message set dictionary, using Web technologies and open standards where available. The user-defined Java nodes supplied with this article provide significant value to message flow developers outside of the scenario.

Information about download methods

Learn
- Apache Ant: Learn more about Ant.
- Web Services Definition Language (WSDL): Read the WSDL specification.
- What's new in WebSphere Message Broker V6: Read this developerWorks article to learn more about the new features of Message Broker V6.
- IBM WebSphere Message Broker V6.0 announcement: Read the announcement letter for details on the prereqs and features of Message Broker V6.
- IBM WebSphere Message Broker V6.0 for z/OS announcement: Read the announcement letter for details on the prereqs and features of Message Broker V6 for z/OS.
- IBM WebSphere Message Broker File Extender: Get product information.
- WebSphere Business Integration zone: Get the latest WebSphere Message Broker technical resources and information.
- WebSphere Web services zone: Get the latest WebSphere Web services technical resources and information.

Get products and technologies
- IBM WebSphere Message Broker SupportPacs: Get downloadable code and documentation that supports WebSphere Message Broker.

Ben Thompson is a Senior IT Specialist in IBM Software Group. Andy Piper is a Senior IT Specialist with IBM Software Services for WebSphere in the United Kingdom.
His role is to work with leading customers to provide consultancy on architecture, design and implementation. He was also part of the team that delivered the beta program for WebSphere Message Broker Version 6 and has developed and presented education on the new function. Andy is also a co-author of the IBM Redbook "Migration to WebSphere Message Broker Version 5," and a contributor to the SupportPac "IC04: WBIMB V5 Change Management and Naming Standards." You can reach Andy at andy.piper@uk.ibm.com or via his blog at The lost outpost.
http://www.ibm.com/developerworks/websphere/library/techarticles/0602_thompson/0602_thompson.html
round() function in C++

In this tutorial, we will learn about the round() function in C++. This function is defined in the cmath header. We will be using it in our C++ program to round a given number, with halfway cases rounded away from zero.

The syntax for the round() function is as follows:

round(number);

The return type for this function is the same as the type of the input parameter, i.e. number in the above syntax. It can be double, float, long double, etc.

Examples of the C++ round() function:

The round() function in C++ rounds a number to the nearest integral value. Here are a few examples that illustrate the working of this function. I will show you multiple examples of this function in the same program.

Suppose the input number is 45.8. If we pass this number as input to the round() function, it returns 46. round(45.8) = 46. Similarly, round(34.4) = 34. Let us implement it in a C++ program. This will clear all the doubts.

#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    cout << round(-0.9) << endl;
    cout << round(-0.7) << endl;
    cout << round(-0.5) << endl;
    cout << round(-0.3) << endl;
    cout << round(-0.1) << endl;
    cout << round(0.2) << endl;
    cout << round(0.4) << endl;
    cout << round(0.6) << endl;
    cout << round(0.8) << endl;
    return 0;
}

The above program gives the below output:

-1
-1
-1
-0
-0
0
0
1
1

As you can observe from the above example, the round() function works very well for rounding floating-point numbers. This function can be used in many C++ applications. Thank you.

Also, read: C++ program to round off numbers nearest 10
https://www.codespeedy.com/round-function-in-cpp/
The world of building user interfaces can be a complex landscape to navigate. The sheer number of tools that are at the disposal of a developer is overwhelming. In my last tutorial, we discussed a few of those tools (React, Webpack and Babel) and went over the basics of what they are and how they work. Moreover, we also learnt how we can stitch them together to build an application code base from scratch that is suitable for development.

The application that was pieced together has minimal features. It does not allow us to test the code we're writing, among other things, and it's certainly not suitable for deploying to production. In this guide, we will build on top of the setup we have and take it further.

The introduction segments can be skipped. Click here to jump straight to the step by step guide.

An application consists of features, and every feature has a life cycle --- from it being developed, then going through testing and finally being deployed to production, it lives on different environments (envs). The environments serve different purposes and therefore their needs vary accordingly. For instance, we don't care about performance or optimization in the dev env, neither do we care about minifying the code. Often, we enable tools in the dev env that help us write code and debug it, like source maps, linters, etc. On the other hand, in the prod env, we absolutely care about stuff like application performance and security, caching, etc. The tools we are going to use while walking through this guide will not cover all the items we discussed here; however, we will go through the basics (and some more) of how environment configuration works and why it is useful.

A test framework provides us with a platform and a set of rules that allows us to test the code we're writing. Any application that is intended to be deployed for users must be tested. Here is why:

The frameworks come in various different flavors --- and they all have their pros and cons.
For our purposes, we will use two of the more popular frameworks: Jest to test functional JS and Enzyme to test our React components.

As the application grows in size, it starts to present maintainability and scalability concerns for developers. CSS is one such area where the code can get real messy real fast. Sass is a tool that helps us in this regard:

No reason to not use a tool that will surely improve our development workflow, right?

Another point of concern as the code base begins to grow is ensuring high standards of code quality. This is especially important when there are multiple teams or developers working on the same code base. ESLint saves the day here --- it enforces common coding standards, or style guides, for all devs to follow. There are many industry approved style guides out there, for instance Google and AirBnB. For our purposes, we will use the AirBnB style guide.

This encompasses all the pretty stuff that will be used in the application --- custom fonts, font icons, SVGs and images. They are placed in a public folder, although an argument can be made for a different setup.

Please note: The rest of the guide builds on top of the last piece I wrote. You can either follow that first before proceeding here, or do the following:
- Run node -v to check your Node version. If the version does not match the requirements, grab the latest from here.
- Follow the README.
- After npm install, run npm start to compile the code and spin up the dev server. At this point, you should see a new browser tab open, rendering a hello world component.

Make sure you're inside the repository directory that you just "git cloned" before trying out the command.

Assuming the repo has been successfully downloaded, open it up in a text editor of your choice. You should see a file called webpack.config.js. This is where the webpack configs currently live in their entirety.
In order to separate production and development builds, we will create separate files to host their configs, and another file will contain settings that are common between them, in the interest of keeping our code DRY. Since there will be at least 3 config files involved, they will need to merge with each other at compile time to render the application. To do this, we need to install a utility package called webpack-merge to our dev dependencies.

npm install webpack-merge --save-dev

Then rename webpack.config.js to webpack.common.js. As the name implies, this will contain the common configs. We will create two more files:
- webpack.production.js --- to contain production env settings
- webpack.development.js --- to contain development env settings

While we're on the subject of configuring webpack builds, we will take the opportunity to install a couple of npm packages that will help with our tooling and optimize our builds. First, we will install a package called CleanWebpackPlugin.

npm install clean-webpack-plugin --save-dev

Webpack puts the output bundles and files in the /dist folder, because that is what we've configured it to do. Over time, this folder tends to become cluttered as we do a build every time (through hot reloading) we make a code change and save. Webpack struggles to keep track of all those files, so it is good practice to clean up the /dist folder before each build in order to ensure the proper output files are being used. CleanWebpackPlugin takes care of that.

We will install another package called path. It will allow us to programmatically set entry and output paths inside webpack.
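Stepping back to webpack-merge for a moment: its behavior can be illustrated with a toy deep-merge. This is not the real library's implementation --- it only mimics the idea that plain values from the env-specific config win, arrays (like plugin lists) are concatenated, and nested config sections are merged recursively:

```javascript
// Toy illustration of what webpack-merge does conceptually.
function toyMerge(base, override) {
  const out = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const current = out[key];
    if (Array.isArray(current) && Array.isArray(value)) {
      out[key] = [...current, ...value]; // e.g. plugins from common + env config
    } else if (
      current && typeof current === 'object' && !Array.isArray(current)
      && value && typeof value === 'object' && !Array.isArray(value)
    ) {
      out[key] = toyMerge(current, value); // merge nested config sections
    } else {
      out[key] = value; // env-specific value wins (e.g. mode)
    }
  }
  return out;
}

const common = { mode: 'none', plugins: ['CleanWebpackPlugin'] };
const dev = { mode: 'development', plugins: ['HotModuleReplacementPlugin'] };
console.log(toyMerge(common, dev));
```

The real webpack-merge handles many more cases (loader rules, custom merge strategies), but this is the mental model for how the common and env-specific files combine.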
npm install path --save

Now that we have the necessary packages in place to configure a clean, optimized webpack build, let's change webpack.common.js to contain the following code:

const path = require('path');
const { CleanWebpackPlugin } = require('clean-webpack-plugin');
const HtmlWebPackPlugin = require("html-webpack-plugin");

module.exports = {
  output: {
    filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx)$/,
        exclude: /node_modules/,
        use: {
          loader: "babel-loader"
        }
      },
      {
        test: /\.html$/,
        use: [
          {
            loader: "html-loader"
          }
        ]
      }
    ]
  },
  plugins: [
    new CleanWebpackPlugin(),
    new HtmlWebPackPlugin({
      template: "./src/index.html",
      filename: "./index.html",
    })
  ]
};

Add the following lines to webpack.development.js:

const merge = require('webpack-merge');
const common = require('./webpack.common');

module.exports = merge(common, {
  mode: 'development',
  devtool: 'inline-source-map',
  devServer: {
    contentBase: './dist',
    hot: true
  }
});

... and these lines to webpack.production.js:

const merge = require('webpack-merge');
const common = require('./webpack.common');

module.exports = merge(common, {
  mode: 'production'
});

There are a few changes here from its previous iteration that require explanation:

webpack.common.js
- The output property renames the bundle file and defines the path to where it can be found.
- CleanWebpackPlugin to clean up the dist folder.

webpack.development.js
- source maps

webpack.production.js
- production mode

That was a lot of information! We have accomplished a significant step towards setting up the project. Although I have tried my best to explain the concepts and code changes, I would advise additional reading into each of these topics to get a complete grasp. Webpack is a beast --- it might be a stretch even for the smartest developer to completely understand everything in the first read through.

Let's move on to the next step. We will add test frameworks to our code base in this step!
There are two frameworks we need to add, one to test functional JS and the other to test React components. They are called Jest and Enzyme, respectively. Once we configure that, we will write a small, uncomplicated JS module and React component to try them out. We will set them up and work with them in separate steps. Let's get started!

We will install Jest first as a dev dependency, since it is a test framework and it has no use in the production bundle. To install:

npm install jest --save-dev

Next, we will add a file called jest.config.js to the root directory of our codebase that will dictate how we want to configure our tests. This is the official documentation page for Jest that contains details of every piece of configuration --- it is worth giving a read. We will not need all the pieces, thus I have condensed the necessary pieces to write our own custom config file. It contains detailed comments on what each piece is doing. This is what the jest.config.js file will look like for the project we're configuring:

// For a detailed explanation regarding each configuration property, visit:
//
module.exports = {
  // All imported modules in your tests should be mocked automatically
  // automock: false,

  // Stop running tests after the first failure
  // bail: false,

  // Respect "browser" field in package.json when resolving modules
  // browser: false,

  // The directory where Jest should store its cached dependency information
  // cacheDirectory: "C:\\Users\\VenD\\AppData\\Local\\Temp\\jest",

  // Automatically clear mock calls and instances between every test
  clearMocks: true,

  // Indicates whether the coverage information should be collected while executing the test
  // collectCoverage: false,

  // An array of glob patterns indicating a set of files for which coverage information should be collected
  collectCoverageFrom: ['src/tests/*.test.js'],

  // The directory where Jest should output its coverage files
  coverageDirectory: 'src/tests/coverage',

  // An array of regexp pattern strings used to skip coverage collection
  coveragePathIgnorePatterns: [
    "\\\\node_modules\\\\"
  ],

  // A list of reporter names that Jest uses when writing coverage reports
  coverageReporters: [
    "json",
    "text",
    "lcov",
    "clover"
  ],

  // An object that configures minimum threshold enforcement for coverage results
  coverageThreshold: {
    "global": {
      "branches": 80,
      "functions": 80,
      "lines": 80
    }
  },

  // Make calling deprecated APIs throw helpful error messages
  errorOnDeprecated: false,

  // Force coverage collection from ignored files using an array of glob patterns
  // forceCoverageMatch: [],

  // A path to a module which exports an async function that is triggered once before all test suites
  // globalSetup: null,

  // A path to a module which exports an async function that is triggered once after all test suites
  // globalTeardown: null,

  // A set of global variables that need to be available in all test environments
  // globals: {},

  // An array of directory names to be searched recursively up from the requiring module's location
  // moduleDirectories: [
  //   "node_modules"
  // ],

  // An array of file extensions your modules use
  moduleFileExtensions: ['js', 'json', 'jsx'],

  // A map from regular expressions to module names that allow to stub out resources with a single module
  // moduleNameMapper: {},

  // An array of regexp pattern strings, matched against all module paths before considered 'visible' to the module loader
  // modulePathIgnorePatterns: [],

  // Activates notifications for test results
  // notify: false,

  // An enum that specifies notification mode. Requires { notify: true }
  // notifyMode: "always",

  // A preset that is used as a base for Jest's configuration
  // preset: null,

  // Run tests from one or more projects
  // projects: null,

  // Use this configuration option to add custom reporters to Jest
  // reporters: undefined,

  // Automatically reset mock state between every test
  resetMocks: false,

  // Reset the module registry before running each individual test
  // resetModules: false,

  // A path to a custom resolver
  // resolver: null,

  // Automatically restore mock state between every test
  restoreMocks: true,

  // The root directory that Jest should scan for tests and modules within
  // rootDir: null,

  // A list of paths to directories that Jest should use to search for files in
  // roots: [
  //   "<rootDir>"
  // ],

  // Allows you to use a custom runner instead of Jest's default test runner
  // runner: "jest-runner",

  // The paths to modules that run some code to configure or set up the testing environment before each test
  // setupFiles: ['<rootDir>/enzyme.config.js'],

  // The path to a module that runs some code to configure or set up the testing framework before each test
  // setupTestFrameworkScriptFile: '',

  // A list of paths to snapshot serializer modules Jest should use for snapshot testing
  // snapshotSerializers: [],

  // The test environment that will be used for testing
  testEnvironment: 'jsdom',

  // Options that will be passed to the testEnvironment
  // testEnvironmentOptions: {},

  // Adds a location field to test results
  // testLocationInResults: false,

  // The glob patterns Jest uses to detect test files
  testMatch: ['**/__tests__/**/*.js?(x)', '**/?(*.)+(spec|test).js?(x)'],

  // An array of regexp pattern strings that are matched against all test paths, matched tests are skipped
  testPathIgnorePatterns: ['\\\\node_modules\\\\'],

  // The regexp pattern Jest uses to detect test files
  // testRegex: "",

  // This option allows the use of a custom results processor
  // testResultsProcessor: null,

  // This option allows use of a custom test runner
  // testRunner: "jasmine2",

  // This option sets the URL for the jsdom environment. It is reflected in properties such as location.href
  testURL: '',

  // Setting this value to "fake" allows the use of fake timers for functions such as "setTimeout"
  // timers: "real",

  // A map from regular expressions to paths to transformers
  // transform: {},

  // An array of regexp pattern strings that are matched against all source file paths, matched files will skip transformation
  transformIgnorePatterns: ['<rootDir>/node_modules/'],

  // An array of regexp pattern strings that are matched against all modules before the module loader will automatically return a mock for them
  // unmockedModulePathPatterns: undefined,

  // Indicates whether each individual test should be reported during the run
  verbose: false,

  // An array of regexp patterns that are matched against all source file paths before re-running tests in watch mode
  // watchPathIgnorePatterns: [],

  // Whether to use watchman for file crawling
  watchman: true,
};

According to our configuration, our tests should live inside a directory called tests inside /src. Let's go ahead and create that --- and while we're on the subject of creating directories, let's create three in total that will allow us to set ourselves up for future steps of the guide:
- tests --- directory that will contain our tests
- core/js --- we will place our functional JS files here, the likes of helpers, utils, services, etc.
- core/scss --- this will contain browser resets, global variable declarations. We will add these in a future piece.

Alright, we are making progress!! Now that we have a sweet test setup, let's create a simple JS module called multiply.js inside core/js:

const multiply = (a, b) => {
  return a * b;
};

export default multiply;

... and write tests for it, by creating a file called multiply.spec.js inside the tests directory.
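Before looking at the spec itself, it can help to demystify what Jest's expect API is doing. Here is a deliberately tiny stand-in, purely illustrative --- Jest's real implementation is far richer --- showing the matcher pattern: expect(actual) returns an object of matcher functions closed over the actual value:

```javascript
// Toy version of Jest's expect(), for intuition only.
function expect(actual) {
  const matchers = {
    toBeDefined() {
      if (actual === undefined) {
        throw new Error('expected a defined value');
      }
    },
    toEqual(expected) {
      if (actual !== expected) {
        throw new Error(`expected ${expected}, received ${actual}`);
      }
    },
  };
  // `.not` inverts a matcher
  matchers.not = {
    toEqual(expected) {
      if (actual === expected) {
        throw new Error(`expected value not to equal ${expected}`);
      }
    },
  };
  return matchers;
}

// usage mirroring the spec we are about to write:
expect(2 * 3).toEqual(6);
expect(3 * 5).not.toEqual(10);
```

A failing matcher throws, and the test runner catches the error and reports the test as failed --- that is the whole contract.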
import multiply from '../core/js/multiply';

describe('The Multiply module test suite', () => {
  it('is a public function', () => {
    expect(multiply).toBeDefined();
  });

  it('should correctly multiply two numbers', () => {
    const expected = 6;
    const actual1 = multiply(2, 3);
    const actual2 = multiply(1, 6);
    expect(actual1).toEqual(expected);
    expect(actual2).toEqual(expected);
  });

  it('should not multiply incorrectly', () => {
    const notExpected = 10;
    const actual = multiply(3, 5);
    expect(notExpected).not.toEqual(actual);
  });
});

The final piece of configuration is to add a script in our package.json that will run all our tests. It will live inside the scripts property:

"scripts": {
  "test": "jest",
  "build": "webpack --config webpack.production.js",
  "start": "webpack-dev-server --open --config webpack.development.js"
},

Now, if we run npm run test in our terminal (inside the root directory of the project), it will run all our tests and produce an output like this. You can keep adding more modules and test suites in a similar manner. Let's move on to the next step!

It's time to install Enzyme and test our React components! We need to install a version of Enzyme that corresponds to the version of React we're using, which is 16. In order to do that, we need to do the following, keeping in mind that this tool will also be installed as a dev dependency because, like Jest, the test framework does not need to be compiled into the production bundle:

npm install enzyme enzyme-adapter-react-16 --save-dev

Next, we will create enzyme.config.js at the root directory of the project, similar to what we did for Jest. This is what that file should look like:

import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';

configure({ adapter: new Adapter() });

Now, if you go take a look at line 119 in jest.config.js, you will see that we have done ourselves a favor by preparing for this moment where we set up Enzyme to work with Jest.
All that needs to be done is uncomment line 119 and our setup will be complete!

Let's write a test for the <App /> component to see if what we've set up is working. Create a directory called components inside tests --- this will hold all the tests for the components you will write in the future. The separate directory is created to keep functional and component tests apart. This segregation can be done in any way, as long as all the tests live inside the src/tests directory. It will help in the future when the app starts to grow.

Inside the src/tests/components directory, create a file called App.spec.js and add the following lines:

import React from 'react';
import { shallow } from 'enzyme';
import App from '../../components/App';

describe('The App component test suite', () => {
  it('should render component', () => {
    expect(shallow(<App />).contains(<div>Hello World</div>)).toBe(true);
  });
});

Now if we run our test script in the terminal, you will see this test is running and passing!

Please note: In steps 2 and 3, we have simply set up Jest and Enzyme to work together in our code base. To demonstrate that the setup is working, we have written two overly simple tests. The art of writing good tests is an entirely different ball game and these tests should not be taken as any form of guide/direction.

In this part of the guide, we will configure our code base to lend .scss support. However, before we can learn to run, we need to learn to walk --- that means we will have to get css to load first. Let's go grab the necessary npm packages:

npm install css-loader style-loader --save-dev
npm install node-sass sass-loader --save

In the explanation block below, you can click the names of the tools that appear like this to visit their official documentation.

css-loader is a webpack plugin that interprets and resolves syntax like @import or url() that are used to include .scss files in components.

style-loader is a webpack plugin that injects the compiled css file in the DOM.
node-sass is a Node.js library that binds to a popular stylesheet pre-processor called LibSass. It lets us natively compile .scss files to css in a node environment.

sass-loader is a webpack plugin that will allow us to use Sass in our project.

Now that we have installed the necessary npm packages, we need to tell webpack to make use of them. Inside webpack.common.js, add the following lines in the rules array, just below where we're using babel-loader and html-loader:

{
  test: /\.s[ac]ss$/i,
  use: [
    // Creates `style` nodes from JS strings
    'style-loader',
    // Translates CSS into CommonJS
    'css-loader',
    // Compiles Sass to CSS
    'sass-loader',
  ]
}

The setup is complete! Let's write some sass!!

In the src/components directory, create a file called App.scss and add the following lines:

/* selector name assumed */
.App {
  letter-spacing: 1px;
  padding-top: 40px;

  & > div {
    display: flex;
    font-size: 25px;
    font-weight: bold;
    justify-content: center;
    margin: 0 auto;
  }
}

The explanation of sass syntax is beyond the scope of this article. This is an excellent resource for beginners to learn more in depth. Now, save the file and boot up the project by running npm run start. The application should load with the style rules we just wrote.

It's time to install ESLint. Similar to what we've been doing so far, we need to install a few npm packages and then add a config file to our code base. This is a tool that is needed purely for development purposes, so we will install it as a dev dependency. Let's get started!

npm install eslint eslint-config-airbnb-base eslint-plugin-jest --save-dev

- eslint-config-airbnb-base is the airbnb style guide we're asking eslint to apply to our project.
- eslint-plugin-jest is the eslint plugin for the jest test framework.

The airbnb style guide has peer dependencies that need to be installed as well.
You can list them by running npm info "eslint-config-airbnb@latest" peerDependencies in your terminal; to install them, do the following

npx install-peerdeps --dev eslint-config-airbnb

Next, we need to create a file called .eslintrc.json (note the . at the beginning, indicating it's a hidden file) at the root directory of the project, similar to how the other config files (webpack, jest, enzyme, babel) have been added, and add these lines

{
  "extends": "airbnb",
  "plugins": ["jest"],
  "env": {
    "browser": true,
    "jest": true
  },
  "rules": {
    "arrow-body-style": [2, "always"],
    "react/jsx-filename-extension": [1, { "extensions": [".js", ".jsx"] }],
    "no-unused-expressions": "off",
    "max-len": "off",
    "import/no-extraneous-dependencies": "off",
    "react/destructuring-assignment": "off",
    "react/prop-types": "off"
  }
}

The official documentation is a good read if you're looking to understand in detail how configuring ESLint works. The most pertinent part of that file is the rules object --- here we're basically overriding some of the rules from the style guide to suit the specific needs of our project. These are not set in stone, so please feel free to play with them to best suit your needs, but it's probably not a good idea to override too many of the rules --- that defeats the purpose of using a style guide in the first place.

Let's add a script to package.json that will apply the airbnb style guide to our code base. We need to tell ESLint what files and/or directories we would like it to scan --- so we will tell it to scan all JS files

"lint": "eslint '**/*.js' --ignore-pattern node_modules"

Now, if you run npm run lint in your terminal, ESLint will scan the file types and patterns specified in the script and display a list of issues.
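As an aside, ESLint can also read ignore patterns from an .eslintignore file at the project root, which keeps the npm script shorter as exclusions grow. A minimal one for this setup might be (the dist entry assumes webpack emits its bundle there):

```text
node_modules
dist
```

With this file in place, the lint script can drop the --ignore-pattern flag entirely.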
Fair warning, the project will have quite a few errors, but if you're using popular code editors like IDEA products, Visual Studio Code, Sublime, etc., they provide out of the box support to fix most of these issues in one quick stroke (format document). If the large number of errors is proving to be a hindrance to your learning, please feel free to uninstall ESLint by running

npm uninstall eslint eslint-config-airbnb-base eslint-plugin-jest --save-dev

We're almost done with setting up our project --- the finish line is within our sights!! In this last step, we will configure our project to make use of various static assets like images, SVGs, icons and custom typefaces.

Any respectable front end setup should have varying fonts displaying information on the page. The weight of the font, along with its size, is an indicator of the context of the text being displayed --- for instance, page or section headers tend to be larger and bolder, while helper texts are often smaller, lighter and may even be in italics.

There are multiple ways of pulling custom fonts into an application. Large enterprise code bases usually buy licenses to fonts and serve their static assets from the server that hosts the application. The process to do that is slightly complicated --- we need a dedicated piece to walk through that. The most convenient way of using custom fonts is to use a public domain library that has a large collection and is hosted on a CDN (Content Delivery Network), like Google Fonts. It is convenient because all we need to do is select a couple of fonts we like and then simply embed their url in our static markup index.html ...and we're good to go!! So let's get started.

For our purposes, we shall use the Roboto Mono font family. Open up index.html and paste the following stylesheet link in the head

<link rel="stylesheet" href="">

We're done. Now we can use font-family: 'Roboto Mono' in any of our .scss files. We can use any number of fonts in this way.
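Until that dedicated piece, here is roughly what self-hosting involves: ship the font files with the app and declare them with an @font-face rule in any .scss file. The family name and path below are placeholders, not part of this project:

```scss
@font-face {
  /* 'MyCustomFont' and the path are placeholders -- substitute your own */
  font-family: 'MyCustomFont';
  src: url('./fonts/MyCustomFont.woff2') format('woff2');
  font-weight: normal;
  font-style: normal;
}
```

Once declared, the family can be used exactly like a CDN-hosted one: font-family: 'MyCustomFont'.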
Images, like fonts, are an essential part of a front end setup. In order to enable our project to utilize images in the application, we need to install a loader for webpack. This step is identical to what we've done multiple times in this guide --- install the loader and add a few lines to the webpack config to make use of it

npm install url-loader --save-dev

... then add the following lines to the rules array in webpack.common.js

...
{
  test: /\.(jpg|png)$/,
  use: {
    loader: 'url-loader',
  },
},
...

The project is now ready to use images of type .jpg and .png. To demonstrate, create a public/images folder at the root directory of the project. Then add any image to the subdirectory images. For our purposes, I downloaded a free image from Unsplash and named it coffee.png

Next, we will create a directory inside src/components called Image --- then create the Image component.

Image.js

import React from 'react';

const Image = (props) => {
  return (
    <img
      src={props.src}
      alt={props.alt}
      height={props.height}
      width={props.width}
    />
  );
};

export default Image;

Then, import both the Image component and the actual image coffee.png in App.js. At this point, we will have to make minor edits to App.js to use the image

import React from 'react';
import './App.scss';

// component imports
import Image from './Image/Image';

// other imports
import coffee from '../../public/images/coffee.png';

const App = () => {
  return (
    <div>
      <span>Hello World</span>
      <Image src={coffee} />
    </div>
  );
};

export default App;

Now, if you start the application, you will see the image is being loaded on the page.

That concludes our step by step guide to setting up a modern React project from scratch. There was a lot of information to digest here, but come to think of it, we have also come a long way from the minimal setup we did earlier. I hope the guide has been helpful in learning some key concepts in the area of modern front end setup tooling.
The future pieces I have planned for this series are package.json scripts and global scss stylesheets like resets and variables. Please feel free to leave a comment and share among your friends. I will see you in the next piece! The repo for the advanced setup can be found here.
https://nasidulislam.hashnode.dev/advanced-react-webpack-4-babel-7-web-application-setup-ck8ytdqpn0179n3s1us66l466?guid=none&deviceId=519149a5-c3ff-45c1-bdb9-2d91328d261f
Selenium cannot handle file downloading because browsers use native dialogs for downloading files, which cannot be controlled by JavaScript. There are some third party tools with which we can automate download functionality. Some of these tools are AutoIt and Sikuli. I have used AutoIt for downloading a file. If you want to use an AutoIt script in your Selenium script, then:

- Get an exe file of the AutoIt script
- Call the exe file from the Selenium script

1. Get the .exe file of the AutoIt script: Here is the AutoIt script for downloading a file from a website. This script takes one parameter as an input, i.e. the exact location from where we would like to download the file. And the output (downloaded file) will be stored at C:\Users\Public\Downloads. Save the below script with the extension Download.au3 and run it; it will generate a Download.exe file. You can also download the .exe format of the script from here.

#comments-start
InetGet ( "URL" ,"filename" , options , background)
>>URL
URL of the file to download. The URL parameter should be in the form "" - just like an address you would type into your web browser.
>>filename [optional]
Local filename to download to.
>>options [optional]
0 = (default) Get the file from local cache if available.
1 = Forces a reload from the remote site.
2 = Ignore all SSL errors (with HTTPS connections).
4 = Use ASCII when transfering files with the FTP protocol (Can not be combined with flag 8).
8 = Use BINARY when transfering files with the FTP protocol (Can not be combined with flag 4). This is the default transfer mode if none are provided.
16 = By-pass forcing the connection online (See remarks).
>>background [optional]
0 = (default) Wait until the download is complete before continuing.
1 = return immediately and download in the background (see remarks).
#comments-end

;Exact location of the website from where we would like to download the file. We are reading the url from the command line
$URL = $CmdLineRaw

;Local address to which we would like to download the file
$filename = "C:\Users\Public\Downloads\Recognised_Student_Form.pdf"

InetGet($URL, $filename, 1, 0)

2. Call the .exe file from the Selenium script: If you are using Java then you can run the .exe file from your Selenium script using the below sample code:

Runtime.getRuntime().exec("Path of autoIt exe file");

Here is the Selenium code which will navigate to the Oxford application form. Then it will download the "Recognised Student 2013/14 - Application Form (230 kb)" file and save it at C:\Users\Public\Downloads with the name "Recognised_Student_Form.pdf".

import java.io.IOException;
import java.util.ArrayList;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

public class FileDownload {
    WebDriver driver;

    @BeforeTest
    public void setUpDriver() {
        driver = new FirefoxDriver();
    }

    @Test
    public void start() throws IOException {
        driver.get("");

        //Get the downloadable file location from the site with link name as "Recognised Student 2013/14 - Application Form (230 kb)"
        String href = driver.findElement(By.xpath(".//*[@id='aapplications_for_recognised_student_status']/div/ul/li[1]/a")).getAttribute("href");

        //Framing the command string which includes two parameters
        //Parameter 1 - Location where the Download.exe file is located
        //Parameter 2 - file location which we have stored in "href" variable
        String command = "Location where Download.exe file is saved" + " " + href;

        //If you happened to save Download.exe file at "Public downloads" folder then command statement will be like
        //String command = "\"C:/Users/Public/Documents/Download.exe\"" + " " + href;
        System.out.println("command is " + command);

        ArrayList<String> argList = new ArrayList<String>();
        argList.add(href);
        //Running the windows command from Java
        Runtime.getRuntime().exec(command);
    }
}
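One fragile spot in the snippet above is building the command as a single concatenated string, which breaks when a path contains spaces. ProcessBuilder takes each argument separately and avoids manual quoting — a sketch with placeholder paths, not values from the article:

```java
import java.util.Arrays;
import java.util.List;

public class Main {
    // Builds the launcher command as a list, one entry per argument,
    // so paths containing spaces need no manual quoting.
    static List<String> buildCommand(String exePath, String href) {
        return Arrays.asList(exePath, href);
    }

    public static void main(String[] args) {
        List<String> command = buildCommand(
                "C:/Users/Public/Documents/Download.exe",  // placeholder path
                "https://example.com/form.pdf");           // placeholder URL
        ProcessBuilder pb = new ProcessBuilder(command);
        // pb.start() would actually launch the exe; omitted in this sketch.
        System.out.println(pb.command().size());
    }
}
```

The same list can then be handed to ProcessBuilder.start() wherever Runtime.exec was used.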
http://www.mythoughts.co.in/2013/11/download-file-using-selenium-webdriver.html
isympy/sympy sagemath integration?

I'm trying to compute

integrate((1/x**a)*(1/(1-x)), x)

with SymPy, which succeeds in isympy. But in Sage I need to separately import sympy:

from sympy import *
x, a = symbols('x a')
integrate(1/(1-x)*1/(x^a), x)

works, whereas a simple

integrate(1/(1-x)*1/(x^a), x, algorithm="sympy")

fails when the import is missing. It also fails (differently) after the import :) only integrate(1/(1-x)*1/(x^a), x) works. I didn't see that mentioned. No big problem, after I spent some time going down the subprocess error rabbit hole. Is this a documentation error that should be reported?
https://ask.sagemath.org/question/52397/isympysympy-sagemath-integration/
Here's the bad code:

namespace MyNamespace
{
    using System;

    public class Customer {
        public void PlaceOrder(string orderReference, OrderedProductData[] orderedProductData)
        {
            if (orderReference = "")
                throw new OrderCreationException();
            orderReference = orderReference.Trim();

              Order newOrder = new Order(orderReference);
            newOrder.Customer = this;

            int i;
            for (OrderedProductData dataItem in orderedProductData)
            {
                Product product = sqlserverdatabase.findproduct(dataItem.ProductID);
                newOrder.AddOrderItem(product);
                ++i;
            }

            LoggingService.LogInformation(
                "A new order has been placed: " + orderReference + " at " + DateTime.Now);

            CommunicationService.SendEmail(
                    "New Order!",
                "A new order has been placed: " + orderReference + " at " + DateTime.Now,
                "ordernotifications@mycompany.com",
                    "orders@mycompany.com");
        }
    }
}

What I Thought Was Wrong With It

1. Inconsistent styling: the curly braces are placed inconsistently, there's inconsistent capitalisation in class and method names, double whitespace before the Order newOrder = line and inconsistent indentation in the arguments to the LoggingService and CommunicationService calls. Some of this is a matter of taste, but bracket placement and casing conventions should at least be consistent. I like StyleCop, and don't like to make it angry :)

2. OrderedProductData being passed in as an array is unnecessarily restrictive; if it was an IEnumerable<OrderedProductData> instead then clients could pass in arrays or Lists without having to convert them.

3. orderReference = "" doesn't compile because it's assignment instead of comparison.

4. orderReference is checked for being an empty string, but not for being null. It's then Trimmed, which could throw a nasty NullReferenceException. As pointed out more than once in the comments, the check should use string.IsNullOrWhiteSpace().

5. orderReference is checked for validity, orderedProductData isn't. If the latter was null it would throw a NullReferenceException when the method tried to enumerate it.

6.
The OrderCreationException thrown if orderReference is an empty string is really very unhelpful. I thought it should have an error message; one of the comments recommended using an ArgumentException instead. I suppose that comes down to how you do your Exception handling; maybe a layer above the Customer would catch any Exceptions and wrap them in an OrderCreationException; maybe it'd be nicer to throw an OrderCreationException with an ArgumentException as its InnerException. In any case throwing a custom exception with no message is a bit rubbish.

7. The Order constructor takes an order reference, but not a Customer; the Customer is assigned later using a setter. I can't put this any better than Alastair Smith: "The created Order's Customer property is set outside the constructor, thus making the class mutable and allowing the Customer property to change. I can't think of any reason why an Order would need to be assigned to a different Customer".

8. The int i is created and incremented, but never used for anything. What's the point of that?

9. The second reason the code doesn't compile: a foreach loop declared as a for loop.

10. dataItem in the foreach loop could be null; this isn't checked.

11. sqlserverdatabase, the LoggingService and the CommunicationService should not be used directly by anyone or anything; generally speaking, static method use on a dependency is evil. All three of these classes should be accessed via an interface and injected.

12. To be really strict, it shouldn't be dataItem.ProductID, it should be dataItem.ProductId; 'id' is an abbreviation, not an acronym.

13. Finding Products and adding them to the Order is not the job of the Customer; this violates the SRP. For the same reason the Customer should not be sending emails.

14. Again, if you're being really strict (as I tend to like to be) the same message is constructed twice for the LoggingService and CommunicationService calls. Maybe in only a small way, but this violates DRY.

15.
Also pointed out more than once in the comments, the string concatenation used in the LoggingService and CommunicationService calls should be done using string.Format; this is not only neater, but with use of a CultureInfo would ensure appropriate formatting of the DateTime in the message.

16. Sending emails from a Customer object is not only a violation of the SRP, it's also incorrect from a layering point of view. Why does a Customer know anything about sending emails? This should be handled in the Application layer, and is an ideal candidate for Domain Events. With reference to point 11, not only should the CommunicationService be injected, but it shouldn't be injected into Customer. I'd argue the same is true of the LoggingService. The email sent is supposed to be to alert the company with whom the Order has been placed, but that's not exactly clear.

So those were the things I intended to be available for spotting by an interview candidate, and as I said the vast majority were picked up in the comments - well done all! :)

What I Didn't Spot Was Wrong With It

There were also things picked up I hadn't considered. Namely:

1. "orderReference is a string. This feels weakly-typed and I would argue it should be an instance of a separate class." Good point! I suppose it depends on the source of order references - if they're chosen by users and can be any combination of any characters a string pretty much does the job, but if there are any rules around them (and at very least they're going to have a maximum length) I can see the argument for a dedicated class.

2. "AddOrderItem is dealing directly with Products, when the PlaceOrder() method has an array of OrderedProductData. It would seem more sensible to make use of the OrderedProductData objects directly, because AddOrderItem() is losing any notion of quantities of products ordered."
So it is - again - I just didn't think of that :)

Stuff Which *Might* Be Wrong With It

Interestingly, the Customer object having a PlaceOrder method was cited more than once as an error, and as less preferable to an OrderService.PlaceOrder method, or some other service method, perhaps on the Order object. A friend at work pointed out this is an example of the Spreadsheet Conundrum (the link is to Google's cache of the page, as the actual site was down at the time of writing) - in a method involving two classes, should the method go on class 1 or class 2? The answer to the conundrum is "It depends" - it depends what the two classes are. I think in this example the method fits nicely on Customer because in real life Customers place Orders, Orders don't place themselves. If the Order creation process was particularly complex I could see the argument for an OrderService, but I'm always wary of an AnemicDomainModel.

To Sum Up

It's quite impressive just how bad a method with a dozen lines of code can be, and I've found it really interesting to see other people's take on it. Thanks very much to those who left comments :)
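Pulling the numbered points together, one possible corrected shape for the method is sketched below. It is illustrative only — the injected repository interface, the Quantity property on OrderedProductData, and the two-argument AddOrderItem are my assumptions, not types from the original post:

```csharp
using System;
using System.Collections.Generic;

public class Customer
{
    // Hypothetical injected dependency (point 11): no static data access.
    private readonly IProductRepository productRepository;

    public Order PlaceOrder(string orderReference, IEnumerable<OrderedProductData> orderedProductData)
    {
        // Points 3, 4, 6: proper comparison, null/whitespace check, useful exception.
        if (string.IsNullOrWhiteSpace(orderReference))
            throw new ArgumentException("An order reference is required.", "orderReference");

        // Point 5: validate the second argument too.
        if (orderedProductData == null)
            throw new ArgumentNullException("orderedProductData");

        // Point 7: the Customer goes in via the constructor, keeping Order immutable.
        var newOrder = new Order(orderReference.Trim(), this);

        foreach (var dataItem in orderedProductData)
        {
            var product = this.productRepository.FindProduct(dataItem.ProductId);
            newOrder.AddOrderItem(product, dataItem.Quantity); // assumed overload
        }

        // Points 13 and 16: logging and the notification email are raised
        // elsewhere (e.g. via a domain event), not from the Customer.
        return newOrder;
    }
}
```

The unused counter (point 8) and the duplicated message string (point 14) simply disappear once the method stops doing work that belongs in other layers.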
http://geekswithblogs.net/mrsteve/archive/2011/08/06/c-sharp-bad-code-interview-questions.aspx
local import 'fa.less' is forbidden for security reasons.

When I try to integrate a zTree widget into Odoo CE 10.0, I get this error. With this error message, the Font Awesome font cannot be loaded and displays blank on my page. I dug into the source code and found the lines that report this error:

def sanitize(matchobj):
    ref = matchobj.group(2)
    line = '@import "%s"%s' % (ref, matchobj.group(3))
    if '.' not in ref and line not in imports and not ref.startswith(('.', '/', '~')):
        imports.append(line)
        return line
    msg = "Local import '%s' is forbidden for security reasons." % ref
    _logger.warning(msg)
    self.css_errors.append(msg)
    return ''

The error case: a dot character is found in the less file name 'fa.less', so the import is rejected and the warning is logged. I cannot understand why a dot character in a file name should be forbidden, and even if I comment this line out it still does not work.
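The rejection is easy to reproduce in isolation: the guard only accepts a reference when it contains no dot (and is not a relative, absolute or home-style path), so a name like 'fa.less' always falls through to the error branch. A simplified stand-alone version of that condition (the imports bookkeeping is reduced to a plain argument):

```python
def is_allowed(ref, imports=()):
    # Mirror of the whitelist test in the quoted sanitize() function:
    # no dot in the reference, not already imported, and not a
    # relative/absolute/home-style path.
    line = '@import "%s"' % ref
    return '.' not in ref and line not in imports and not ref.startswith(('.', '/', '~'))

print(is_allowed('fa'))        # True  -- plain name, import accepted
print(is_allowed('fa.less'))   # False -- dot present, "forbidden" warning fires
```

So it is specifically the file extension in the import statement, not the file itself, that trips the check.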
https://www.odoo.com/forum/help-1/question/local-imprt-fa-less-is-forbidden-for-security-reasons-142054
Example: Check if a Number is Positive or Negative using if else

public class PositiveNegative {

    public static void main(String[] args) {

        double number = 12.3;

        // true if number is less than 0
        if (number < 0.0)
            System.out.println(number + " is a negative number.");

        // true if number is greater than 0
        else if (number > 0.0)
            System.out.println(number + " is a positive number.");

        // if both test expressions are evaluated to false
        else
            System.out.println(number + " is 0.");
    }
}

When you run the program, the output will be:

12.3 is a positive number.

If you change the value of number to a negative number (say -12.3), the output will be:

-12.3 is a negative number.

In the above program, it is quite clear how the variable number is checked to be positive or negative, by comparing it to 0. If you're not sure, here is the breakdown:

- If a number is greater than zero, it is a positive number.
- If a number is less than zero, it is a negative number.
- If a number is equal to zero, it is zero.
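The same three-way test can also be collapsed around Math.signum, which returns -1.0, 0.0 or 1.0 for a double — a variant sketch, not part of the original example:

```java
public class Main {
    // Classifies a value using Math.signum instead of a chain of comparisons.
    static String classify(double number) {
        double sign = Math.signum(number);
        if (sign < 0) return number + " is a negative number.";
        if (sign > 0) return number + " is a positive number.";
        return number + " is 0.";
    }

    public static void main(String[] args) {
        System.out.println(classify(12.3));
        System.out.println(classify(-12.3));
        System.out.println(classify(0.0));
    }
}
```

The branching is the same; the benefit is mainly that the sign is computed once if it is needed elsewhere as well.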
https://www.programiz.com/java-programming/examples/positive-negative
In my project we use enums to namespace static variables that wrap localized strings. E.g.:

enum DisplayStrings {
    enum General {
        static var ok: String { "DisplayStrings.General.ok".localized }
        static var cancel: String { "DisplayStrings.General.cancel".localized }
    }
    enum Alerts {
        enum Error {
            static var title: String { ... }
            static var body: String { ... }
        }
        enum Success {
            static var title: String { ... }
            static var body: String { ... }
        }
    }
}

However, now we need to inject this, and so have it represented by a protocol so that there can be multiple implementations. I can't work out how to create a protocol that allows this; even having multiple protocols and using associatedtypes doesn't work (because then you can only use it as a generic constraint).

Ideally something like this would be possible:

protocol DisplayStringsType {
    static General {
        static var ok: String { get }
    }
    // etc
}

(Ideally it seems like var wouldn't be needed in protocols, because the protocol doesn't care how it's implemented, just as in Swift 5.3 you can use enum cases to conform to a protocol.)
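One workaround that compiles today is to model each namespace as its own small protocol and expose it through a property instead of a nested type — a sketch of the idea, not a drop-in replacement for the enum shape above (the .localized extension is assumed from the code in the question):

```swift
protocol GeneralStrings {
    var ok: String { get }
    var cancel: String { get }
}

protocol DisplayStringsProviding {
    var general: GeneralStrings { get }
}

struct LiveDisplayStrings: DisplayStringsProviding {
    struct General: GeneralStrings {
        var ok: String { "DisplayStrings.General.ok".localized }
        var cancel: String { "DisplayStrings.General.cancel".localized }
    }
    var general: GeneralStrings { General() }
}
```

Call sites then read strings.general.ok instead of DisplayStrings.General.ok, and tests can inject a stub DisplayStringsProviding. The cost is that the namespacing moves from types to instance properties.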
https://forums.swift.org/t/abstracting-nested-types-with-a-protocol/37639
We'll use Plotly to create our plots. You'll notice that Plotly is already imported at the top of Form1:

import plotly.graph_objs as go

Let's write a Python function to plot our first bar chart. Add a build_revenue_graph function to your client-side code:

def build_revenue_graph(self):
    self.plot_1.data = go.Bar(y=[100,400,200,300,500])

We want to build this graph when our app first opens, so add this line to the __init__ method of your form:

# Any code you write here will run when the form opens.
self.build_revenue_graph()

Your 'Form1' should now look like this:

from anvil import *
import plotly.graph_objs as go

class Form1(Form1Template):
    def __init__(self, **properties):
        # Set Form properties and Data Bindings.
        self.init_components(**properties)

        # Any code you write here will run when the form opens.
        self.build_revenue_graph()

    def build_revenue_graph(self):
        self.plot_1.data = go.Bar(y=[100,400,200,300,500])
https://anvil.works/learn/tutorials/dashboard/chapter-2/20-plot-some-data.html
On Tue, 16 Oct 2001, Harti Brandt wrote:

> since version 1.41 of newfs.c newfs fails to build 2MByte md-based
> file systems. We use these file systems in our diskless pc's.

Hrmphh :-). My patch for changing the default number of cylinders per group to the maximum had an (apparently broken) change related to this attached. This part should not have been committed. Backing it out fixes the problem with 2MB disks.

Index: newfs.c
===================================================================
RCS file: /home/ncvs/src/sbin/newfs/newfs.c,v
retrieving revision 1.42
diff -u -2 -r1.42 newfs.c
--- newfs.c	4 Oct 2001 12:24:18 -0000	1.42
+++ newfs.c	17 Oct 2001 04:30:46 -0000
@@ -167,5 +159,4 @@
 int ntracks = NTRACKS;	/* # tracks/cylinder */
 int nsectors = NSECTORS;	/* # sectors/track */
-int ncyls;	/* # complete cylinders */
 int nphyssectors;	/* # sectors/track including spares */
 int secpercyl;	/* sectors per cylinder */
@@ -181,5 +172,5 @@
 int fsize = 0;	/* fragment size */
 int bsize = 0;	/* block size */
-int cpg = 0;	/* cylinders/cylinder group */
+int cpg = DESCPG;	/* cylinders/cylinder group */
 int cpgflg;	/* cylinders/cylinder group flag was given */
 int minfree = MINFREE;	/* free space threshold */
@@ -546,15 +537,4 @@
 	}
 #endif
-	ncyls = fssize / secpercyl;
-	if (ncyls == 0)
-		ncyls = 1;	/* XXX */
-	if (cpg == 0)
-		cpg = DESCPG < ncyls ? DESCPG : ncyls;
-	else if (cpg > ncyls) {
-		cpg = ncyls;
-		printf(
-	"Number of cylinders restricts cylinders per group to %d.\n",
-		    cpg);
-	}
 	mkfs(pp, special, fsi, fso);
 #ifdef tahoe

> Strange enough newfs still handles 1.44MByte floppies.

The (reverse of) the above patch was tested mainly with this size, but it doesn't work for me now. The problem addressed by the patch is that the default geometry of 1 track with 4096 sectors is extremely bogus for devices that don't even have 4096 sectors altogether.
Users should override the default geometry for these devices, but most users don't (the release Makefiles set bad examples...), and newfs emits alarming warnings. The changes adjust the number of cylinders to 1 and the number of cylinder groups to 1 if the device size is smaller than 1 cylinder, but other parts of newfs insist on 2 cylinders per group although this is physically impossible.

Bruce
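The sizing logic that the backed-out hunk had added to newfs can be sketched in portable C — DESCPG here is an assumed illustrative constant, not the value from the FreeBSD sources:

```c
#include <assert.h>

#define DESCPG 16  /* assumed default cylinders/group, for illustration only */

/* Clamp cylinders-per-group so it never exceeds the number of whole
 * cylinders actually present on the device (treating a device smaller
 * than one cylinder as a single cylinder). */
int clamp_cpg(int fssize, int secpercyl)
{
    int ncyls = fssize / secpercyl;
    if (ncyls == 0)
        ncyls = 1;
    return DESCPG < ncyls ? DESCPG : ncyls;
}
```

For a 2MB device with the bogus default geometry of 4096 sectors per cylinder, this yields a single cylinder per group — which then collides with the parts of newfs that assume at least 2.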
https://www.mail-archive.com/freebsd-current@freebsd.org/msg32574.html
I just started my first C++ class and we received our first assignment. I don't have a compiler at home and live far away from school. I have 2 questions:

These are the program requirements:

Program 1: Prompt the user and let them enter three integers. Store them in three variables. Print the numbers in sorted order, from smallest to largest.

Sample run: (user input underlined)
Input integer 1 : 34
Input integer 2 : 200
Input integer 3 : -14
Sorted : -14 <= 34 <= 200

This is the code:

#include <iostream>

int main ( )
{
    int one, two, three, first, sec, third;
    cout << "Enter 3 integers./n";
    cout << "Input Integer 1 : ";
    cin >> one;
    cout << "/nInput Integer 2 : ";
    cin >> two;
    cout << "/nInput Integer 3 : ";
    cin >> three;

    if ( one >= two && one >= three) one == first;
    if ( two >= one && two >= three) two == first;
    if ( three >= one && three >= two) three = first;

    if (one <= two && one >= three) one == sec;
    if (one >= two && one <= three) one == sec;
    if (two <= one && two >= three) two == sec;
    if (two >= one && two <= three) two == sec;
    if (three <= two && three >= one) three == sec;
    if (three >= two && three <= one) three == sec;

    if (one <= two && one <= three) one = third;
    if (two <= one && two <= three) two = third;
    if (three <= one && three <= two) three = third;

    cout << first << ">=" << sec << ">=" << third << ".";
    cout << "Have a Nice Day!";
}

Will it work? Any changes? (I'm sure there is another way without all those ifs.)

Program 2 requirements: Prompt the user to type an integer in the range 0 - 1000. Allow the user to input an integer (you may assume correct type of input). Whenever the integer is not in the specified range, print an error message and make the user re-enter. Once a valid input is received, compute and print out the sum of the digits of the number.

Sample run 1: (user input underlined)
Please input an integer between 0 and 1000: 1001
* Number not in 0-1000 range. Please re-enter: -1
* Number not in 0-1000 range.
Please re-enter: 456
Sum of the digits is 15

Code:

#include <iostream>

int main( )
{
    int;
    cout << "Enter an integer between and including 0-1000./n";
    cin >> num;

    if (num < 0 || num > 1000)
    {
        cout << "Number not between 0-1000./n";
        cout << "Enter an integer between and including 0-1000./n";
        cin >> num;
    }

    num1 = num / 100
    num2 = (num % 100) / 100
    num3 = (num % 100) % 10

    cout << The sum of the digits is : ;
    cout << num1 + num2 + num3
}

Thanks for any help you can provide, email me at rymade@yahoo.com
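(For reference on the "another way without all those ifs" part: the chain of comparisons in program 1 can be replaced by sorting a small array with the standard library — a sketch, assuming the assignment allows std::sort:)

```cpp
#include <algorithm>
#include <cassert>

// Sorts three integers in place so that a <= b <= c afterwards.
void sort_three(int& a, int& b, int& c)
{
    int nums[3] = {a, b, c};
    std::sort(nums, nums + 3);  // ascending order
    a = nums[0];
    b = nums[1];
    c = nums[2];
}
```

After reading the three inputs, a single call like sort_three(one, two, three) leaves them in smallest-to-largest order, ready to print.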
https://www.daniweb.com/programming/software-development/threads/1071/im-new-to-this-site-and-c
This question came in from a customer (paraphrased):

If I run my program from the command prompt, it works great, but if I run it from my launcher via ShellExecuteEx, it never appears.

See how good your psychic powers are at solving this problem before I give you the second question that gives away the answer. Any luck? Here's a second question from a different source (but which coincidentally came in the same day).

I'm trying to use ShellExecute to open a document. The function succeeds (returns a value greater than 32), but I don't get anything on the screen.

if (ShellExecute(Handle, NULL, FileName, NULL, NULL, NULL) <= (HINSTANCE)32) ...

The problem the second person is having lies in the last parameter to the ShellExecute function. It's nShowCmd, which is supposed to be an SW_* value, but which this person is passing as NULL. It so happens that the value zero corresponds to SW_HIDE, which explains why the program doesn't appear: You told it to run hidden!

Now go back to the first problem. Do you see what the person most likely did wrong? The code probably went like this:

SHELLEXECUTEINFO sei = { sizeof(sei) };
sei.hwnd = hwnd;
sei.lpVerb = TEXT("open");
sei.lpFile = pszFile;
ShellExecuteEx(&sei);

Since the sei.nShow member was not explicitly set, the value was implicitly set to zero by the incomplete initializer. And as we noted above, zero means SW_HIDE. It turns out my psychic debugging was correct. That was indeed the source of the first person's problem. Now you can use your psychic powers, too.

My favourite mistake I have tried to avoid! This is the reason to avoid zeroes in enum's.

I must assume your customers don't fit the category of professional programmers. Professional: "Hmm, it doesn't work. Let's look at the code and check the functions we are calling." Amateur: "Hmm, it doesn't work. Must make a support call."

Doug, you couldn't be further from the truth. Professional: "Hmm, it doesn't work. This operating system is so buggy.
Let me try this, I bet that will work." Then again, I wouldn’t exactly call those people professionals. But I run into programmers who never assume it is their fault. I often see this on message boards – folks who use NULL or zero for parameters they don’t (yet) understand, and expect that the API will "do the right thing" and use good defaults. The trouble is, that works in some cases (like the lpDirectory param to ShellExec) but not others (like nShowCmd). A quick RTFMsdn would solve the problem, and hopefully the person learns this after getting their answer of "you need to set nShowCmd correctly, see the docs for the legal values". If SW_SHOW would be 0 and SW_HIDE would be 5 this problem wouldn’t exist. NULL/0 should always be a safe default. Both SW_FORCEMINIMIZE and SW_MAX are = 11, so I think the hope of not be treated as a moronic developer is already lost. SW_MAX == largest value supported by ShowWindow, for easy range checking. So yes, it is the same as SW_FORCEMINIMIZE, the largest value. If you just read the MSDN you would know what value to put there instead of NULL and the problem wouldn’t even exist so you won’t have to solve it. Because of (possible) problems like the one with SHELLEXECUTEINFO I always wondered should I subclass any old structure and add a couple of constructors to it, only to avoid thinking about what to put in every member it might have every time I use it. struct ShellExecuteInfo : public SHELLEXECUTE { ShellExecuteInfo(LPCTSTR File, LPCTSTR Verb bool show); // … }; Yeah, it’s not great, but may be a small improvement ( and a huge pollution of the namespace :-( ). Because I’m not sure about it, I seldom do it (i.e. when I get really fed up with RTFing MSDN). :wthf: SW_* constants are used in many places, not everywhere the same value is the "sensible default". It’s your / our job to feed the correct parameters – all of them, not just those that we care about at the moment. 
Running something in the background is a quite sensible default (you start an application in the background for printing a document or something). It’s easy to make a great handwaving argument about how well-designed it is in that he who asks for nothing gets nothing. I won’t do that. I think you should know what to put and not to put in enumerations. Enumerations were not in K&R and did not become part of the C language until 1989, six years after the ShowWindow function was written. Let me know when you’ve perfected that time machine. -Raymond] Actually there is nothing in the MSDN documentation of ShellExecute which says NULL will hide the application, because the possible values of nShowCmd are not shown, only names. It is just another example of an API tripping up a developer by using basic scalar types and assigning meaning to them beyond what can be gathered just by reading a description of the function. Compilers should check such things, based on the types of the arguments, not people. If you use image execution options to debug the app startup is the debugger clever enough a) to show even though you specified SW_HIDE b) to pass SW_HIDE on to the app so that when you get to ::ShowWindow(hWnd, nCmdShow); you get to realise that nCmdShow is SW_HIDE? "Actually there is nothing in the MSDN documentation of ShellExecute which says NULL will hide the application, because the possible values of nShowCmd are not shown, only names." No, the possible values are shown. SW_HIDE and NULL might be equal but they are not equivalent. SW_HIDE is always a legal value while under most circumstances NULL is not. Just because the abstraction isn’t enforced doesn’t mean it can be abused. It is too bad that the MSDN doesn’t explicitly state the values of the "enumerations" (or maybe I just couldn’t find them). 
I was recently poking around the WinMM.DLL functions for MIDI in/out, and in order to get the values and types to put into my C# app (via P/Invoke) I had to open up VC++, type in the value I was looking for, and hit F12 to have VC++ search for the definition. My folks sent me my old MIDI controller from high school (for me that was in the mid 90s), and I wanted to hook it up to my tablet via USB and have Windows act as its synthesizer (and maybe do a little more), so I needed a way to read messages from MIDI in on my USB-to-MIDI adapter and send the messages to MIDI out on the Windows side. I chose C# instead of my normal C/C++ for fun – I wanted to work with P/Invoke and delegates. Needing to use my C++ tools to find out how I needed to marshal my data/types/etc. for C# was unfortunate.

Some enums in windows.h (winbase.h) have a "max" defined which is 1 more than the actual largest value. Conclusion: you can't trust it.

These people may have some background in SQL. In SQL, NULL is a valid value for any datatype, distinct from any other value including zero (for some reason, Oracle treats the empty string as NULL, but that's because Oracle's weird). It really does make sense to your average programmer these days (i.e., one who didn't grow up on K&R and RAM measured in kilobytes). In truth, I think the problem stems from the fact that ShellExecute is such an old routine that hasn't been superseded (yet).

Roie, I would make one small editorial change: if you want to use Win32, you should know what NULL is.

ShowWindow was doomed once it went beyond the simple showing and hiding of windows (including one constant which goes as far as to change your z-order, activate another window according to an undocumented algorithm, and trim your working set). Instead, ShowWindow should have been left to do what it does well, and all the iconising and maximising should have been given to a separate function.

ShowWindow()'s name is to blame for this.
To call a function named ShowWindow when you really want to hide a window is just plain wrong.

> I would have expected that if SW_HIDE were 5
> people would say "What a phenomenally stupid
> idea.

Should've used negative numbers (teasing here). Theoretically you could use all positive numbers and return an error code if someone passed a zero. Obviously it's too late now, but that kind of design is reasonable.

> Enumerations were not in K&R

… for some language versions of that book.

> and did not become part of the C language
> until 1989

Nothing became part of the C Standard until 1989, but a ton of stuff was added to the language by AT&T and others before 1989. I'm pretty sure AT&T added enums while Microsoft was selling Xenix.
https://blogs.msdn.microsoft.com/oldnewthing/20061023-04/?p=29293/
11 March 2011 17:55 [Source: ICIS news]

HOUSTON (ICIS)--The Hydrographic and Oceanographic Service of the Chilean navy has issued a tsunami alert for the coastal areas of the country, the local daily El Mercurio said on Friday. The Chilean government called for calm while scientists study what effects the devastating earthquake would have.

The tsunami alert was for both the coastal and interior regions, according to the alert. The navy's oceanographic service estimated arrival times for different locations along the coast and said updates would be issued throughout the day. The tsunami wave is expected to hit Easter Island at 17:47 hours.

However, local sources did not expect major consequences from the tsunami. There were no reports of production interruptions in the refineries and chemical plants. Enap, the national oil company, has two refineries.

A Petroquim source said that operations were normal, despite the fact that the plant site was near the coast and only about 5 feet above sea level. A Petroquim official said there was a wait-and-see attitude and many uncertainties about the magnitude of the waves. Official information about the effects of the tsunami in other areas could change things rapidly, the Petroquim official added.
http://www.icis.com/Articles/2011/03/11/9443232/chilean-navy-warns-of-tsunami-danger-on-pacific-coast.html
Hanno Schlichting wrote:
> On Wed, Aug 4, 2010 at 10:55 AM, Chris Withers <ch...@simplistix.co.uk> wrote:
>> What does a "regular ZCML slug" look like?
>
> My first Google hit is this :)

Advertising, yeah yeah... Okay, but what about:

    <include package="Products.Whatever" />

...causes Whatever/__init__.py's initialize method to get called?

>> Is this setuptools namespace package magic making things work, or is there
>> explicit code in Zope that does this?
>
> It's not really setuptools specific. It's just the same old code that
> does "import Products" and then iterates over everything it finds.

Yeah, but it's setuptools namespace package magic that makes sure __path__, which the old Zope 2 code iterates over, is set up correctly, right?

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting

_______________________________________________
Zope-Dev maillist - Zope-Dev@zope.org
** No cross posts or HTML encoding! **
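For readers left hanging by the elided link: by convention a Zope 2 "slug" is a tiny one-directive ZCML file dropped into the instance's etc/package-includes directory, named after the package it includes (the filename and package name below are illustrative, not from the thread):

```xml
<!-- etc/package-includes/Products.Whatever-configure.zcml -->
<include package="Products.Whatever" file="configure.zcml" />
```

Zope scans package-includes at startup and processes every such slug, which is how a third-party package's configure.zcml gets pulled into the site configuration.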
https://www.mail-archive.com/zope-dev@zope.org/msg34317.html
sc_BcastStateRecv man page

sc::BcastStateRecv — BcastStateRecv does the receive part of a broadcast of an object to all nodes.

Synopsis

    #include <mstate.h>

    Inherits sc::MsgStateRecv.

Public Member Functions

    BcastStateRecv (const Ref< MessageGrp > &, int source=0)
        Create the BcastStateRecv.
    void source (int s)
        Set the source node.

Protected Member Functions

    void next_buffer ()
        Specializations must implement next_buffer().

Protected Attributes

    int source_

Detailed Description

BcastStateRecv does the receive part of a broadcast of an object to all nodes. Only one node uses a BcastStateSend and the rest must use a BcastStateRecv.

Author

Generated automatically by Doxygen for MPQC from the source code.

Info: Sat Feb 11 2017, Version 2.3.1, MPQC
https://www.mankier.com/3/sc_BcastStateRecv
In this tutorial I will introduce a class by Senocular.com that allows easy movement of game characters with minimal code.

Final Result Preview

In the SWF you'll see a spaceship; use your Left, Right, Up, and Down arrow keys to move it.

Step 1: Explanation of KeyObject.as

When ActionScript 3.0 came out we lost the functionality of AS2's Key.isDown() method. Senocular has coded a great little class that will let us emulate this functionality within ActionScript 3, and that is what we will look at in this tutorial.

Step 2: Setting Up the Project

Go to File > New and create a new ActionScript 3.0 document with the following properties:

- Size: 550 * 400
- Background Color: White
- FPS: 24

Save this file as "KeyObject.fla".

Step 3: Downloading KeyObject.as

Before we can code our application we need to get the "KeyObject.as" file, so head over to Senocular.com. Under the Flash menu, click on ActionScript. Once there you will want to drill down to "KeyObject.as" and download it. Get there by going to Actionscript 3.0 > com > senocular > utils. You can right-click on the download link and save it as "KeyObject.as". Once you have done this you need to remove com.senocular.utils right after the package declaration in the file, since we are not using the com.senocular class path.

Change this:

    package com.senocular.utils {
        import flash.display.Stage;
        import flash.events.KeyboardEvent;
        //Rest of Class

To this:

    package {
        import flash.display.Stage;
        import flash.events.KeyboardEvent;
        //Rest of Class

Step 4: Importing the Player Graphic

In the download files there is a spaceship image called player.png. In Flash, import this to the stage by going to File > Import > Import To Stage. Right-click on it and choose "Convert To Symbol", give it the symbol name "player", and make sure the registration point is set to the top left. Now give it the instance name "player" as well.

Step 5: Setting Up the Main Class

Go to File > New and choose ActionScript File.
Save this as Main.as and set it as your Document Class within "KeyObject.fla". Next add the following code to "Main.as":

    package {
        import flash.display.Sprite;
        import flash.events.Event;
        import KeyObject;

        public class Main extends Sprite {

            private var key:KeyObject;

            public function Main() {
                addEventListener(Event.ADDED_TO_STAGE, setupKeyObject);
            }

            function setupKeyObject(e:Event):void {
                key = new KeyObject(stage);
                stage.addEventListener(Event.ENTER_FRAME, movePlayer);
            }

            function movePlayer(e:Event):void {
                if (key.isDown(key.LEFT)) {
                    player.x -= 5;
                }
                if (key.isDown(key.RIGHT)) {
                    player.x += 5;
                }
                if (key.isDown(key.DOWN)) {
                    player.y += 5;
                }
                if (key.isDown(key.UP)) {
                    player.y -= 5;
                }
                // Keep the player inside the stage bounds.
                if (player.y < 0) {
                    player.y = 0;
                }
                if (player.y > (stage.stageHeight - player.height)) {
                    player.y = stage.stageHeight - player.height;
                }
                if (player.x < 0) {
                    player.x = 0;
                }
                if (player.x > (stage.stageWidth - player.width)) {
                    player.x = stage.stageWidth - player.width;
                }
            }
        }
    }

Here we set up our package and import the classes we will be using. Next we declare the key variable of type KeyObject, and within our Main constructor we add an ADDED_TO_STAGE event listener. This gets called when the movie is fully loaded and the stage is ready. Inside the setupKeyObject function, we set the key variable to be a new instance of the KeyObject class and add an ENTER_FRAME event listener to the stage.

Within the movePlayer function we check which key is being pressed by using key.isDown() and move our player accordingly. Finally, we check whether the ship has moved outside the bounds of the stage, and if it has we put it back just inside the stage.

Conclusion

Using Senocular's KeyObject class makes it dead simple to move your game characters! I hope this tutorial has helped; thanks for reading.
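The trick KeyObject uses – recording which keys are down from keydown/keyup events so the game loop can poll them – is language-agnostic. Here is a sketch of the same pattern in plain JavaScript so it can run outside Flash; the event plumbing is simulated with a stand-in stage object (an assumption for the demo, not part of the original class):

```javascript
// Minimal re-creation of the Key.isDown() pattern KeyObject provides:
// listen for key events once, then let the game loop poll key state.
class KeyPoller {
  constructor(target) {
    this.down = new Set();
    target.addEventListener("keydown", (e) => this.down.add(e.keyCode));
    target.addEventListener("keyup", (e) => this.down.delete(e.keyCode));
  }
  isDown(keyCode) {
    return this.down.has(keyCode);
  }
}

// Tiny stand-in event target so the sketch runs under Node.
class FakeStage {
  constructor() { this.handlers = {}; }
  addEventListener(type, fn) {
    (this.handlers[type] = this.handlers[type] || []).push(fn);
  }
  dispatch(type, event) {
    (this.handlers[type] || []).forEach((fn) => fn(event));
  }
}

const stage = new FakeStage();
const key = new KeyPoller(stage);
const LEFT = 37;

stage.dispatch("keydown", { keyCode: LEFT });
console.log(key.isDown(LEFT)); // true
stage.dispatch("keyup", { keyCode: LEFT });
console.log(key.isDown(LEFT)); // false
```

In a browser the `target` would simply be `window`; the polling style is what makes per-frame movement code as simple as the AS3 version above.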
https://code.tutsplus.com/tutorials/quick-tip-easy-as3-character-movement-with-keyobjectas--active-9717
Help with a campaign workflow formula

How can I change the domain? Is there any way to revert the domain back to the default Salesforce one, or to one of our choosing? Regards, Jason

How to use Custom Labels in a Trigger.

Checkbox checked on page load in a list

Guys, I am a little confused. I want to set a checkbox on page load in a list of records. I am rendering a list (of Engagement records) on a custom page. The first column of the list is a checkbox. This checkbox should be checked on page load if the Engagement record has at least one related Potential record. I tried using inputCheckbox with the selected attribute, but it did not work. I then tried the HTML input checkbox field and the 'checked' attribute:

    <input id="{!engagement.Id}" value="{!engagement.Id}" type="checkbox" class="checkbox" name="ids"
        {!IF( ISBLANK(engagement.Potential__r), '', 'checked')} />

but Visualforce starts giving an error. I am not able to set the checkbox on page load. Can anyone provide any clue as to how this can be done?

Are formula fields better for performance?

As far as performance is concerned, it is better to reduce the number of joins in a query by denormalizing. How can we do a denormalization? Is using a formula of an Account field on Contact a kind of denormalization, so that we do not use the Account object while writing a query on the Contact object? Are formula fields stored as data internally and updated as and when the source data gets updated? If not, are roll-up summary fields also calculated at runtime rather than being stored internally?

Re-login after session timeout on Customer Portal does not open the Default Landing page

We have implemented Customer Portal Authenticated Sites User for the current client. We created a custom page that we wanted to be the homepage once the user logs in to the portal.
The name of this custom page is "Home Page". We assigned this page as the "Default Landing Tab" using the "Customize Portal Tab" button. There is also a custom page for login, called "Login". This works fine: when we log in to the system using the custom "Login" page, it does show our landing page – the custom "Home Page". When the user logs out, he is taken back to the custom "Login" page.

However, when the user's session times out due to inactivity, he is shown the Salesforce default login page rather than the custom login page created by us. I see that the URL to which it is redirected is:

Now when the user enters his username and password, he is redirected to the Salesforce default home page rather than my custom "Home Page". I see that this is due to the presence of the "startURL" parameter in the URL. Is there a way I can get this parameter removed from the URL when there is a session timeout?

Change field value en masse using DML

Here is the change I am looking to make: if fieldA (SortType_C) from CustomObject_C is BLANK, then change the value of SortType_C to "Other". I am a new administrator and learning fast. I managed to access the Developer Console, but I am new to DML. Can anyone help me with the code? Thanks, Dave The Rave

How to bypass opportunity validation during lead conversion

Is there a way to bypass opportunity validation during lead conversion? Please suggest. Thanks, Sudhir

Share a profile with multiple users with different permissions

My question is this: I have an sObject named "object1" and one profile named "profile1" which has all permissions (create, edit, delete, view, viewAll, modifyAll) for that sObject. I have assigned that profile to user1 and user2. Now I want to assign the same profile (profile1) to user3, but the condition is that user3 should have only the create permission. How would I achieve this? I know that with a permission set we can add new permissions, but we cannot take permissions away.
Please reply.

Once an Opportunity is in the Contract Signed stage, it should be locked against manual editing

WHEN I am logged in to Salesforce AND my opportunity is in the Contract Signed stage, THEN my opportunity cannot be manually edited anymore. PS – it will be locked for manual editing, but system integrations will continue updating the opportunity. How can we achieve this requirement?

Hi all, I have an urgent requirement; what the problem is: Thanks.

Unable to dynamically construct a data table in Visualforce using jQuery/JavaScript

Trying to construct the data table dynamically using Visualforce remoting with the jQuery DataTable. I am getting the table with empty values. The following is the code and output. Please help me in getting this done.

    <apex:page>
        <head>
            <link rel="stylesheet" href=""/>
            <apex:includeScript/>
            <script src=""></script>
        </head>
        <script type="text/javascript">
            $ = jQuery.noConflict();
            function getRemoteAccount() {
                Visualforce.remoting.Manager.invokeAction(
                    '{!$RemoteAction.AccountRemoter.getAccounts}',
                    function(result, event) {
                        console.log(result);
                        $('#example').DataTable({
                            data: result,
                            columns: [
                                { title: "Id" },
                                { title: "Name" }
                            ]
                        });
                    },
                    {escape: true}
                );
            }
        </script>
        <button onclick="getRemoteAccount()">Get Account</button>
        <table id="example" class="display" cellspacing="0" width="100%">
        </table>
    </apex:page>

    global with sharing class AccountRemoter {
        public String accountName { get; set; }
        public static Account account { get; set; }

        public AccountRemoter() { } // empty constructor

        @RemoteAction
        global static List<Account> getAccounts() {
            List<Account> accnts = [SELECT Id, Name FROM Account LIMIT 5];
            return accnts;
        }
    }

Any help is really appreciated. Thanks in advance. Regards, Naveen

How to display 70,000 records on a Visualforce page

Below scenario: can anyone solve it urgently and reply to me?
Convert from Picklist to Lookup field

I want to know what the impacts are of converting from a Picklist field to a Lookup field in Salesforce. Please let me know your suggestions. Thanks, Vijay

Changing a field's type

I want to change a field from a Picklist field to a Lookup field. Is it possible?

How to retain values on page refresh?

I have a VF page where I am dynamically rendering Opportunity Line Items. There is a limit of 250 line items visible at a time; for the rest there is a "next" link. To select all the records at once I created a checkbox; on click it selects all the records. But this works only for 250 records. If there are more than 250 records, I have to press the "next" link, the page refreshes, and the checkboxes for the new records don't get ticked automatically. I have to tick the select-all checkbox again. The same problem occurs when I press the "previous" link. How is it possible to select all records at once and retain the selection even when the page refreshes? Any idea? Thanks.

How to use Custom Labels in a Trigger

Save multiple records at the same time

1. Duplicate records are being created.
2. All records are saved with the same value.

Apex code:

    <apex:page>
        <apex:form>
            <apex:pageBlock>
                <apex:pageBlockTable>
                    <apex:column>
                        <apex:outputText></apex:outputText>
                    </apex:column>
                    <apex:column>
                        <apex:inputText/>
                    </apex:column>
                    <apex:column>
                        <apex:inputText/>
                    </apex:column>
                    <apex:column>
                        <apex:inputText/>
                    </apex:column>
                </apex:pageBlockTable>
                <apex:pageBlockButtons>
                    <apex:commandButton/>
                </apex:pageBlockButtons>
            </apex:pageBlock>
        </apex:form>
    </apex:page>

Controller:

    public class sample1 {
        List<String> s = new List<String>();
        public String x {get;set;}
        public Integer a {get;set;}
        public Integer b {get;set;}
        public Integer c {get;set;}

        public List<String> getTask() {
            List<SelectOption> options = new List<SelectOption>();
            Schema.DescribeFieldResult fieldResult = Account.task__c.getDescribe();
            List<Schema.PicklistEntry> ple = fieldResult.getPicklistValues();
            for (Schema.PicklistEntry f : ple) {
                //options.add(new SelectOption(f.getLabel(), f.getValue()));
                //System.debug(f.getLabel());
                s.add(f.getValue());
            }
            return s;
        }

        public void save() {
            for (String x : s) {
                task__c t = new task__c(Name=x, X15__c=a, X20__c=b, X30__c=c);
                try {
                    insert t;
                } catch (Exception e) {
                    ApexPages.addMessages(e);
                }
            }
        }
    }
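The "change field value en masse" question above is a one-off data fix, which is a good fit for anonymous Apex in the Developer Console. A sketch (the object and field names are taken from the question and are assumptions – real custom objects and fields normally end in __c):

```apex
// Hypothetical API names based on the question: CustomObject__c / SortType__c.
List<CustomObject__c> toFix = [
    SELECT Id, SortType__c
    FROM CustomObject__c
    WHERE SortType__c = null
    LIMIT 10000
];
for (CustomObject__c rec : toFix) {
    rec.SortType__c = 'Other';   // blank -> 'Other'
}
update toFix;                    // one bulk DML call, not one per record
```

Updating the whole list in a single `update` statement keeps the script within DML governor limits; for more than 10,000 rows the same logic would go in a batch job.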
https://developer.salesforce.com/forums/ForumsProfile?userId=005F0000003FiSkIAK&communityId=09aF00000004HMGIA2
I was looking at the Albuquerque OpenData – it's all in KML – and decided to make a mobile map using the public art file. The first thing I did was write a Python script to get the longitude, latitude, and URL of the image out as text. I renamed the KML to XML and imported it into Excel. From there I deleted several columns and stripped some information out by hand. Then I ran this script to get my JavaScript:

    from xlrd import open_workbook

    wb = open_workbook('PublicArt.xls')
    sheet = wb.sheet_by_index(0)
    f = open('temp.txt', 'w+')
    for rownum in range(sheet.nrows):
        # Build one JavaScript array literal per row: [coord, coord, "<img src='...'>"],
        a = "[" + str(sheet.cell_value(rownum, 2)) + " , " + str(sheet.cell_value(rownum, 1)) + " , " \
            + '"<img src=' + "'" + str(sheet.cell_value(rownum, 3)) + "'" + '>"' + "],\n"
        f.write(a)
    f.close()

I wrote my Leaflet map and put the output in as my markers. Then I made a new PNG using the Albuquerque vase logo. Lastly, I grabbed the LocateControl. View the live example. If you view this page on an iPhone, it will ask for your current location. You can also click the location circle in the top left of the map – under the zoom controls. It doesn't appear to work on Android – but I only tested on one device. Just grab the code from the example by going to 'page source' in your browser – Ctrl-U in Firefox. If you do not have location enabled, you can go to this map.
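The Excel round-trip can be skipped entirely: Python's standard xml.etree can pull the coordinates straight out of the KML. A minimal sketch – the tag names follow the standard KML namespace, and reading the image URL from the description element is an assumption about how a file like the public art KML stores it:

```python
import xml.etree.ElementTree as ET

KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

def kml_to_marker_rows(kml_text):
    """Yield one JavaScript array literal per Placemark for a Leaflet marker list."""
    root = ET.fromstring(kml_text)
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        coords = pm.find(".//kml:coordinates", KML_NS)   # "lon,lat,alt"
        desc = pm.find("kml:description", KML_NS)        # assumed to hold the image URL
        lon, lat = coords.text.strip().split(",")[:2]
        img = (desc.text or "").strip() if desc is not None else ""
        yield '[%s , %s , "<img src=\'%s\'>"],' % (lat, lon, img)
```

Note that KML stores coordinates as lon,lat while Leaflet markers take [lat, lng], hence the swap in the output line.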
https://paulcrickard.wordpress.com/2012/12/05/albuquerque-public-art-a-mobile-leaflet-js-example/
I have an assignment where we have to accomplish the same work as in a previous assignment, except using two-dimensional arrays. This was my previous assignment:

A Computer Technology instructor has a small class of 10 students. The instructor evaluates the performance of students in the class by administering 2 midterm tests and a final exam. Write a program that prompts the instructor to enter the 10 grades of Midterm 1 and store these numbers in an array. Next prompt for the 10 grades of Midterm 2 and store these numbers in a different array. Next prompt for the 10 grades of the Final Exam and store these in a different array. Next add Midterm 1 to Midterm 2 to the Final and store the total of grades in a different array. Next, scan the array that has the totals and identify the minimum grade and maximum grade. Inform the instructor of the minimum grade and maximum grade. Note: do not assume that the grades are in the range 0 to 100. Your program should function properly whether the grades are in the range 0 to 100 or any other range.

And these are the instructions for my next assignment:

Use one two-dimensional array to accomplish the same work you did in the last assignment. Think of the students as being the columns of the two-dimensional array. Think of the scores of Midterm 1 as occupying the first row, the scores of Midterm 2 occupying the next row, and the scores of the Final Exam occupying the next row, with the total of the 3 exams occupying the next row. Inform the instructor of the minimum total grade and the maximum total grade.
And here is my code. I've got errors all over the place and I'm not sure where to begin, or if I'm even in the ballpark:

Code java:

    import java.util.Scanner;
    import java.io.*;

    public class Assign10_Roberts {
        public static void main(String[] args) {
            //input Scanner
            Scanner input = new Scanner(System.in);
            int midTerm1 = 0;
            int midTerm2 = 0;
            int finalExam = 0;
            int[][] grades = new int[10][10];
            System.out.println("Enter the 10 Midterm 1 grades: ");
            for (int i = 0; i < grades.length; i++) {
                for (int j = 0; j < grades.length; j++) {
                    System.out.println("MidTerm1 Grades " + (i + 1) + ": ");
                    grades[i][j] = input.nextInt();
                }
                System.out.print("Enter the 10 Midterm 2 grades: ");
                for (int i = 0; i < grades.length; i++) {
                    for (int j = 0; j < grades.length; j++)
                        System.out.print("Midterm2 Grades " + (i + 1) + ": ");
                    grades[i][j] = input.nextInt();
                }
                System.out.print("Enter the 10 Final Exam grades: ");
                for (int i = 0; i < grades.length; i++) {
                    for (int j = 0; j < grades.length; j++)
                        System.out.print("Final Exam Grade " + (i + 1) + ": ");
                    grades[i][j] = input.nextInt();
                }
                for (int i = 0; i < 10; i++)
                    grades[i] = midterm1[i] + midterm2[i] + finalExam
                int minGrade = grades[0];
                int maxGrade = grades[0];
                for (int i = 1; i < 10; i++) {
                    if (minGrade > grades[i])
                        minGrade = grades[i];
                    if (maxGrade < grades[i])
                        maxGrade = grades[i];
                }
                System.out.println("The minimum grade is " + minGrade);
                System.out.println("The maximum grade is " + maxGrade);
            }
        }
    }

Also, this is cross-posted: help w/ storing/scanning numbers in two dimensional arrays - Java Forums
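A corrected sketch of what the assignment asks for (the class and method names are mine, not from the post): one int[4][10] where rows 0–2 are the three exams read from input, row 3 holds the per-student totals, and the min/max scan works over any grade range because it starts from the first total rather than an assumed bound:

```java
import java.util.Scanner;

public class GradeBook {
    static final int STUDENTS = 10;

    // Rows of the 2-D array: 0 = Midterm 1, 1 = Midterm 2, 2 = Final, 3 = totals.
    static int[][] readGrades(Scanner input) {
        int[][] grades = new int[4][STUDENTS];
        String[] exams = {"Midterm 1", "Midterm 2", "Final Exam"};
        for (int row = 0; row < exams.length; row++) {
            System.out.println("Enter the 10 " + exams[row] + " grades:");
            for (int col = 0; col < STUDENTS; col++) {
                grades[row][col] = input.nextInt();
            }
        }
        // Row 3: column-wise total of the three exam rows.
        for (int col = 0; col < STUDENTS; col++) {
            grades[3][col] = grades[0][col] + grades[1][col] + grades[2][col];
        }
        return grades;
    }

    // Start from the first total, so any grade range works (not just 0-100).
    static int min(int[] totals) {
        int m = totals[0];
        for (int t : totals) if (t < m) m = t;
        return m;
    }

    static int max(int[] totals) {
        int m = totals[0];
        for (int t : totals) if (t > m) m = t;
        return m;
    }

    public static void main(String[] args) {
        int[][] grades = readGrades(new Scanner(System.in));
        System.out.println("The minimum total grade is " + min(grades[3]));
        System.out.println("The maximum total grade is " + max(grades[3]));
    }
}
```

The key fix over the posted code: the outer loop runs over the 4 rows (exams plus totals) and the inner loop over the 10 students, instead of nesting two 10x10 loops, and the totals are stored in row 3 of the same 2-D array as the assignment requires.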
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/6266-help-w-storing-scanning-numbers-two-dimensional-arrays-printingthethread.html
django-taggable

django-taggable is a library that implements efficient tagging for the Django Web Framework 1.1+, written by Gustavo Picón and licensed under the Apache License 2.0.

django-taggable is:

- Flexible: uses tagtools to choose between popular tagging styles (Flickr, Delicious, etc.), or define your own. You can also easily have several tag fields per object, or have different tag "namespaces" to be used between one, some, or all of your taggable objects. Your project, your choice.
- Fast: no GenericForeignKey madness.
- Easy: uses Django model inheritance with abstract classes to define your own models. The API isn't "magical".
- Clean: testable and well-tested code base. Code/branch test coverage is 100%.
https://bitbucket.org/tabo/django-taggable/src/110e0e2a61197fde7f5737e5ddfc8c1a48a5b154/docs/index.rst
buntdb alternatives and similar packages

Based on the "Database" category. Alternatively, view buntdb alternatives based on common mentions on social networks and blogs.

- prometheus (10.0/9.7) – The Prometheus monitoring system and time series database.
- cockroach (9.9/10.0) – CockroachDB: the open source, cloud-native distributed SQL database.
- tidb (9.9/10.0) – TiDB is an open source distributed HTAP database compatible with the MySQL protocol.
- influxdb (9.9/9.7) – Scalable datastore for metrics, events, and real-time analytics.
- jaeger (9.8/9.3) – CNCF Jaeger, a Distributed Tracing Platform.
- bolt (9.8/0.0) – A low-level key/value database for Go.
- vitess (9.8/10.0) – Vitess is a database clustering system for horizontal scaling of MySQL.
- dgraph (9.8/9.6) – Native GraphQL Database with graph backend.
- badger (9.7/8.1) – Fast key-value DB in Go.
- groupcache (9.7/0.7) – groupcache is a caching and cache-filling library, intended as a replacement for memcached in many cases.
- TinyGo (9.6/9.5) – Go compiler for small places. Microcontrollers, WebAssembly, and command-line tools. Based on LLVM.
- Milvus (9.6/9.9) – An open-source vector database for embedding similarity search and AI applications.
- rqlite (9.6/9.8) – The lightweight, distributed relational database built on SQLite.
- noms (9.5/1.9) – The versioned, forkable, syncable database.
- Tile38 (9.5/9.0) – Real-time Geospatial and Geofencing.
- pgweb (9.5/5.4) – Cross-platform client for PostgreSQL databases.
- kingshard (9.5/0.0) – A high-performance MySQL proxy.
- migrate (9.5/7.7) – Database migrations. CLI and Golang library.
- go-cache (9.4/0.0) – An in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications.
- VictoriaMetrics (9.3/9.9) – VictoriaMetrics: fast, cost-effective monitoring solution and time series database.
- BigCache (9.3/3.1) – Efficient cache for gigabytes of data written in Go.
- goleveldb (9.3/3.2) – LevelDB key/value database in Go.
- go-mysql-elasticsearch (9.2/0.0) – Sync MySQL data into elasticsearch.
- bbolt (9.2/5.0) – An embedded key/value database for Go.
- ledisdb (9.1/0.0) – A high performance NoSQL Database Server powered by Go.
- go-mysql (9.1/8.1) – a powerful mysql toolset with Go.
- Squirrel (9.1/2.5) – Fluent SQL generation for golang.
- dtm (8.9/9.7) – A cross-language distributed transaction manager. Supports XA, TCC, saga, and transactional messages.
- pREST (8.8/9.4) – pREST (PostgreSQL REST): low-code, simplify and accelerate development; instant, realtime, high-performance on any Postgres application, existing or new.
- immudb (8.8/9.9) – immudb: world's fastest immutable database, built on a zero trust model.
- xo (8.8/8.9) – Command line tool to generate idiomatic Go code for SQL databases supporting PostgreSQL, MySQL, SQLite, Oracle, and Microsoft SQL Server.
- tiedot (8.8/1.3) – A rudimentary implementation of a basic document (NoSQL) database in Go.
- go-memdb (8.7/2.7) – Golang in-memory database built on immutable radix trees.
- sql-migrate (8.6/3.4) – SQL schema migration tool for Go.
- cache2go (8.5/2.4) – Concurrency-safe Go caching library with expiration capabilities and access counters.
- rosedb (8.4/8.9)
- nutsdb – A simple, fast, embeddable, persistent key/value store written in pure Go. It supports fully serializable transactions and many data structures such as list, set, sorted set.
- GCache (8.3/3.0) – An in-memory cache library for golang. It supports multiple eviction policies: LRU, LFU, ARC.
- gendry (8.0/2.4) – a golang library for sql builder.
- CovenantSQL (7.9/1.3) – A decentralized, trusted, high performance, SQL database with blockchain features.
- fastcache (7.8/3.2) – Fast thread-safe inmemory cache for big number of entries in Go. Minimizes GC overhead.
- goqu (7.8/7.0) – SQL builder and query library for golang.
- diskv (7.7/0.0) – A disk-backed key-value store.
- orchestrator (7.7/0.0) – MySQL replication topology manager/visualizer.
- BTrDB (7.6/0.0) – Berkeley Tree Database (BTrDB) server.
- moss (7.5/1.5) – moss: a simple, fast, ordered, persistable, key-val storage library for golang.
- skeema (7.5/7.3) – Schema management CLI for MySQL.
- chproxy (7.5/2.1) – ClickHouse http proxy and load balancer.
- Databunker (7.4/8.9) – Secure SDK/vault for personal records/PII built to comply with GDPR.
- eliasdb (7.3/2.7) – EliasDB, a graph-based database.

README
Features

- In-memory database for fast reads and writes
- Embeddable with a simple API
- Spatial indexing for up to 20 dimensions; useful for Geospatial data
- Index fields inside JSON documents
- Collate i18n indexes using the optional collate package
- Create custom indexes for any data type
- Support for multi value indexes; similar to a SQL multi column index
- Built-in types that are easy to get up & running: String, Uint, Int, Float
- Flexible iteration of data: ascending, descending, and ranges
- Durable append-only file format for persistence
- Option to evict old items with an expiration TTL
- Tight codebase, under 2K loc using the cloc command
- ACID semantics with locking transactions that support rollbacks

Getting Started

Installing

To start using BuntDB, install Go and run go get:

    $ go get -u github.com/tidwall/buntdb

This will retrieve the library.

Opening a database

The primary object in BuntDB is a DB. To open or create your database, use the buntdb.Open() function:

    package main

    import (
        "log"

        "github.com/tidwall/buntdb"
    )

    func main() {
        // Open the data.db file. It will be created if it doesn't exist.
        db, err := buntdb.Open("data.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        ...
    }

It's also possible to open a database that does not persist to disk by using :memory: as the path of the file.

    buntdb.Open(":memory:") // Open a file that does not persist to disk.

Transactions

All reads and writes must be performed from inside a transaction. BuntDB can have one write transaction opened at a time, but can have many concurrent read transactions. Each transaction maintains a stable view of the database. In other words, once a transaction has begun, the data for that transaction cannot be changed by other transactions.

Transactions run in a function that exposes a Tx object, which represents the transaction state. While inside a transaction, all database operations should be performed using this object.
You should never access the origin DB object while inside a transaction. Doing so may have side-effects, such as blocking your application.

When a transaction fails, it will roll back and revert all changes that occurred to the database during that transaction. There's a single return value that you can use to close the transaction. For read/write transactions, returning an error this way will force the transaction to roll back. When a read/write transaction succeeds, all changes are persisted to disk.

Read-only Transactions

A read-only transaction should be used when you don't need to make changes to the data. The advantage of a read-only transaction is that there can be many running concurrently.

    err := db.View(func(tx *buntdb.Tx) error {
        ...
        return nil
    })

Read/write Transactions

A read/write transaction is used when you need to make changes to your data. There can only be one read/write transaction running at a time. So make sure you close it as soon as you are done with it.

    err := db.Update(func(tx *buntdb.Tx) error {
        ...
        return nil
    })

Setting and getting key/values

To set a value you must open a read/write transaction:

    err := db.Update(func(tx *buntdb.Tx) error {
        _, _, err := tx.Set("mykey", "myvalue", nil)
        return err
    })

To get the value:

    err := db.View(func(tx *buntdb.Tx) error {
        val, err := tx.Get("mykey")
        if err != nil {
            return err
        }
        fmt.Printf("value is %s\n", val)
        return nil
    })

Getting non-existent values will cause an ErrNotFound error.

Iterating

All key/value pairs are ordered in the database by the key. To iterate over the keys:

    err := db.View(func(tx *buntdb.Tx) error {
        err := tx.Ascend("", func(key, value string) bool {
            fmt.Printf("key: %s, value: %s\n", key, value)
            return true // keep iterating
        })
        return err
    })

There are also AscendGreaterOrEqual, AscendLessThan, AscendRange, AscendEqual, Descend, DescendLessOrEqual, DescendGreaterThan, DescendRange, and DescendEqual. Please see the documentation for more information on these functions.
Custom Indexes

Initially all data is stored in a single B-tree with each item having one key and one value. All of these items are ordered by the key. This is great for quickly getting a value from a key or iterating over the keys. Feel free to peruse the B-tree implementation.

You can also create custom indexes that allow for ordering and iterating over values. A custom index also uses a B-tree, but it's more flexible because it allows for custom ordering.

For example, let's say you want to create an index for ordering names:

```go
db.CreateIndex("names", "*", buntdb.IndexString)
```

This will create an index named names which stores and sorts all values. The second parameter is a pattern that is used to filter on keys. A * wildcard argument means that we want to accept all keys. IndexString is a built-in function that performs case-insensitive ordering on the values.

Now you can add various names:

```go
db.Update(func(tx *buntdb.Tx) error {
	tx.Set("user:0:name", "tom", nil)
	tx.Set("user:1:name", "Randi", nil)
	tx.Set("user:2:name", "jane", nil)
	tx.Set("user:4:name", "Janet", nil)
	tx.Set("user:5:name", "Paula", nil)
	tx.Set("user:6:name", "peter", nil)
	tx.Set("user:7:name", "Terri", nil)
	return nil
})
```

Finally you can iterate over the index:

```go
db.View(func(tx *buntdb.Tx) error {
	tx.Ascend("names", func(key, val string) bool {
		fmt.Printf("%s %s\n", key, val)
		return true
	})
	return nil
})
```

The output should be:

```
user:2:name jane
user:4:name Janet
user:5:name Paula
user:6:name peter
user:1:name Randi
user:7:name Terri
user:0:name tom
```

The pattern parameter can be used to filter on keys like this:

```go
db.CreateIndex("names", "user:*", buntdb.IndexString)
```

Now only items with keys that have the prefix user: will be added to the names index.

Built-in types

Along with IndexString, there is also IndexInt, IndexUint, and IndexFloat. These are built-in types for indexing. You can choose to use these or create your own.
So to create an index that is numerically ordered on an age key, we could use:

```go
db.CreateIndex("ages", "user:*:age", buntdb.IndexInt)
```

And then add values:

```go
db.Update(func(tx *buntdb.Tx) error {
	tx.Set("user:0:age", "35", nil)
	tx.Set("user:1:age", "49", nil)
	tx.Set("user:2:age", "13", nil)
	tx.Set("user:4:age", "63", nil)
	tx.Set("user:5:age", "8", nil)
	tx.Set("user:6:age", "3", nil)
	tx.Set("user:7:age", "16", nil)
	return nil
})

db.View(func(tx *buntdb.Tx) error {
	tx.Ascend("ages", func(key, val string) bool {
		fmt.Printf("%s %s\n", key, val)
		return true
	})
	return nil
})
```

The output should be:

```
user:6:age 3
user:5:age 8
user:2:age 13
user:7:age 16
user:0:age 35
user:1:age 49
user:4:age 63
```

Spatial Indexes

BuntDB has support for spatial indexes by storing rectangles in an R-tree. An R-tree is organized in a similar manner as a B-tree, and both are balanced trees. But an R-tree is special because it can operate on data that is in multiple dimensions. This is super handy for Geospatial applications.

To create a spatial index use the CreateSpatialIndex function:

```go
db.CreateSpatialIndex("fleet", "fleet:*:pos", buntdb.IndexRect)
```

IndexRect is a built-in function that converts rect strings to a format that the R-tree can use. It's easy to use this function out of the box, but you might find it better to create a custom one that renders from a different format, such as Well-known text or GeoJSON.

To add some lon,lat points to the fleet index:

```go
db.Update(func(tx *buntdb.Tx) error {
	tx.Set("fleet:0:pos", "[-115.567 33.532]", nil)
	tx.Set("fleet:1:pos", "[-116.671 35.735]", nil)
	tx.Set("fleet:2:pos", "[-113.902 31.234]", nil)
	return nil
})
```

And then you can run the Intersects function on the index:

```go
db.View(func(tx *buntdb.Tx) error {
	tx.Intersects("fleet", "[-117 30],[-112 36]", func(key, val string) bool {
		...
		return true
	})
	return nil
})
```

This will get all three positions.
k-Nearest Neighbors

Use the Nearby function to get all the positions in order of nearest to farthest:

```go
db.View(func(tx *buntdb.Tx) error {
	tx.Nearby("fleet", "[-113 33]", func(key, val string, dist float64) bool {
		...
		return true
	})
	return nil
})
```

Spatial bracket syntax

The bracket syntax [-117 30],[-112 36] is unique to BuntDB, and it's how the built-in rectangles are processed. But you are not limited to this syntax. Whatever Rect function you choose to use during CreateSpatialIndex will be used to process the parameter, in this case it's IndexRect.

- 2D rectangle: [10 15],[20 25] (Min XY: "10x15", Max XY: "20x25")
- 3D rectangle: [10 15 12],[20 25 18] (Min XYZ: "10x15x12", Max XYZ: "20x25x18")
- 2D point: [10 15] (XY: "10x15")
- LonLat point: [-112.2693 33.5123] (LatLon: "33.5123 -112.2693")
- LonLat bounding box: [-112.26 33.51],[-112.18 33.67] (Min LatLon: "33.51 -112.26", Max LatLon: "33.67 -112.18")

Notice: The longitude is the Y axis and is on the left, and latitude is the X axis and is on the right.

You can also represent Infinity by using -inf and +inf. For example, you might have the following points ([X Y M] where XY is a point and M is a timestamp):

```
[3 9 1]
[3 8 2]
[4 8 3]
[4 7 4]
[5 7 5]
[5 6 6]
```

You can then do a search for all points with M between 2-4 by calling Intersects.

```go
tx.Intersects("points", "[-inf -inf 2],[+inf +inf 4]", func(key, val string) bool {
	println(val)
	return true
})
```

Which will return:

```
[3 8 2]
[4 8 3]
[4 7 4]
```

JSON Indexes

Indexes can be created on individual fields inside JSON documents. BuntDB uses GJSON under the hood.
For example:

```go
package main

import (
	"fmt"

	"github.com/tidwall/buntdb"
)

func main() {
	db, _ := buntdb.Open(":memory:")
	db.CreateIndex("last_name", "*", buntdb.IndexJSON("name.last"))
	db.CreateIndex("age", "*", buntdb.IndexJSON("age"))
	db.Update(func(tx *buntdb.Tx) error {
		tx.Set("1", `{"name":{"first":"Tom","last":"Johnson"},"age":38}`, nil)
		tx.Set("2", `{"name":{"first":"Janet","last":"Prichard"},"age":47}`, nil)
		tx.Set("3", `{"name":{"first":"Carol","last":"Anderson"},"age":52}`, nil)
		tx.Set("4", `{"name":{"first":"Alan","last":"Cooper"},"age":28}`, nil)
		return nil
	})
	db.View(func(tx *buntdb.Tx) error {
		fmt.Println("Order by last name")
		tx.Ascend("last_name", func(key, value string) bool {
			fmt.Printf("%s: %s\n", key, value)
			return true
		})
		fmt.Println("Order by age")
		tx.Ascend("age", func(key, value string) bool {
			fmt.Printf("%s: %s\n", key, value)
			return true
		})
		fmt.Println("Order by age range 30-50")
		tx.AscendRange("age", `{"age":30}`, `{"age":50}`, func(key, value string) bool {
			fmt.Printf("%s: %s\n", key, value)
			return true
		})
		return nil
	})
}
```

Results:

```
Order by last name
3: {"name":{"first":"Carol","last":"Anderson"},"age":52}
4: {"name":{"first":"Alan","last":"Cooper"},"age":28}
1: {"name":{"first":"Tom","last":"Johnson"},"age":38}
2: {"name":{"first":"Janet","last":"Prichard"},"age":47}
Order by age
4: {"name":{"first":"Alan","last":"Cooper"},"age":28}
1: {"name":{"first":"Tom","last":"Johnson"},"age":38}
2: {"name":{"first":"Janet","last":"Prichard"},"age":47}
3: {"name":{"first":"Carol","last":"Anderson"},"age":52}
Order by age range 30-50
1: {"name":{"first":"Tom","last":"Johnson"},"age":38}
2: {"name":{"first":"Janet","last":"Prichard"},"age":47}
```

Multi Value Index

With BuntDB it's possible to join multiple values on a single index. This is similar to a multi column index in a traditional SQL database.
In this example we are creating a multi value index on "name.last" and "age":

```go
db, _ := buntdb.Open(":memory:")
db.CreateIndex("last_name_age", "*", buntdb.IndexJSON("name.last"), buntdb.IndexJSON("age"))
db.Update(func(tx *buntdb.Tx) error {
	tx.Set("1", `{"name":{"first":"Tom","last":"Johnson"},"age":38}`, nil)
	tx.Set("2", `{"name":{"first":"Janet","last":"Prichard"},"age":47}`, nil)
	tx.Set("3", `{"name":{"first":"Carol","last":"Anderson"},"age":52}`, nil)
	tx.Set("4", `{"name":{"first":"Alan","last":"Cooper"},"age":28}`, nil)
	tx.Set("5", `{"name":{"first":"Sam","last":"Anderson"},"age":51}`, nil)
	tx.Set("6", `{"name":{"first":"Melinda","last":"Prichard"},"age":44}`, nil)
	return nil
})
db.View(func(tx *buntdb.Tx) error {
	tx.Ascend("last_name_age", func(key, value string) bool {
		fmt.Printf("%s: %s\n", key, value)
		return true
	})
	return nil
})

// Output:
// 5: {"name":{"first":"Sam","last":"Anderson"},"age":51}
// 3: {"name":{"first":"Carol","last":"Anderson"},"age":52}
// 4: {"name":{"first":"Alan","last":"Cooper"},"age":28}
// 1: {"name":{"first":"Tom","last":"Johnson"},"age":38}
// 6: {"name":{"first":"Melinda","last":"Prichard"},"age":44}
// 2: {"name":{"first":"Janet","last":"Prichard"},"age":47}
```

Descending Ordered Index

Any index can be put in descending order by wrapping its less function with buntdb.Desc.

```go
db.CreateIndex("last_name_age", "*",
	buntdb.IndexJSON("name.last"),
	buntdb.Desc(buntdb.IndexJSON("age")))
```

This will create a multi value index where the last name is ascending and the age is descending.

Collate i18n Indexes

Using the external collate package it's possible to create indexes that are sorted by the specified language. This is similar to the SQL COLLATE keyword found in traditional databases.

To install:

```
go get -u github.com/tidwall/collate
```

For example:

```go
import "github.com/tidwall/collate"

// To sort case-insensitive in French.
db.CreateIndex("name", "*", collate.IndexString("FRENCH_CI"))

// To specify that numbers should sort numerically ("2" < "12")
// and use a comma to represent a decimal point.
db.CreateIndex("amount", "*", collate.IndexString("FRENCH_NUM"))
```

There's also support for collation on JSON indexes:

```go
db.CreateIndex("last_name", "*", collate.IndexJSON("CHINESE_CI", "name.last"))
```

Data Expiration

Items can be automatically evicted by using the SetOptions object in the Set function to set a TTL.

```go
db.Update(func(tx *buntdb.Tx) error {
	tx.Set("mykey", "myval", &buntdb.SetOptions{Expires: true, TTL: time.Second})
	return nil
})
```

Now mykey will automatically be deleted after one second. You can remove the TTL by setting the value again with the same key/value, but with the options parameter set to nil.

Delete while iterating

BuntDB does not currently support deleting a key while in the process of iterating. As a workaround you'll need to delete keys following the completion of the iterator.

```go
var delkeys []string
tx.AscendKeys("object:*", func(k, v string) bool {
	if someCondition(k) {
		delkeys = append(delkeys, k)
	}
	return true // continue
})
for _, k := range delkeys {
	if _, err = tx.Delete(k); err != nil {
		return err
	}
}
```

Append-only File

BuntDB uses an AOF (append-only file), which is a log of all database changes that occur from operations like Set() and Delete(). The format of this file looks like:

```
set key:1 value1
set key:2 value2
set key:1 value3
del key:2
...
```

When the database opens again, it will read back the aof file and process each command in exact order. This read process happens one time when the database opens. From there on the file is only appended.

As you may guess, this log file can grow large over time. There's a background routine that automatically shrinks the log file when it gets too large. There is also a Shrink() function which will rewrite the aof file so that it contains only the items in the database. The shrink operation does not lock up the database, so read and write transactions can continue while shrinking is in process.

Durability and fsync

By default BuntDB executes an fsync once every second on the aof file.
This simply means that there's a chance that up to one second of data might be lost. If you need higher durability, then there's an optional database config setting Config.SyncPolicy which can be set to Always.

The Config.SyncPolicy has the following options:

- Never - fsync is managed by the operating system, less safe
- EverySecond - fsync every second, fast and safer; this is the default
- Always - fsync after every write, very durable, slower

Config

Here are some configuration options that can be used to change various behaviors of the database.

- SyncPolicy adjusts how often the data is synced to disk. This value can be Never, EverySecond, or Always. Default is EverySecond.
- AutoShrinkPercentage is used by the background process to trigger a shrink of the aof file when the size of the file is larger than the percentage of the result of the previous shrunk file. For example, if this value is 100, and the last shrink process resulted in a 100mb file, then the new aof file must be 200mb before a shrink is triggered. Default is 100.
- AutoShrinkMinSize defines the minimum size of the aof file before an automatic shrink can occur. Default is 32MB.
- AutoShrinkDisabled turns off automatic background shrinking. Default is false.

To update the configuration you should call ReadConfig followed by SetConfig. For example:

```go
var config buntdb.Config
if err := db.ReadConfig(&config); err != nil {
	log.Fatal(err)
}
if err := db.SetConfig(config); err != nil {
	log.Fatal(err)
}
```

Performance

How fast is BuntDB? Here are some example benchmarks when using BuntDB in a Raft Store implementation.

You can also run the standard Go benchmark tool from the project root directory:

```
go test --bench=.
```

BuntDB-Benchmark

There's a custom utility that was created specifically for benchmarking BuntDB.
These are the results from running the benchmarks on a MacBook Pro 15" 2.8 GHz Intel Core i7:

```
$ buntdb-benchmark -q
GET: 4609604.74 operations per second
SET: 248500.33 operations per second
ASCEND_100: 2268998.79 operations per second
ASCEND_200: 1178388.14 operations per second
ASCEND_400: 679134.20 operations per second
ASCEND_800: 348445.55 operations per second
DESCEND_100: 2313821.69 operations per second
DESCEND_200: 1292738.38 operations per second
DESCEND_400: 675258.76 operations per second
DESCEND_800: 337481.67 operations per second
SPATIAL_SET: 134824.60 operations per second
SPATIAL_INTERSECTS_100: 939491.47 operations per second
SPATIAL_INTERSECTS_200: 561590.40 operations per second
SPATIAL_INTERSECTS_400: 306951.15 operations per second
SPATIAL_INTERSECTS_800: 159673.91 operations per second
```

To install this utility:

```
go get github.com/tidwall/buntdb-benchmark
```

License

BuntDB source code is available under the MIT License.

*Note that all license references and agreements mentioned in the buntdb README section above are relevant to that project's source code only.
https://go.libhunt.com/buntdb-alternatives
CC-MAIN-2021-43
refinedweb
3,757
51.24
Communities

How to speed up the Recon Console?
Thomas Amann, Jun 8, 2009 9:14 AM

Hi,
I've got a 75 P1 installation of the ITSM Suite and CMDB on an ESX server (Win 2003, dual core 2.4 GHz, 4 GB RAM). The Recon Console opens successfully but needs about 20 minutes or longer to load the data of the existing jobs (activities, configurations, ...). I tried to create a new recon job but aborted after waiting nearly half an hour for the menus (namespace, dataset, ...) to load. At the moment it is not possible to create a recon job due to the poor performance. Does anybody have an idea how to speed up the loading of the Recon Console, configured with standard values?
Thanks in advance,
Tom

1. Re: How to speed up the Recon Console?
Lukas Kryske, Jun 15, 2009 8:18 AM (in response to Thomas Amann)

Hi Tom,
What database are you using, Oracle or MS SQL? Try to check if indexing of the database is turned on. Is the HDD busy while the Recon Console is lagging?

2. Re: How to speed up the Recon Console?
Thomas Amann, Jun 16, 2009 3:19 AM (in response to Lukas Kryske)

Hi,
It's a non-local Oracle DB and indexing is on! I don't know if the hard disk is busy during the lag, but I can check this on the customer's side this afternoon. We just noticed the following: on requesting the console, the user tool works with CPU up to 70%, then the server works with up to 80%; after this, both of them are waiting, and minutes later the user tool gets the data. But the DB isn't working hard...

3. Re: How to speed up the Recon Console?
Vijay Dadi, Jul 1, 2009 1:07 AM (in response to Thomas Amann)

Hi tamann,
A recon job becomes slow when there is a huge DSO backlog. You can check this using the SQL command:

SELECT count(*) FROM distributed_pending;

This value should keep on decreasing. If not, then it means that DSO processing is hung, so you need to restart the DSO by killing the "Serverds" process in Task Manager. Try this, it might work.
Thanks,
Vj

4.
Re: How to speed up the Recon Console?
Thomas Amann, Jul 1, 2009 3:33 AM (in response to Vijay Dadi)

Hi,
Thanks for the tip - I'll try this when I'm able to save a schedule to a job and start one! ;-)
Primarily I meant that the loading of the console itself is very slow. For example: I can see the recon jobs on the console but not the details, like the number of actions or the schedule.
Greetings,
Tom
https://communities.bmc.com/message/99458
So I have this string that has changing space in the end. For example (this isn't my actual code, just for example purposes):

```cpp
std::string myString = "hey!";
myString.reserve(myString.size() + MAX_BUFFER_SPACE);

// ( ... making buffer and stuff ... )

size_t l = myString.size();
if (bufferLen >= MAX_BUFFER_SPACE) throw;
myString.insert(l, buffer, bufferLen);

// ( ... using this new string ... )

myString.resize(l); // <-- this is my problem!
```

```cpp
#include <string>
#include <vector>

using namespace std;

#define MAX_BUFFER 30

int main(int argc, char **argv)
{
    vector<string> myVec = { "hey", "asd", "haha" };
    vector<string> clone;
    for (int i = myVec.size(); i--;) {
        myVec[i].reserve(myVec[i].size() + MAX_BUFFER);
        clone.push_back(myVec[i]);
    }
    return 0;
}
```

The capacity is not required to be the same when a std::string is copied. §21.3.1.2/2, basic_string constructors and assignment operators [string.cons]:

Table 49: basic_string(const basic_string&) effects
- data(): points at the first element of an allocated copy of the array whose first element is pointed at by str.data()
- size(): str.size()
- capacity(): a value at least as large as size()

The only guarantee is that the capacity of the string will be at least as large as its size after copying. This means you have to do it yourself:

```cpp
for (int i = myVec.size(); i--;) {
    myVec[i].reserve(myVec[i].size() + MAX_BUFFER);
    clone.push_back(myVec[i]);
    clone.back().reserve(myVec[i].capacity());
}
```
https://codedump.io/share/bEFEqc0hxApc/1/c-how-to-preserve-string-capacity-when-copying
I'm following The Code Project's tutorial on embedding Python and it's all going well until I start to make functions visible to the interpreter. Like:

```cpp
#include <iostream>
#include <py/python.h>

static int g_iNumArgs = 0;

static PyObject* emb_NumArgs(PyObject* poSelf, PyObject* poArgs)
{
    if (!PyArg_ParseTuple(poArgs, ":g_iNumArgs"))
        return 0;
    return Py_BuildValue("i", g_iNumArgs);
}

static PyMethodDef EmbMethods[] = {
    {"NumArgs", emb_NumArgs, METH_VARARGS,
     "Return the number of arguments received by the process."},
    {0, 0, 0, 0}
};

int main(int argc, char* argv[])
{
    Py_Initialize();
    g_iNumArgs = argc;
    Py_InitModule("emb", EmbMethods);
    Py_Finalize();
    return 0;
}
```

I get the link error:

```
PyEmbed.obj : error LNK2019: unresolved external symbol __imp__Py_InitModule4TraceRefs referenced in function _main
```

I'm linking python24.lib and I've tried all the others that come with the installation but nothing is working. Can anyone help?

EDIT: Oh... yeah, I didn't have python24_d.lib (apparently it doesn't come with the installation) so I just made a copy of python24.lib and renamed it... like an idiot. I just ran a Release build and it works fine. Off hunting for the _d version.

EDIT2: Well that's not happening (getting hold of it) so I'll just #undef _DEBUG. I don't like it though...
https://cboard.cprogramming.com/cplusplus-programming/79957-embedding-python-link-error-printable-thread.html
Workaround for PPSChannel error when running QNX PlayBook specific APIs on the Desktop

First thing is to note this is a workaround and might not work in the future or might not cover all the errors you might see.

Take the simple use case of creating an AIR application for PlayBook with a qnx.ui.text.TextInput component in the display list. Because the qnx.ui.text.TextInput class allows you to define which PlayBook KeyboardType is shown on the device, it must call out to a PlayBook specific AIR API. If you try and run this application on the desktop you will see this error:

This error is because the AIR runtime on the desktop does not have a qnx.pps.PPSChannel class; this class is specific to the PlayBook version of the AIR runtime that RIM has extended in their BlackBerry Tablet OS.

Ok, so what if you still want to run it on the desktop to work on UI layout and not necessarily full functionality? This is the workaround to do just that, and again this might not work for all PlayBook specific APIs, and for sure some features of QNX UI components might not work correctly. But it's useful for UI design work without having to publish out to the simulator a bunch of times.

We'll be working with this ActionScript based PlayBook application that just adds a qnx.ui.text.TextInput to the display list.

```actionscript
package
{
	import flash.display.Sprite;
	import flash.display.StageAlign;
	import flash.display.StageScaleMode;

	import qnx.ui.text.TextInput;

	public class PPSChannelTest extends Sprite
	{
		public function PPSChannelTest()
		{
			super();

			stage.align = StageAlign.TOP_LEFT;
			stage.scaleMode = StageScaleMode.NO_SCALE;

			var input:TextInput = new TextInput();
			addChild(input);
		}
	}
}
```

First thing to do is in your Flash Builder project go into Project -> Properties -> Build Path, and change the qnx-air.swc to be "merged into code" instead of "external". See the images below:

Now that the classes are merged into the application SWF we can monkey patch the qnx.pps.PPSChannel class.
To do this go to New -> Package and fill with "qnx.pps" as seen below:

The next step is to create an ActionScript file by going to New -> ActionScript File. You can't create a New -> ActionScript Class because the dialog will error out saying the class is already there in qnx-air.swc. So create a new blank ActionScript file in the qnx.pps package folder and call it PPSChannel. Then open up the file and copy this code into it:

```actionscript
package qnx.pps
{
	public class PPSChannel
	{
		public function PPSChannel()
		{
		}
	}
}
```

Now save the files, make sure the project does a new compile, and run it on the desktop. You should be able to see the qnx.ui.text.TextInput box on the desktop.
http://renaun.com/blog/2011/02/workaround-for-ppschannel-error-when-running-qnx-playbook-specific-apis-on-the-desktop/
Great stuff – What are the chances of CSS-in-JS (like styled-components) support landing in WebStorm?

No, sorry, no plans for that at the moment: it's not possible to implement a general solution that will work for any CSS-in-JS library. You can configure code highlighting for Styled Components via TextMate Bundles (Preferences | Editor | TextMate Bundles). The required files are in the PR mentioned here:

Ekaterina, is there a place where I can upvote this feature? This is a major issue for our team. Thanks

If you're using Styled Components, feel free to vote for this feature request: If you're using a different CSS-in-JS library, please submit a new request. Thank you!

As always, awesome features :) Will the webpack support include support for @import "~normalize.css/normalize.css" etc., where the tilde is used?

It can be expected in 2017.3 – see the linked tickets.

The webpack support for aliases doesn't seem to work when you use a webpack configuration by env. For example, if your config file is just:

```js
module.exports = function(env) {
    return require(`./webpack.${env}.js`)(env);
};
```

Can you please try specifying the path to the configuration file you currently want to use in Preferences | Languages & Frameworks | webpack. WebStorm needs to run a specific webpack configuration file to build a project model.

Great work. A problem with module directory support is that it doesn't pick up the resolve/alias directories for module paths unless you change the webpack file. So on starting the editor I have to add an empty space at the end of the webpack.config.js file or remove one to trigger a webpack configuration change update, which then picks up the module directories.

What does your webpack config look like (is it a single file, or a merged one)? Can you provide your config(s) plus your idea.log?

Unfortunately webpack support doesn't work for "composed" configurations like they recommend in a book example repo :(

Thanks for the feedback!
The support for such configurations will land in WebStorm 2017.3 (the early access preview will start quite soon). As a workaround please try the suggestion in the comment:

Thanks for the reply! Unfortunately the solution from the comment doesn't work as expected for me. It still underlines imports with information about the module not being installed. However, I made another file only to help WebStorm:

```js
// idea.resolve.webpack.js
const projectAliases = require('./webpack.parts').resolveProjectDependencies;
module.exports = projectAliases;
```

Resolving webpack aliases works fine in JavaScript files. However, when trying to import a file in a TypeScript file by using an alias, IntelliJ still doesn't understand these aliases. Could you please fix this? Thanks

Can you please report an issue about that on our tracker? We would really appreciate it if you attach a sample project that uses webpack with TypeScript. Thank you!

Hey, any chance of adding an option to set roots for webpack configs? In our project we have different bundles with different webpack configs, but PhpStorm only looks at the root webpack config.

Not sure I follow you… WebStorm looks at the config you select in Settings | Languages & Frameworks | JavaScript | Webpack. Do you miss a possibility to choose multiple configs instead of a single one?

Hello, I want to choose multiple configs instead of a single one. Could you leave a reply?

Not currently possible, please vote for the feature request to be notified of any progress with it.

Webpack is not under Settings | Languages & Frameworks | JavaScript.

What WebStorm version do you use? Make sure that you are in the (project) Preferences and not in the Default Preferences.

Aliases seem not to work when the webpack config file returns a promise, like what vue-cli generates…

That has already been fixed: the fix will land in the upcoming WebStorm 2017.3.3. Sorry for the inconvenience!
But this doesn't seem to work on 2018.1 on macOS. In Settings | Languages & Frameworks | JavaScript | Webpack the right webpack file is selected…

```js
const defaults = {
    devtool: 'source-maps',
    entry: './app/app.js',
    output: {
        path: path.join(__dirname, 'public'),
        filename: 'main.js'
    },
    resolve: {
        root: path.resolve(__dirname),
        alias: {
            '@app': 'app',
            '@actions': 'app/actions',
            '@api': 'app/api',
            '@components': 'app/components',
            '@common': 'app/components/common',
            '@stylesheets': 'stylesheets'
        },
        extensions: ['', '.js', '.jsx']
    },
    ...
```

When I include this in the code, the IDE is not able to find the source:

```js
import MyTable from '@common/table.js';
```

Hello Marco, can you please share with us your webpack config file? You can send it to our tech support using this form: Thank you!

Sorry to post on a closed issue but for the record: I managed to solve this by creating a webpack.config.js file separately like:

```js
const path = require('path')
const webpack = require('webpack')

module.exports = {
    ...
    resolve: {
        extensions: ['.js', '.json', '.vue'],
        alias: {
            '~': path.resolve(__dirname, './resources/assets/js')
        }
    },
    ...
}
```

And then importing it in the webpack.mix.js like:

```js
const config = require('./webpack.config')
...
mix.webpackConfig(config)
```

Make sure the webpack configuration file is pointed to correctly in PhpStorm under: Settings > Languages & Frameworks > JavaScript > Webpack

Hi Ignacio, thank you for your comment! What was the initial issue?

I'm setting up webpack for a Vue project, but I got an error like this: "Can't analyse webpack.base.conf.js: coding assistance will ignore module resolution rules in this file. Possible reasons: this file is not a valid webpack configuration file or its format is not currently supported by the IDE." What can I do?

Please contact our tech support and provide your webpack configuration files and the IDE version: Thank you!

I know this is an old post, but hoping to get a response before opening a more detailed ticket.
Is there a way to enable the enhanced module resolution for .scss @imports? I have a webpack config with:

```js
resolve: {
    modules: [
        path.resolve(__dirname, 'node_modules'),
        path.resolve(__dirname, './'),
    ]
}
```

When I import something in a JS file like:

```js
import 'node_modules/ag-grid/src/styles/ag-grid.scss';
```

everything works, and the software recognizes that this is a valid path. But the same import in a .scss file:

```scss
@import "node_modules/ag-grid/src/styles/ag-grid.scss";
```

prompts a "Cannot resolve directory" error. I'm using PyCharm 2018.1.4, but I have the same issue in WebStorm.

Please try using `@import "~ag-grid/src/styles/ag-grid.scss"` in your SCSS files instead. This is the recommended way for webpack's sass-loader. More info here and here.

That worked perfectly. Thanks!

Is there a plan to include support for webpack configs written using ES6? My webpack config is written using ES6 syntax, therefore when I open WebStorm I am greeted with the following message: "Can't analyse webpack.config.js: coding assistance will ignore module resolution rules in this file. Possible reasons: this file is not a valid webpack configuration file or its format is not currently supported by the IDE. Error details: Unexpected identifier." I have set up a Babel watcher which transpiles the webpack config in the IDE, but WebStorm seems to not take the watcher into consideration.

Not at the moment, sorry. Right now WebStorm runs the config directly with the node version selected in the project settings. Please vote for this issue:

So WebStorm 2018.2 also can't handle webpack.config.babel.js? I have a webpack.config.babel.js which contains some ES6, and I get a message in the event log: "Module resolution rules from webpack.config.babel.js are now used for coding assistance.", but in fact the resolution is not working.

I got it working. The reason WebStorm didn't handle this config was that the webpack config function was async.
I think it's probably a bug.

It can, if the config name has *.babel.* in it and Babel is set up correctly. Can you please share your webpack.config.babel.js with us, as well as your .babelrc?

webpack.config.babel.js: babel.config.js:

Without async it works. With async there is an error in the event log: "Can't analyse webpack.config.babel.js: coding assistance will ignore module resolution rules in this file. Possible reasons: this file is not a valid webpack configuration file or its format is not currently supported by the IDE. Error details: regeneratorRuntime is not defined." If there is no .babel part in the config name, there is always an OK message in the event log: "Module resolution rules from webpack.config.babel.js are now used for coding assistance." But alias resolution still doesn't work with async in the config.

require('@babel/polyfill'); at the top of webpack.config.babel.js yields "Module resolution rules from webpack.config.babel.js are now used for coding assistance." in the event log. But resolve.alias is still not recognized by WebStorm while the webpack config function is async.

Hello Andrey, we've reproduced the problem and reported an issue that you can follow:

```js
const path = require('path');

module.export = {
    resolve: {
        alias: {
            '~': path.resolve(__dirname, './src'),
        },
    },
};
```

Actually, there is no webpack at work in my project (I use Parcel), and I created a webpack.config.js with this code. Unfortunately the alias resolution failed.

You have a mistake in your config file: it should be exports and not export.

Is there any plan to support webpack enhanced module resolution within regular CSS imports? I.e. @import url(~alias/file.css); In the current version of WebStorm (2018.2) it doesn't seem to work, while Sass imports are working.

We have reported an issue that you can follow for the updates:

The alias resolving for auto imports used to work for me but no longer does.
In webpack.config.dev.js:

```js
resolve: {
    alias: {
        common: path.resolve(path.resolve(__dirname, "..", "common"))
    }
}
```

And I marked /home/john/ProjectName/webpack.config.dev.js as the webpack configuration file in WebStorm. When using the auto import feature, the import still resolves to:

```js
import Component from "../../../common/src/components/Component";
```

What IDE version do you use?

Currently I use 2018.3.4, but this broke a couple of versions back for me.

Can you please send the IDE logs (menu Help – Compress Logs) to our tech support for the investigation? Thanks!

Done.

I imported settings from a colleague who is working on the same code as I am, but my WebStorm (2018.3.5) is unable to analyse the webpack.config.js, rendering all the aliases useless (as well as throwing lots of "unused exports" warnings in the code). This has been plaguing me for a while, but I was unable to resolve it on my own, so I hoped that importing the colleague's settings would fix the issue, but it did not. Please let me know what to provide and where so that I can move past this annoyance. Thanks!

Hi Serge, can you please try running Invalidate Caches and Restart in the File menu and then reopening the project? If it doesn't help to successfully analyse the config file, please send the IDE logs (menu Help – Compress Logs) to our tech support team – they might contain some clues why the analysis has not passed. Thanks!

Thanks for the help. Unfortunately that did not resolve the issue, so I'll reach out to tech support. Thanks again.
https://blog.jetbrains.com/webstorm/2017/06/webstorm-2017-2-eap-172-2827/?replytocom=327818
glbind includes a full implementation of the OpenGL headers (auto-generated from the OpenGL spec) so there's no need for the official headers or SDK. Unlike the official headers, the platform-specific sections are all contained within the same file. glbind is a single file library with no dependencies. There's no need to link to any libraries, nor do you need to include any other headers. Everything you need is included in glbind.h.

    #define GLBIND_IMPLEMENTATION
    #include "glbind.h"

    int main()
    {
        GLenum result = glbInit(NULL, NULL);
        if (result != GL_NO_ERROR) {
            printf("Failed to initialize glbind.");
            return -1;
        }

        ...

        glClearColor(0, 0, 0, 0);
        glClear(GL_COLOR_BUFFER_BIT);

        ...

        glbUninit();
        return 0;
    }

The example above binds everything to global scope and uses default settings for the internal rendering context. You can also initialize glbind like the code below.

    GLBapi gl;
    GLBconfig config = glbConfigInit();
    config.singleBuffered = GL_TRUE; /* Don't use double-buffering on the internal rendering context. */

    GLenum result = glbInit(&gl, &config);
    if (result != GL_NO_ERROR) {
        ... error initializing glbind ...
    }

    #if defined(GLBIND_WGL)
    HGLRC hRC = glbGetRC();
    ... do something with hRC ...
    #endif

    #if defined(GLBIND_GLX)
    GLXContext rc = glbGetRC();
    ... do something with rc ...
    #endif

    /* Draw something using local function pointers in the "gl" object instead of global scope. */
    gl.glClearColor(0, 0, 0, 0);
    gl.glClear(GL_COLOR_BUFFER_BIT);

Since OpenGL requires a rendering context in order to retrieve function pointers, it makes sense to give the client access to it so they can avoid wasting time and memory creating their own rendering context unnecessarily. Therefore, glbind allows you to configure the internal rendering context and retrieve a handle to it so the application can make use of it. You can also initialize a GLBapi object against the current context (previously set with wglMakeCurrent or glXMakeCurrent) using glbInitContextAPI() or glbInitCurrentContextAPI().
Note, however, that before calling these functions you must have previously called glbInit(). These also do not automatically bind anything to global scope. You can explicitly bind the function pointers in a GLBapi object to global scope by using glbBindAPI().

Public domain or MIT-0 (No Attribution). Choose whichever you prefer.
https://awesomeopensource.com/project/mackron/glbind
I'm declaring a variable within int main(), and then, in one line, altering its value by passing it by reference to a function, then outputting it via cout <<. The only problem is, cout << is printing the original initialized value of the variable to the screen, rather than the newly altered version. I have come up with a solution, but I don't like it. Actually, the code is simple enough that I can post it (below). It's a simple Half Adder function: take two binary inputs, output the result and the carry. I pass the carry by reference, and have the function return the result of the addition.

#include <iostream>
using namespace std;

bool HalfAdder(bool, bool, bool&);

int main(int argc, char ** argv){
    bool input[] = {true, true};
    bool carry = false;

    cout << "Result is " << HalfAdder(input[0], input[1], carry) << " carry is " << carry << endl;

    system("PAUSE");
    return 0;
}//end of main

bool HalfAdder(bool a, bool b, bool& carry){
    if( a & b ){ //1 + 1 = 0, carry = 1
        carry = true;
        return false;
    }//end of if
    else{ //1 + 0 = 1, no carry. No carry with 0 + 0 either
        carry = false;
        //1 + 0 = 1; 0 + 0 = 0
        return (a || b) ? true : false;
    }//end of else
}//end of HalfAdder()

If you run this, unfortunately, carry is always 0. False. Because it's initialized as false. This, of course, despite the fact that I pass it by reference in the call. While I'm sure the C/C++ veterans have probably already scrolled down to the reply box by now, for those of you still reading, I'd like to offer a theory for some feedback (I'll do the research later tonight). I believe that the problem is that I'm passing HalfAdder as a parameter to the << member function of cout on the same line that I pass carry to another instance of <<. So while I expected this sort of thing to happen:

cout.operator<<("Result is");
cout.operator<<(HalfAdder());
cout.operator<<("carry is");
cout.operator<<(carry);

What really happened is something like...
cout.operator<<("Result is", HalfAdder(), "carry is", carry);

Not that exactly, but that basic concept. What I mean is, maybe all of the parameters I passed to << were copied immediately, and then HalfAdder() was executed, so that even though carry has been altered, it's too late; the original value of carry was copied already. I tested this out by running the same code but adding an extra line before system("PAUSE"):

cout << "Result is " << HalfAdder(input[0], input[1], carry) << " carry is " << carry << endl;
cout << "But the true carry is " << carry << endl;
system("PAUSE");

And, hey, it says the true carry is 1, as expected. That's actually pretty amazing. I never even thought of that. Oddly enough, I compiled this in Dev-C++ before handing it in for class, and I assume it worked fine, otherwise I probably wouldn't have handed it in. Is it even remotely possible this is a compiler-specific thing? My most recent build of it where I noticed the error was in MSVC++ 2012. Anyway, thanks for your time. Sorry if this was too much to read. I just think it's a pretty cool error to have made.
https://www.gamedev.net/topic/632599-a-strange-issue-with-the-scope-of-the-operator/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project. On Sun, Jun 29, 2003 at 11:31:58PM +0200, Gabriel Dos Reis wrote: > | ! return __unique_copy(__first, __last, __result, __binary_pred, _IterType()); > > This should also be qualified -- even though the identifier is in the > implementor namespace. I don't agree that this is necessary. No conforming user program is affected by such a qualification. Certainly the patch should not be held back waiting for such qualifications. > | ! std::swap(*__first++, *__first2++); > > This swap needs not be qualified. > > swap has sort of become to have an operator-like status: It is > regarded as a fundamental operator. In the EWG, we're exploring the > notion of "regular types", and swap is considered one of the > fundamental operations on those type. We need to have them work > through ADL. I don't agree. The standard swap() is in std::, and that's the one we want to call. Herb Sutter would argue otherwise, but he also argues otherwise about every other algorithm that users are encouraged to specialize. Users are certainly allowed to declare a global swap, but if they expect it to be used by standard algorithms, they need to go the extra distance and overload the standard one. Nathan Myers ncm-nospam@cantrip.org
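The disagreement above is essentially about what later became known as the "two-step" swap idiom. A hedged illustration of the difference between the two positions (the names `mylib`, `Big`, and the generic wrappers are invented for this example):

```cpp
#include <utility>

namespace mylib {
    struct Big { int* data; };

    // A user-provided swap, found via argument-dependent lookup (ADL).
    void swap(Big& a, Big& b) { std::swap(a.data, b.data); }
}

template <typename T>
void qualified_swap(T& a, T& b) {
    std::swap(a, b);   // always the std:: version, as Nathan Myers argues
}

template <typename T>
void adl_swap(T& a, T& b) {
    using std::swap;   // fallback if no user overload exists
    swap(a, b);        // ADL prefers mylib::swap for mylib::Big
}
```

With `adl_swap`, `mylib::swap` wins overload resolution for `mylib::Big`; with `qualified_swap`, the user's overload is ignored unless `std::swap` itself is specialized — which is the extra distance the message above says users must go.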
http://gcc.gnu.org/ml/libstdc++/2003-06/msg00395.html
Revision history for Test-HTML-Spelling

v0.3.6  2014-06-18
 - Clean namespace.

v0.3.5  2014-06-04
 - Fixed bug in distribution that required the missing xt directory.

v0.3.4  2014-06-04
 - Tweaked MANIFEST.SKIP rules.
 - Updated bugtracker URL in metadata.
 - Removed QA tests from the distribution.
 - Removed unnecessary dependency from Makefile.PL.

v0.3.3  2014-02-18
 - Updated README.
 - Minor POD markup tweaks.

v0.3.2  2014-02-18
 - Made Moose attributes lazy [Rusty Conover].
 - Ignore numbers separated by dashes, e.g. "2013-2014" [Rusty Conover and Robert Rothenberg].

v0.3.1  2014-02-03
 - Updated POD formatting.
 - Fixed outdated POD that referred to self as a requirement.
 - Changed dependency from Readonly to Const::Fast and updated POD accordingly (GitHub#4).
 - Cleaned up Changes file formatting (GitHub#3).

v0.3.0  2013-12-30
 - Removed dependency on the "self" module (GitHub#2).
 - Tweaks to the POD spelling test.
 - Reformatted Changes file (GitHub#3).
 - Updated README accordingly.

v0.2.2  2013-12-30
 - Removed MYMETA files from distribution (RT#89125).
 - Added SEE ALSO section to POD (GitHub#1).

v0.2.1  2012-12-16
 - Removed example tests that were causing problems for some users.
   (These tests were meant for developers anyway.)

v0.2.0  2012-12-15
 - Added tests.
 - Added REQUIREMENTS section to POD for Pod::Readme and marked
   sections to be skipped for Pod::Readme.
 - Added support for multiple spellers parameterized by language.
 - Added checks for the lang attribute to use the appropriate speller
   for the language.
 - Updated POD appropriately.

v0.1.1  2012-11-29
 - Added test for POD spelling.
 - Moved QA tests into 'xt' directory.
 - QA tests all require RELEASE_TESTING variable to be set.
 - Developer tests require DEVELOPER_TESTING variable to be set.
 - Reformatted Changes.
 - Changed module to use Test::Builder::Module as a base as is
   recommended in Test::Builder. Updated Makefile.PL requirements
   accordingly. (Thanks to Murray Walker for noticing this.)
 - Added check_spelling method.
 - Updated POD.

v0.1.0  2012-11-24
 - First prototype released on GitHub.
https://metacpan.org/changes/distribution/Test-HTML-Spelling
In this article I will continue explaining how to implement highly functional Web 2.0 tables in J2EE using jQuery. In the previous article, "JQuery Data Tables in Java Web Applications", I explained how you can easily convert a plain HTML table to a fully functional Web 2.0 table. Here I will explain how you can add additional data management functionalities to web tables. The following example shows a web table that enables the user to edit cells using an inline editor. You can see a lot of functionality in this table for data browsing (filtering by keyword, ordering by header cell, pagination, changing the number of records that will be shown per page) and data management (editing cells inline, deleting rows, and adding new ones).

In the first article about this topic, "JQuery Data Tables in Java Web Applications", I explained how to integrate basic jQuery DataTables functionalities with Java web tables. Enhancing HTML tables with search, ordering, pagination, changing the number of records shown per page, etc., is an easy task if you are using the jQuery DataTables plug-in. Using this plug-in, you can add all the above mentioned functionalities to a plain HTML table placed in the source of a web page, using a single JavaScript call:

$("#myTable").dataTable();

In this example, "myTable" is the ID of an HTML table that will be enhanced using the DataTables plug-in. The only requirement is to have a valid HTML table in the source of the web page.

The goal of this article is to show how you can add additional functionalities using the jQuery DataTables Editable plug-in. Using this plug-in, you can add features for inline editing, deleting rows, and adding new records using the following code:

$("#myTable").dataTable().makeEditable();

This call will add data management functionalities on top of the standard DataTables functionalities. These functionalities handle the complete interaction with the user related to editing cells, deleting and adding rows.
On each action, the plug-in sends an AJAX request to the server-side with information about what should be modified, deleted, or added. If you are using a J2EE application, you will need to implement server-side functionalities that perform ADD, DELETE, and UPDATE actions that will be called by the DataTables Editable plug-in. In this article, you can find detailed instructions about the implementation of these data management functionalities.

This article will explain how you can convert plain tables into fully functional tables with data management functionalities. These functionalities will be added directly on the client side using jQuery. The jQuery plug-in that enables these functionalities will handle the complete interaction with the user and send AJAX requests to the server-side. The advantage of this approach is that the jQuery plug-in is server-side platform independent, i.e., it can be applied regardless of what J2EE technology you are using (servlets, JSP, JSF, Struts, etc.). The only thing that is important is that your server-side component produces a valid HTML table like the one shown in the following example:

<table id="myDataTable">
    <thead>
        <tr>
            <th>ID</th><th>Column 1</th><th>Column 2</th><th>Column 3</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>1</td><td>a</td><td>b</td><td>c</td>
        </tr>
        <tr>
            <td>2</td><td>e</td><td>f</td><td>g</td>
        </tr>
    </tbody>
</table>

If you generate that kind of HTML table on the server-side and send it to the client-side, you can decorate it with the jQuery DataTables and jQuery DataTables Editable plug-ins using the following line of code:

$("#myDataTable").dataTable().makeEditable();

This call will add filtering, ordering, and pagination to your table (this will be done by the .dataTable() call), and on top of these functionalities will be added features that enable the user to add, edit, and delete records in the table.
The DataTables Editable plug-in will handle all client side interaction and send AJAX requests to the server side depending on the user action. An example of an AJAX call sent to the server-side is shown in the following figure:

In the figure, you can see that CompanyAjaxDataSource is called when information should be reloaded into the table (e.g., when the current page or sort order is changed). UpdateData is called when a cell is edited, DeleteData is called when a row is deleted, and AddData is called when a new record is added. You can also see the parameters that are sent to the server-side when the UpdateData AJAX call is performed.

To integrate the plug-in with the server-side code, you will need to create three servlets that will handle add, delete, and update AJAX calls. Examples of the servlets that can handle these AJAX requests are shown in the following listing:

@WebServlet("/AddData")
public class AddData extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
    }
}

@WebServlet("/DeleteData")
public class DeleteData extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
    }
}

@WebServlet("/UpdateData")
public class UpdateData extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
    }
}

You will need to implement these classes/methods in order to make the DataTables Editable plug-in functional. The following sections will describe in detail what should be done to implement each of these features. The starting point is to create a J2EE application that generates a plain HTML table. This example will use a simple JSP page that generates a list of companies in an HTML table.
Then, you will need to apply the DataTables plug-in to this table to enable adding basic data table enhancements. This is the second article in the group that describes how you can use jQuery DataTables to enhance your J2EE applications, and it will not explain how you can add the basic DataTables functionalities. The focus of this article is on the data management functionalities only. If you have not read the previous article on jQuery DataTables and J2EE application integration, I recommend that you read that article first, because it explains how you can integrate the DataTables plug-in with a J2EE application. This article assumes that the code for the integration of the jQuery DataTables plug-in is implemented, and only the code required for integration of the DataTables Editable plug-in is explained here.

As described above, the prerequisite for this code is that you integrate the jQuery DataTables plug-in into the Java application. You can find detailed instructions here: JQuery Data Tables in Java Web Applications, but I will include a short description in this article. Also, in this article, I will describe the major difference between the standard DataTables integration and integration with DataTables in CRUD mode.

If you want to update and delete rows, you need to have some information that tells the plug-in what the ID of a row is. This ID will be sent to the server-side so it can be determined which record should be updated/deleted. The jQuery DataTables plug-in works in two major modes: the client-side processing mode and the server-side processing mode.

In the client-side processing mode, the table is generated on the server side (in some JSP page) and the ID of each record should be placed as the id attribute of the <TR> element.
Part of the JSP code that generates this table is shown in the example below:

<table id="companies">
    <thead>
        <tr>
            <th>Company name</th>
            <th>Address</th>
            <th>Town</th>
        </tr>
    </thead>
    <tbody>
        <% for(Company c: DataRepository.GetCompanies()){ %>
        <tr id="<%=c.getId()%>">
            <td><%=c.getName()%></td>
            <td><%=c.getAddress()%></td>
            <td><%=c.getTown()%></td>
        </tr>
        <% } %>
    </tbody>
</table>

Each time the user edits or deletes some row/cell, the plug-in will take this attribute and send it as an ID. In the server-side processing mode, only a plain table template is returned as HTML, and it is dynamically populated via an AJAX call when the page is loaded. An example of the plain table template is shown in the following listing:

<table id="companies">
    <thead>
        <tr>
            <th>ID</th>
            <th>Company name</th>
            <th>Address</th>
            <th>Town</th>
        </tr>
    </thead>
    <tbody>
    </tbody>
</table>

In this case, nothing is generated in the body of the table and the rows will be dynamically loaded by the DataTables plug-in. In this case, the ID of the record is placed in the first column (this column is usually hidden in the DataTables configuration). In the code example that can be downloaded with this article, you can find a table integrated in the server-side processing mode. For more details about integrating DataTables with a Java web application, see the JQuery Data Tables in Java Web Applications article.
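For reference, the server-side mode described above is typically enabled with an initialization like the following sketch. This exact snippet is not from the article: the option names come from the legacy DataTables 1.x API, and CompanyAjaxDataSource is the data-source servlet named earlier.

```javascript
// Options object for the legacy DataTables 1.x API. bServerSide turns on
// server-side processing, sAjaxSource names the data-source URL, and the
// aoColumnDefs entry hides the first column, which carries the record ID.
var companyTableOptions = {
    "bServerSide": true,
    "sAjaxSource": "CompanyAjaxDataSource",
    "aoColumnDefs": [
        { "bVisible": false, "aTargets": [0] }
    ]
};

// In the page it would be applied as:
// $('#companies').dataTable(companyTableOptions).makeEditable();
```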
The following example shows how you can change the URL of the server-side page that will be called when a cell is updated and how you can use different editors in different columns. $('#myDataTable').dataTable().makeEditable({ "sUpdateURL": "/Company/UpdateCompanyData" '}" } ] }); Each of the elements of the aoColumns array defines an editor that will be used in one of the table columns. In the example above, an empty object is set to the first column, null to the second (to make the second column read-only), and the third column uses a select list for editing. aoColumns null Regardless of what configuration you use, the DataTables Editable plug-in will send the same format of AJAX request to the server-side. The AJAX request sends the following information: id <tr> value columnName rowId columnPosition columnId You will also need a servlet that will accept the request described above, receive information sent from the plug-in, update the actual data, and return a response. Servlet code that is used in this example is shown here: protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { int id = Integer.parseInt(request.getParameter("id")); //int columnId = Integer.parseInt(request.getParameter("columnId")); int columnPosition = Integer.parseInt(request.getParameter("columnPosition")); //int rowId = Integer.parseInt(request.getParameter("rowId")); String value = request.getParameter("value"); //String columnName = request.getParameter("columnName"); for(Company company: DataRepository.GetCompanies()) { if(company.getId()==id) { switch (columnPosition) { case 0: company.setName(value); break; case 1: company.setAddress(value); break; case 2: company.setTown(value); break; default: break; } response.getWriter().print(value); return; } } response.getWriter().print("Error - company cannot be found"); } The servlet reads the ID of the record that should be updated, a column that will determine the property of 
the object that will be updated, and a value that should be set. If nothing is returned, the plug-in will assume that the record was successfully updated on the server-side. Any other message that is returned will be shown as an error message and the updated cell will be reverted to the original value. The DataTables Editable plug-in enables users to select and delete rows in a table. The first thing you need to do is to place a plain HTML button that will be used for deleting rows somewhere in the form. An example of this button is shown in the following listing: <button id="btnDeleteRow">Delete selected company</button> The only thing that is required is to set the ID of the button to the value btnDeleteRow (this ID is used by the DataTables Editable plug-in to add delete handlers to the button). The DataTables Editable plug-in will disable the button initially and when the user select a row in the table, the button will be enabled again. If a row is unselected, the button will be disabled. If the delete button is pressed while a row is selected, the DataTables Editable plug-in will take the ID of the selected row and send an AJAX request to the server side. The AJAX request has a singe parameter, the ID of the record that should be deleted, as shown in the following figure: btnDeleteRow The servlet that handles this delete request is shown in the following listing: protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { int id = Integer.parseInt(request.getParameter("id")); for(Company c: DataRepository.GetCompanies()) { if(c.getId()==id) { DataRepository.GetCompanies().remove(c); return; } } response.getWriter().print("Company cannot be found"); } This servlet takes the ID of the record that should be deleted and removes it from the collection. If nothing is returned, the DataTables Editable plug-in will assume that the delete was successful and the selected row will be removed from the table. 
Any text that is returned by the servlet will be recognized as an error message, shown to the user, and the delete action will be aborted. The DataTables Editable initialization call that is used in the example above does not need to have any parameters. However, if you want, you can customize the behavior of the delete functionality as shown in the following example:

$('#myDataTable').dataTable().makeEditable({
    sDeleteHttpMethod: "GET",
    sDeleteURL: "/Company/DeleteCompany",
    sDeleteRowButtonId: "btnDeleteCompany"
});

This call sets the HTTP method that will be used for the AJAX delete call (e.g., "POST", "GET", "DELETE"), the URL that will be called, and the ID of the button that should be used for delete (this is useful if you do not want to use the default ID for the delete button, or if you have two delete buttons for two tables on the same page, as in the example on the live demo site).

To enable adding new records, you will need to add a few items on the client side. In the DataTables Editable plug-in, new rows are added using a custom dialog that you will need to define. This dialog is shown in the following figure:

The form for adding new records will always be custom because it will depend on the fields you want to enter while you are adding, the type of elements you want to use for entering data (text boxes, select lists, check boxes, etc.), the required fields, and design. Therefore I have left it to you to define what form should be used for adding new records. However, it is not a complex task, because the only things you would need to add are plain HTML buttons for adding new records and a plain empty form that will be used as the template for adding new records.
An example of the plain HTML elements that are added in this example is shown in the following listing:

<button id="btnAddNewRow">Add new company...</button>
<form id="formAddNewRow">
    <!-- input fields for the new record (elided in the original) -->
    <button id="btnAddNewRowOk">Add</button>
    <button id="btnAddNewRowCancel">Cancel</button>
</form>

Similar to the delete functionality, there should be placed a button that will be used for adding new records - this button should have the ID "btnAddNewRow". The form for adding new records should have the ID "formAddNewRow" and should have OK and Cancel buttons with IDs "btnAddNewRowOk" and "btnAddNewRowCancel". The DataTables Editable plug-in will find the add new row button by ID, attach the event handler for opening the form in the dialog, and attach event handlers for submitting and canceling adding new records to the OK and Cancel buttons. You can place any input field in the form - all values that are entered in the form will be posted to the server. The AJAX call that will be sent to the server-side is shown in the figure below:

You can see that all values of the input fields are sent to the server-side. On the server side, you need to have a servlet that handles this AJAX call and adds a new record. The code for the servlet method that handles the AJAX call is shown in the following listing:

protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String name = request.getParameter("name");
    String address = request.getParameter("address");
    String town = request.getParameter("town");
    int country = Integer.parseInt(request.getParameter("country"));
    Company c = new Company(name, address, town, country);
    DataRepository.GetCompanies().add(c);
    response.getWriter().print(c.getId());
}

This code takes the parameters sent in the AJAX call, and creates and adds a new company record.
The method must return the ID of the new record, because the plug-in will set this value as the ID of the added row in the table. When the AJAX call is finished, the DataTables Editable plug-in adds a new row to the table. Values that are added in the table columns are mapped using the rel attributes in the form elements. You can see that the elements id, Name, Address, and Town have rel attributes 0, 1, 2, and 3 - these values will be used to map the new record to the columns in the table.

Similar to the previous cases, the adding behavior can be configured via parameters passed to the makeEditable() function. An example is shown in the following listing:

$('#myDataTable').dataTable().makeEditable({
    /* custom IDs for the add-new-record form and its buttons (elided in the original) */
    sAddURL: "/Company/AddNewCompany",
    sAddHttpMethod: "POST"
});

In this example, we change the default IDs of the form for adding a new record, the Add button, and the OK/Cancel buttons used in the add new record pop-up, the URL that will be called when a new row should be added, and the HTTP method that should be used in the AJAX call.

This article described how you can create a fully featured client side web table that enables the client to perform all important data management actions (creating new records, deleting records, editing cells inline, etc.). You can integrate this client-side code with a Java web application that will accept AJAX calls from the client side. I believe that this article can help you create effective user interfaces using jQuery plug-ins.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
http://www.codeproject.com/Articles/193068/Adding-data-management-CRUD-functionalities-to-the?PageFlow=FixedWidth
import "github.com/kr/fs"

Package fs provides filesystem-related functions.

type FileSystem interface {
    // ReadDir reads the directory named by dirname and returns a
    // list of directory entries.
    ReadDir(dirname string) ([]os.FileInfo, error)

    // Lstat returns a FileInfo describing the named file. If the file is a
    // symbolic link, the returned FileInfo describes the symbolic link. Lstat
    // makes no attempt to follow the link.
    Lstat(name string) (os.FileInfo, error)

    // Join joins any number of path elements into a single path, adding a
    // separator if necessary. The result is Cleaned; in particular, all
    // empty strings are ignored.
    //
    // The separator is FileSystem specific.
    Join(elem ...string) string
}

FileSystem defines the methods of an abstract filesystem.

Walker provides a convenient interface for iterating over the descendants of a filesystem path. Successive calls to the Step method will step through each file or directory in the tree, including the root. The files are walked in lexical order, which makes the output deterministic but means that for very large directories Walker can be inefficient. Walker does not follow symbolic links.

Walk returns a new Walker rooted at root.

func WalkFS(root string, fs FileSystem) *Walker

WalkFS returns a new Walker rooted at root on the FileSystem fs.

Err returns the error, if any, for the most recent attempt by Step to visit a file or directory. If a directory has an error, w will not descend into that directory.

Path returns the path to the most recent file or directory visited by a call to Step. It contains the argument to Walk as a prefix; that is, if Walk is called with "dir", which is a directory containing the file "a", Path will return "dir/a".

SkipDir causes the currently visited directory to be skipped. If w is not on a directory, SkipDir has no effect.

Stat returns info for the most recent file or directory visited by a call to Step.
Step advances the Walker to the next file or directory, which will then be available through the Path, Stat, and Err methods. It returns false when the walk stops at the end of the tree. Package fs imports 3 packages (graph) and is imported by 144 packages. Updated 2020-07-09. Refresh now. Tools for package owners.
On 14/10/2016 15:44, Anil Madhavapeddy wrote:
> On 14 Oct.
>>
>> Initially I kept minting my own infix operators (like `>>*=` `>>|=`) but it
>> quickly got confusing for me and I probably didn't name them uniformly
>> across projects. Recently I've stuck to `>>=` and have put different ones in
>> different modules like this:
>>
>> ```
>> module LwtResult = struct
>>   let (>>=) m f =
>> end
>> ```
>>
>> and now my code looks like (sometimes with extra newlines, but I don't have
>> strong opinions about that)
>> ```
>> let open Lwt.Infix in
>> f () >>= fun () ->
>> let open LwtResult in
>> g () >>= fun () ->
>> Lwt.return (Result.Ok ()) (* possibly should define `return` in the module
>> too *)
>> ```
>> Part of my rationale for using the same bind was a hope that one day modular
>> implicits might automatically choose the right version and I could lose the
>> `let open`, but maybe this isn't possible?

I'd find the code without the `let open __ in` even more confusing (and sometimes your `let open __ in` is already many lines away from its usage).

> Good point. To go over to the "overload the operators" camp despite my
> previous mail, I did remember that ppx_let supports arbitrary monadic and
> applicative syntax:

As said in my earlier mail, I find local `let open __ in` rather hard to read.

In the end, how many different binds do we need? I'd think two or three: the Lwt one (for effectful non-error-raising code), the rresult one (for pure code), and the combined one

    >>=? : ('a, 'b) result Lwt.t -> ('a -> ('c, 'b) result Lwt.t) -> ('c, 'b) result Lwt.t

And there will likely be combinations of >>= and >>=?

hann.
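For readers following along, the combined bind with the signature given above can be written out as follows. This is one plausible definition, sketched under the assumption that the Lwt library is available; it is not code from the thread itself.

```ocaml
(* Hedged sketch: sequence an Lwt computation that yields a result,
   short-circuiting on Error. Assumes Lwt; illustrative only. *)
let ( >>=? )
    (m : ('a, 'b) result Lwt.t)
    (f : 'a -> ('c, 'b) result Lwt.t) : ('c, 'b) result Lwt.t =
  Lwt.bind m (function
    | Ok x -> f x
    | Error e -> Lwt.return (Error e))

(* Usage: n is bound to 1 only if the first step succeeded. *)
let _example () =
  Lwt.return (Ok 1) >>=? fun n ->
  Lwt.return (Ok (n + 1))
```

This is exactly the error-propagating composition the thread describes: plain `>>=` sequences the Lwt effects, while `>>=?` additionally threads the `result` through.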
From the IBM Canada web site ...

General information
  Machine type: 5160
  Announce date: March 8, 1983
  List price: $7545.00 (5160-087)

Excerpt from the original announcement letter:

"The XT System Unit is the heart of the computer system. The XT System Unit and its companion keyboard are rugged and easy-to-use, and control a variety of input/output devices. Each XT System Unit comes with 128KB (131,072 bytes) of memory, a dual-sided (368,640 bytes) Diskette Drive, and a 10MB (10,240,000 bytes) Fixed Disk Drive. Asynchronous Communications (Async) is also standard on each XT System Unit. This provides an EIA-RS232C interface, which has a variety of uses. Based on a high performance Intel 8088 microprocessor, each XT computer includes an enhanced version of the popular BASIC language and provisions for attachment of a video display. The system can be further expanded through options that are customer installed."

System Characteristics

Physical description
  Power supply: 130W
  Weight: 12.7 - 13.9 kg
  Dimensions (HxWxD): 147 x 498 x 410 mm

Mass storage
  Drive bays: 2 full height
  Floppy disk
    Size: 5.25 inch full height
    Capacity: 360KB
  Hard disk
    Size: 5.25 inch full height
    Capacity: 10MB or 20MB
    Access time: >85ms
    Interface: ST-506

Display
  Size: 11.5 inch (diagonal)
  Display type: Optional IBM Monochrome Display (5151) - TTL green phosphor screen
  Graphics modes supported: MDA (text, 25 rows x 80 characters)

Keyboard
  Type: 83 key or 101 key enhanced

Software
  Operating system: IBM PC DOS Version 2.0 and later for models 68, 78, 87, and 86, and Version 3.1 or later for models 267, 277, 88, 268, 278, and 89

And XT stood for eXtended Technology.

The XT was Subaru's first attempt at producing a sports car. They launched this arrow-shaped 4 wheel drive beast in 1985. The most notable thing about the XT was that it broke the record for air resistance with a .29 coefficient of drag. This record may still be standing today (I can't find any records of it being broken).
The XT featured 4 wheel drive (later changed to all wheel drive), 4 wheel independent adjustable suspension (this used airbags like big diesel trucks have), and a turbocharger. The XT was eventually replaced by the Subaru SVX. But the SVX was in a different class: the XT was a Camaro fighter, while the SVX competed on Corvette level in the automotive arena.

Xt, or "X Toolkit Intrinsics", assists in the construction and use of widget sets under the X Window System. Xt doesn't directly provide any widget functionality; instead, it provides some depth of abstraction above Xlib, and above C itself, for creating and using high-level widgets. Xt widgets are written using Xlib with complicated Xt functions and data structures, and can be used with Xt calls or wrapper functions (a la Motif).

There are a few major Xt widget sets still in major use, namely Athena (Xaw) and Motif, and a few others that are obsolete. Most modern X widget sets are not made with Xt, partly because other languages and styles have proved to be a better foundation, especially as regards object orientation. GTK+, for instance, is built on its own Xlib wrapper called GDK. In fact, the abstractive parts of Xt are analogous to those parts of GDK and GObject. Qt uses C++ classes and its extension of C++ for similar purposes. (As a side note, GTK+ and Qt completely hide Xlib; Xt does not.)

The original generation of X applications and utilities were created using the Xt widget sets Athena and Motif, and occasionally something obscure like OLIT. Some of these applications, like xterm and xclock, are still in use today. Motif and the CDE have historically been the X11 environment of choice for commercial Unix, though they are slowly being edged out by the newer GNOME, KDE, and others, and are rarely used on modern OSS systems.

The Xt specification states quite clearly that it is supposed to be somewhat object-oriented.
Most OO nuts would balk at this, but it is true; Xt allows definition of classes (ostensibly all widgets), instantiation of classes as widgets, inheritance, encapsulation, and polymorphism. That said, it does not have the clarity or ease of use one would expect. Thus, it is Object Orientation proper and not the proverbial language-specific easy-as-pie OO that was supposed to be All That Is Programming by now 10 years ago. A diagram of that sentence is left to the reader as an exercise, and so is trying to figure out if I'm advocating C or eastern religion.

The definitive source of information about Xt is the specification, available in PostScript with the X11R6 distribution or available on the web (see below). There are a great number of books, mostly from the late '80s, with information on Xlib, Xt and Motif, most notably the Animal Books. It should be noted that the X manuals were some of O'Reilly's earliest publications.

The Athena widget set is included in the X Window System distribution and is used to build the popular client programs from that package. Xterm, xedit, xman, xload and so forth are all programmed with Xt using Athena. Athena is not visually impressive, but it is free, standard and well documented. I've written my examples in Athena because I like it and it mainly uses Xt calls instead of wrappers, and because you are guaranteed to have it if you have X.

Here's an example program for your perusal. It's not really a tutorial, though I've included a simpler "Hello, world!" program below. Browse my Athena writeup for a slower intro. Real knowledge of the C language is expected (pointers, function pointers, struct pointers...).

This program will let you enter text into a text box, and echo to standard output when you press the button. This program is not very useful, as it is just a simplified 'echo' with an unwieldy X interface. (It could be used as a hideous text editor via redirection, though.)
If I added CLIPBOARD support to it, it could pass as a deranged xclipboard.

#include <stdio.h>
#include <stdlib.h>
#include <X11/Intrinsic.h>
#include <X11/Shell.h>
#include <X11/StringDefs.h>
#include <X11/Xaw/Command.h>
#include <X11/Xaw/Box.h>
#include <X11/Xaw/AsciiText.h>

// Truly, you need a great number of include files for any Athena
// program of size. The three under X11 are for Xt, and the three
// under X11/Xaw are each for an Athena widget.

Widget window, box, command, clear_command, quit_command, ascii;
/* Each top level window, widget, and widget container is a 'Widget' */

XtAppContext app; /* And one of these for each application... */

Arg args[2];
int nargs = 0;
/* These two variables will be used for passing Xt arguments the 'hard' way */

String app_resources[] = {
    "*command.Label: Write text to stdout",
    "*clear_command.Label: Clear",
    "*quit_command.Label: Quit",
    "*window.Title: Hello, world in Xt/Athena",
    "*window.Geometry: 300x200+10+10",
    "*ascii.Width: 280",
    "*ascii.Height: 150",
    NULL
};
/* These are the 'fallback resources'; it's a naive way of initially
   setting some widget properties. */

void print_text(Widget w, XtPointer data, XtPointer call_data)
{
    /* All functions used as callbacks have this signature. If you need to
       pass them special info, you have to do it through 'data', which is
       really just a (void *). It's typical to have a few well named globals
       so that this is trivial. */
    String string = NULL;

    /* This is the 'interactive' way to set and get resources.
     * We could set more than one at a time this way.
     * In this case, we only need to set one value (to get).
     * (The easy way is XtVaGetValues, which you can look up yourself) */
    nargs = 0;
    XtSetArg(args[nargs], XtNstring, &string); nargs++;
    XtGetValues(ascii, args, nargs);
    printf("%s\n", string);
}

void clear_text(Widget w, XtPointer data, XtPointer call_data)
{
    /* This is the aforementioned 'easy way'.
     * It can't be used to give a dynamically varying number of
     * arguments, since the arguments are given at compile time.
     * Note that XtNstring takes the string itself, not a pointer to it;
     * setting it to the empty string clears the text box. */
    XtVaSetValues(ascii, XtNstring, "", NULL);
}

void quit_app(Widget w, XtPointer data, XtPointer call_data)
{
    /* The manual gives this as the "right way" to end the application.
     * Just exiting will work, but that may cause problems on some systems. */
    XtUnmapWidget(window);
    XtDestroyApplicationContext(app);
    exit(0);
}

int main(int argc, char **argv)
{
    window = XtOpenApplication(&app, "window",
        // AppContext represents the whole program, and
        // we need a name for the top level widget
        NULL, 0,      // XrmOptions; not used in this example
        &argc, argv,  // This allows things like the -geometry or -display options
        app_resources,            // Pass the fallback resource array
        sessionShellWidgetClass,  // Indicates we're making
                                  // a normal, top level window.
        NULL, 0);     // For extra options set on the top level widget

    box = XtCreateManagedWidget(
        "box",           // These names are used for setting resources via
                         // app_resources, et cetera
        boxWidgetClass,  // The 'type' of the widget; this symbol is
                         // from the widget header file
        window,          // The parent of this widget
        NULL, 0);        // No arguments being passed, so these are empty.

    command = XtCreateManagedWidget("command", commandWidgetClass,
                                    box, NULL, 0);
    clear_command = XtCreateManagedWidget("clear_command", commandWidgetClass,
                                          box, NULL, 0);
    quit_command = XtCreateManagedWidget("quit_command", commandWidgetClass,
                                         box, NULL, 0);

    nargs = 0;
    XtSetArg(args[nargs], XtNeditType, XawtextEdit); nargs++;
    ascii = XtCreateManagedWidget("ascii", asciiTextWidgetClass,
                                  box, args, nargs);
    // Set some initial values on this widget

    XtAddCallback(
        command,      // Widget whose callback we want
        XtNcallback,  // Just a plain callback; there are others
        print_text,   // Function to call
        NULL);        // Data pointer to send the function; in our case, none
    XtAddCallback(clear_command, XtNcallback, clear_text, NULL);
    XtAddCallback(quit_command, XtNcallback, quit_app, NULL);

    XtRealizeWidget(window);  // Maps window and contained widgets
    XtAppMainLoop(app);       // Run the event loop

    return (0);
}

/* Save this as xt-example.c and try
       gcc -o xt-ex xt-example.c -L/usr/X11R6/lib -lXaw -lXt -lX11
   for compiling this. */

This program is not quite trivial (by tutorial standards), but not useful. It separates into initializing the toolkit, creating basic elements, and running the event loop. Each of the buttons calls a function, and the text box takes care of itself. The non-trivial parts of the program mostly have to do with figuring out which widget resource does what, and how to use them.

If you have trouble understanding this program, here is a "Hello, world!" to get you started a little slower. Understanding of the rest of the program will probably have to be derived from reading the manual and modifying the source yourself.

#include <X11/Intrinsic.h>
#include <X11/Xaw/Label.h>
#include <X11/Shell.h>

// A handy array for setting some properties initially.
// Forget the NULL and you'll segfault, probably.
String app_resources[] = {
    "*label.Label: Hello, world!",
    NULL
};

int main(int argc, char **argv)
{
    Widget top_level, label;
    XtAppContext app;

    top_level = XtOpenApplication(&app, "Hello, world!",
                                  NULL, 0, &argc, argv,
                                  app_resources,
                                  sessionShellWidgetClass,
                                  NULL, 0);
    // Consider most of this magic.
    // Note app_resources and &app.

    label = XtCreateManagedWidget("label", labelWidgetClass,
                                  top_level, NULL, 0);
    // No "container widget", since we just need the one

    XtRealizeWidget(top_level);
    XtAppMainLoop(app);
    return 0;
}

In contrast to a great many widget sets, Xt handles all widgets using the same set of functions. Widget sets may (and do) implement wrapper functions and special widget functionality, but they are in many cases fungible.

The Motif library was for a long time the de facto standard in Unix GUIs. Motif, owned by The Open Group, is distributed with most Unices, and there are open source alternatives such as LessTif; still, it is not quite an open technology. The Motif widget set is more advanced, both technically and visually, than the Athena widget set. Motif is widely used for building legacy Unix applications, but some popular OSS programs like Netscape also use(d) it. Motif, or Xm, is undoubtedly 'prettier' and more sophisticated than Athena. It is also somewhat abstracted away from Xt using wrapper functions, so Motif code is not as ideal for demonstrating Xt.

Certainly, a really good writeup would have an example of how to create a widget in Xt. Unfortunately, even simple widgets like the label require a gigantic amount of boilerplate work just to get started, so I won't bother. If you would like to see it for yourself, browse to xc/lib/Xaw in the X Window System distribution source.

Here's a rundown of the basic things you need to construct an Xt program, emphasis on Athena.
The first is a list of common data structures and data, the second is a list of functions, and last (which you may want to read first) is a description of passing arguments, both the easy way and the hard way.

XtOpenApplication
  This is the function you use to start any Xt app. It takes a whole list of things and returns the top-level widget. You can also start the app by calling four other functions manually, to do different tasks. (Most old code contains the function XtAppInitialize, which is 'deprecated' in the spec.) Arguments in order:

XtCreateManagedWidget
  The canonical way of making a widget. You could theoretically make a widget with XtCreateWidget and then manage it later, but this is the easy way.

XtSetValues, XtGetValues
  These are pretty simple. They are used for typical value setting and getting, in the same fashion.

XtAddCallback
  The function that holds the whole thing together. XtAddCallback can actually be done with XtSetValues, but you don't want to do it that way, probably.

More to come. Work-in-progress. Sorry.

You should learn early on how to set and get widget resources, using both the XtSetArg macro with initializers and the set/get functions, as well as with the XtVa* functions. The fallback resources are also important.

The hard way is to pass the normal initializers (XtOpenApplication, XtCreateManagedWidget) a special array, built up with the XtSetArg macro. In contrast to regular argument passing in C functions, this is very laborious and inefficient, but it has advantages, such as being able to set or get a dynamic, arbitrary number of arguments with a single function call (since XtSetArg is a macro).
Abstracting this process, though, is not a bad idea, as in the example function:

/* Example function for resizing a widget */
void resize(Widget my_widget, int width, int height)
{
    Arg arg_list[10];  /* Allocate more than we need, in this case */
    int arg_count = 0;

    if (width > 0) {
        XtSetArg(arg_list[arg_count], XtNwidth, width);
        arg_count++;
    }
    if (height > 0) {
        XtSetArg(arg_list[arg_count], XtNheight, height);
        arg_count++;
    }
    if (arg_count > 0)
        XtSetValues(my_widget, arg_list, arg_count);
}
The QXmlQuery class performs XQueries on XML data, or on non-XML data modeled to look like XML.

#include <QXmlQuery>

Note: All functions in this class are reentrant.

This class was introduced in Qt 4.4.

The QXmlQuery class performs XQueries on XML data, or on non-XML data modeled to look like XML. The QXmlQuery class compiles and executes queries written in the XQuery language. QXmlQuery is typically used to query XML data, but it can also query non-XML data that has been modeled to look like XML.

Using QXmlQuery to query XML data, as in the snippet below, is simple because it can use the built-in XML data model as its delegate to the underlying query engine for traversing the data. The built-in data model is specified in XQuery 1.0 and XPath 2.0 Data Model.

    QXmlQuery query;
    query.setQuery("doc('index.html')/html/body/p[1]");
    QXmlSerializer serializer(query, myOutputDevice);
    query.evaluateTo(&serializer);

The example uses QXmlQuery to match the first paragraph of an XML document and then output the result to a device as XML.

Using QXmlQuery to query non-XML data requires writing a subclass of QAbstractXmlNodeModel to use as a replacement for the built-in XML data model. The custom data model will be able to traverse the non-XML data as required by the QAbstractXmlNodeModel interface. An instance of this custom data model then becomes the delegate used by the query engine to traverse the non-XML data. For an example of how to use QXmlQuery to query non-XML data, see the documentation for QAbstractXmlNodeModel.

To run a query set up with QXmlQuery, call one of the evaluation functions. The XPath language is a subset of the XQuery language, so running an XPath expression is the same as running an XQuery query. Pass the XPath expression to QXmlQuery using setQuery().
Running an XSLT stylesheet is like running an XQuery, except that when you construct your QXmlQuery, you must pass QXmlQuery::XSLT20 to tell QXmlQuery to interpret whatever it gets from setQuery() as an XSLT stylesheet instead of as an XQuery. You must also set the input document by calling setFocus().

    QXmlQuery query(QXmlQuery::XSLT20);
    query.setFocus(QUrl("myInput.xml"));
    query.setQuery(QUrl("myStylesheet.xsl"));
    query.evaluateTo(out);

Note: Currently, setFocus() must be called before setQuery() when using XSLT.

Another way to run an XSLT stylesheet is to use the xmlpatterns command line utility.

    xmlpatterns myStylesheet.xsl myInput.xml

Note: For the current release, XSLT support should be considered experimental. See the section XSLT conformance for details.

Stylesheet parameters are bound using bindVariable().

When a query is run on XML data, as in the snippet above, the doc() function returns the node in the built-in data model where the query evaluation will begin. But when a query is run on a custom node model containing non-XML data, one of the bindVariable() functions must be called to bind a variable name to a starting node in the custom model. A $variable reference is used in the XQuery text to access the starting node in the custom model. It is not necessary to declare the variable name external in the query. See the example in the documentation for QAbstractXmlNodeModel.

QXmlQuery is reentrant but not thread-safe. It is safe to use the QXmlQuery copy constructor to create a copy of a query and run the same query multiple times. Behind the scenes, QXmlQuery will reuse resources such as opened files and compiled queries to the extent possible. But it is not safe to use the same instance of QXmlQuery in multiple threads.

Errors can occur during query evaluation. Examples include type errors and file loading errors.
When an error occurs:

When a query runs, it parses documents, allocating internal data structures to hold them, and it may load other resources over the network. It reuses these allocated resources when possible, to avoid having to reload and reparse them.

When setQuery() is called, the query text is compiled into an internal data structure and optimized. The optimized form can then be reused for multiple evaluations of the query. Since the compile-and-optimize process can be expensive, repeating it for the same query should be avoided by using a separate instance of QXmlQuery for each query text.

Once a document has been parsed, its internal representation is maintained in the QXmlQuery instance and shared among multiple QXmlQuery instances.

An instance of QCoreApplication must exist before QXmlQuery can be used. When QXmlQuery accesses resources (e.g., calling fn:doc() to load a file, or accessing a device via a bound variable), the event loop is used, which means events will be processed. To avoid processing events when QXmlQuery accesses resources, create your QXmlQuery instance in a separate thread.

Specifies whether you want QXmlQuery to interpret the input to setQuery() as an XQuery or as an XSLT stylesheet.

This enum was introduced or modified in Qt 4.5.

Constructs an invalid, empty query that cannot be used until setQuery() is called.

Note: This constructor must not be used if you intend to use this QXmlQuery to process XSL-T stylesheets. The other constructor must be used in that case.

Constructs a QXmlQuery that is a copy of other. The new instance will share resources with the existing query to the extent possible.

Constructs a query that will use np as its name pool. The query cannot be evaluated until setQuery() has been called.

Constructs a query that will be used to run XQueries or XSL-T stylesheets, depending on the value of queryLanguage. It will use np as its name pool.
Note: If your QXmlQuery will process XSL-T stylesheets, this constructor must be used. The default constructor can only create instances of QXmlQuery for running XQueries.

Note: The XSL-T support in this release is considered experimental. See the XSLT conformance for details.

This function was introduced in Qt 4.5.

See also queryLanguage().

Binds the variable name to the value so that $name can be used from within the query to refer to the value.

name must not be null. name.isNull() must return false. If name has already been bound by a previous bindVariable() call, its previous binding will be overridden.

If value is null so that value.isNull() returns true, and name already has a binding, the effect is to remove the existing binding for name.

To bind a value of type QString or QUrl, wrap the value in a QVariant such that QXmlItem's QVariant constructor is called.

All strings processed by the query must be valid XQuery strings, which means they must contain only XML 1.0 characters. However, this requirement is not checked. If the query processes an invalid string, the behavior is undefined.

See also QVariant::isValid(), How QVariant maps to XQuery's Data Model, and QXmlItem::isNull().

Binds the variable name to the device so that $name can be used from within the query to refer to the device. The QIODevice device is exposed to the query as a URI of type xs:anyURI, which can be passed to the fn:doc() function to be read. E.g., this function can be used to pass an XML document in memory to fn:doc.

    QByteArray myDocument;
    QBuffer buffer(&myDocument); // This is a QIODevice.
    buffer.open(QIODevice::ReadOnly);

    QXmlQuery query;
    query.bindVariable("myDocument", &buffer);
    query.setQuery("doc($myDocument)");

The caller must ensure that device has been opened with at least QIODevice::ReadOnly prior to this binding. Otherwise, behavior is undefined.

If the query will access an XML document contained in a QString, use a QBuffer as shown in the following snippet.
Suppose myQString contains <document>content</document>

    QBuffer device;
    device.setData(myQString.toUtf8());
    device.open(QIODevice::ReadOnly);

    QXmlQuery query;
    query.bindVariable("inputDocument", &device);
    query.setQuery("doc($inputDocument)/query[theDocument]");

name must not be null. name.isNull() must return false. If name has already been bound, its previous binding will be overridden. The URI that name evaluates to is arbitrary and may change.

If the type of the variable binding changes (e.g., if a previous binding by the same name was a QVariant, or if there was no previous binding), isValid() will return false, and recompilation of the query text is required. To recompile the query, call setQuery(). For this reason, bindVariable() should be called before setQuery(), if possible.

Note: device must not be deleted while this QXmlQuery exists.

Binds the result of the query query, to a variable by name name. Evaluation of query will be commenced when this function is called. If query is invalid, behavior is undefined. query will be copied.

This function was introduced in Qt 4.5.

This is an overloaded function. This function constructs a QXmlName from localName using the query's namespace. The function then behaves as the overloaded function. It is equivalent to the following snippet.

    QXmlNamePool namePool(query.namePool());
    query.bindVariable(QXmlName(namePool, localName), value);

This is an overloaded function. If localName is a valid NCName, this function is equivalent to the following snippet.

    QXmlNamePool namePool(query.namePool());
    query.bindVariable(QXmlName(namePool, localName), device);

A QXmlName is constructed from localName, and is passed to the appropriate overload along with device.

See also QXmlName::isNCName().

This is an overloaded function. Has the same behavior and effects as the function being overloaded, but takes the variable name localName as a QString. query is used as in the overloaded function.

This function was introduced in Qt 4.5.
Starts the evaluation and makes it available in result. If result is null, the behavior is undefined. The evaluation takes place incrementally (lazy evaluation), as the caller uses QXmlResultItems::next() to get the next result.

See also QXmlResultItems::next().

Evaluates this query and sends the result as a sequence of callbacks to the receiver callback. QXmlQuery does not take ownership of callback.

If an error occurs during the evaluation, error messages are sent to messageHandler() and false is returned.

If this query is invalid, false is returned and the behavior is undefined. If callback is null, behavior is undefined.

See also QAbstractXmlReceiver and isValid().

Attempts to evaluate the query and returns the results in the target string list.

If the query is valid and the evaluation succeeds, true is returned. Otherwise, false is returned and the contents of target are undefined.

The query must evaluate to a sequence of xs:string values. If the query does not evaluate to a sequence of strings, the values can often be converted by adding a call to string() at the end of the XQuery.

If target is null, the behavior is undefined.

Evaluates the query, and serializes the output as XML to output.

If an error occurs during the evaluation, error messages are sent to messageHandler(), the content of output is undefined and false is returned; otherwise true is returned.

If output is null, behavior is undefined. QXmlQuery does not take ownership of output.

Internally, the class QXmlFormatter is used for this.

This function was introduced in Qt 4.5.

Evaluates the query or stylesheet, and writes the output to target. QXmlSerializer is used to write the output to target. In a future release, it is expected that this function will be changed to respect serialization options set in the stylesheet.

If an error occurs during the evaluation, error messages are sent to messageHandler() and false is returned.
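To illustrate the string-list overload described above, here is a hedged sketch of a minimal program that evaluates a query to a QStringList. It assumes QtXmlPatterns is available; the query text is illustrative, and string() is used so the result is a sequence of xs:string values, as the documentation requires.

    #include <QCoreApplication>
    #include <QStringList>
    #include <QXmlQuery>
    #include <QtDebug>

    int main(int argc, char *argv[])
    {
        // QXmlQuery requires an existing QCoreApplication instance.
        QCoreApplication app(argc, argv);

        QXmlQuery query;
        // string() converts each item to xs:string, which
        // evaluateTo(QStringList*) requires.
        query.setQuery("for $i in (1, 2, 3) return string($i)");

        QStringList results;
        if (query.evaluateTo(&results))
            qDebug() << results.join(",");
        return 0;
    }

On success, results holds one QString per item in the query's result sequence.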
If target is null, or is not opened in at least QIODevice::WriteOnly mode, the behavior is undefined. QXmlQuery does not take ownership of target.

This is an overloaded function.

This function was introduced in Qt 4.5.

Returns the name of the XSL-T stylesheet template that the processor will call first when running an XSL-T stylesheet. This function only applies when using QXmlQuery to process XSL-T stylesheets. By default, no initial template is set. In that case, a default constructed QXmlName is returned.

This function was introduced in Qt 4.5.

See also setInitialTemplateName().

Returns true if this query is valid. Examples of invalid queries are ones that contain syntax errors or that have not had setQuery() called for them yet.

Returns the message handler that handles compile and runtime messages for this QXmlQuery.

See also setMessageHandler().

Returns the name pool used by this QXmlQuery for constructing names. There is no setter for the name pool, because mixing name pools causes errors due to name confusion.

Returns the network manager, or 0 if it has not been set.

This function was introduced in Qt 4.5.

See also setNetworkAccessManager().

Returns a value indicating what this QXmlQuery is being used for. The default is QXmlQuery::XQuery10, which means the QXmlQuery is being used for running XQuery and XPath queries. QXmlQuery::XSLT20 can also be returned, which indicates the QXmlQuery is for running XSL-T stylesheets.

This function was introduced in Qt 4.5.

Sets the focus to item. The focus is the set of items that the context item expression and path expressions navigate from. For example, in the expression p/span, the element that p evaluates to is the focus for the following expression, span.

The focus can be accessed using the context item expression, i.e., dot (".").

By default, the focus is not set and is undefined. It will therefore result in a dynamic error, XPDY0002, if an attempt is made to access the focus.
The focus must be set before the query is set with setQuery(). There is no behavior defined for setting an item which is null.

This is an overloaded function.

Sets the focus to be the document located at documentURI and returns true. If documentURI cannot be loaded, false is returned. It is undefined at what time the document may be loaded. When loading the document, the message handler and URI resolver set on this QXmlQuery are used. If documentURI is empty or is not a valid URI, the behavior of this function is undefined.

This function was introduced in Qt 4.5.

Sets the focus to be the document read from the QIODevice and returns true. If document cannot be loaded, false is returned. QXmlQuery does not take ownership of document. The user guarantees that a document is available from the document device and that the document is not empty. The device must be opened in at least read-only mode. document must stay in scope as long as the current query is active.

This is an overloaded function.

This function was introduced in Qt 4.5.

This function behaves identically to calling the setFocus() overload with a QIODevice whose content is focus encoded as UTF-8. That is, focus is treated as if it contained an XML document. Returns the same result as the overload.

This is an overloaded function.

This function was introduced in Qt 4.6.

Sets the name of the initial template. If the stylesheet has no template named name, the processor will use the standard order of template invocation.

This function was introduced in Qt 4.5.

See also initialTemplateName().

This is an overloaded function.

Sets the name of the initial template to localName, which must be a valid local name. If localName is not a valid local name, the effect is undefined. If the stylesheet has no template named localName, the processor will use the standard order of template invocation.

This function was introduced in Qt 4.5.

See also initialTemplateName().
Changes the message handler for this QXmlQuery to aMessageHandler. The query sends all compile and runtime messages to this message handler. QXmlQuery does not take ownership of aMessageHandler. Normally, the default message handler is sufficient. It writes compile and runtime messages to stderr. The default message handler includes color codes if stderr can render colors. Note that changing the message handler after the query has been compiled has no effect, i.e. the query uses the same message handler at runtime that it uses at compile time. When QXmlQuery calls QAbstractMessageHandler::message(), the arguments are as follows: See also messageHandler().

Sets the network manager to newManager. QXmlQuery does not take ownership of newManager. This function was introduced in Qt 4.5. See also networkAccessManager().

Sets this QXmlQuery to an XQuery read from the sourceCode device. The device must have been opened with at least QIODevice::ReadOnly. documentURI represents the query obtained from the sourceCode device. It is the base URI of the static context, as defined in the XQuery language. It is used internally to resolve relative URIs that appear in the query, and for message reporting. documentURI can be empty. If it is empty, the application file path is used. If it is not empty, it may be either relative or absolute. If it is relative, it is resolved itself against the application file path before it is used. If documentURI is neither a valid URI nor empty, the result is undefined.

If the query contains a static error (e.g. syntax error), an error message is sent to the messageHandler(), and isValid() will return false. Variables must be bound before setQuery() is called. The encoding of the XQuery in sourceCode is detected internally using the rules for setting and detecting encoding of XQuery files, which are explained in the XQuery language. If sourceCode is null or not readable, or if documentURI is not a valid URI, behavior is undefined.
Sets this QXmlQuery to the XQuery read from the queryURI. Use isValid() after calling this function. If an error occurred reading queryURI, e.g., the query does not exist, cannot be read, or is invalid, isValid() will return false. The supported URI schemes are the same as those in the XQuery function fn:doc, except that queryURI can be the object of a variable binding.

baseURI is the Base URI of the static context, as defined in the XQuery language. It is used internally to resolve relative URIs that appear in the query, and for message reporting. If baseURI is empty, queryURI is used. Otherwise, baseURI is used, and it is resolved against the application file path if it is relative. If queryURI is empty or invalid, or if baseURI is invalid, the behavior of this function is undefined.

This is an overloaded function. The behavior and requirements of this function are the same as for setQuery(QIODevice*, const QUrl&), after the XQuery has been read from the IO device into a string. Because sourceCode is already a Unicode string, detection of its encoding is unnecessary.

Sets the URI resolver to resolver. QXmlQuery does not take ownership of resolver. See also uriResolver().

Returns the query's URI resolver. If no URI resolver has been set, QtXmlPatterns will use the URIs in queries as they are. The URI resolver provides a level of abstraction, or polymorphic URIs. A resolver can rewrite logical URIs to physical ones, or it can translate obsolete or invalid URIs to valid ones. QtXmlPatterns calls the URI resolver for all URIs it encounters, except for namespaces. Specifically, all builtin functions that deal with URIs (fn:doc(), and fn:doc-available()). In the case of fn:doc(), the absolute URI is the base URI in the static context (which most likely is the location of the query). Rather than use the URI the user specified, the return value of QAbstractUriResolver::resolve() will be used.
When QtXmlPatterns calls QAbstractUriResolver::resolve() the absolute URI is the URI mandated by the XQuery language, and the relative URI is the URI specified by the user. See also setUriResolver().

Assigns other to this QXmlQuery instance.
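The base-URI fallback rule that setQuery() applies — an empty baseURI falls back to queryURI, and a relative baseURI is first resolved against the application file path — can be sketched outside of Qt. The following is an illustrative Python sketch of that rule only, not Qt code; the function name and the scheme check are assumptions made for the example.

```python
from urllib.parse import urljoin, urlparse

def effective_base_uri(query_uri, base_uri, app_file_path):
    """Sketch of the setQuery() base-URI fallback rule described above."""
    if not base_uri:
        # An empty baseURI means queryURI is used instead.
        return query_uri
    if not urlparse(base_uri).scheme:
        # A relative baseURI is resolved against the application file path.
        return urljoin(app_file_path, base_uri)
    return base_uri

print(effective_base_uri("file:///q.xq", "", "file:///usr/bin/app"))
# file:///q.xq
```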
https://doc-snapshots.qt.io/4.8/qxmlquery.html
You do not need to be following along our Tkinter series to participate in this tutorial. If you are here purely to learn Object Oriented Programming, that is fine.

With our program in mind, it has become time that we consider Object Oriented Programming (OOP) to achieve our back-end goals. Up until this point, my tutorials have excluded object oriented programming for the most part. It's just not generally my style, and oftentimes it complicates the learning of the specific topic by adding another layer of complexity in the form of obfuscation to the task.

For those of you who are unfamiliar with, or confused by, what object oriented programming actually is, you are not alone. Even people who use it do not usually fully understand the inner workings sometimes. I do not consider myself an OOP expert, especially since I rarely use it, but I know enough to know it will help us out here in the long run with our Tkinter application, and I can share what I do know with you fine folks!

So, Object Oriented Programming is a programming paradigm, or better put: a structure. That's it. It's just a structure with which we build a program. Python is often treated purely as a scripting language, but it is fundamentally an OOP language, actually. With OOP, you basically state the structure of your program, and your classes quite literally return "objects," which is why it is called "object" oriented. The objects serve as "instances" of your classes. That's about all I want to say on the matter before we just jump into an example. I think a practical example goes a long way in helping to learn, so let's get into it! I will share the code in chunks, explaining each step of the way. If you get lost, I will post the "full" version of this series' code at the very bottom, so fear not!

    import tkinter as tk

    class SeaofBTCapp(tk.Tk):

We begin with a simple import of tkinter as tk. If you are using Python 2, keep in mind that tkinter is called Tkinter.
After that, we define our SeaofBTCapp class, and we pass tk.Tk as what looks to be what we know as a parameter. First off, there is no need for any parentheses at all with classes. I could do class SeaofBTCapp: and that would not be a syntax error. So what is tk.Tk then? When you see something in parentheses like this, what it means is that class is inheriting from another class. In our case, we're inheriting everything from the tk.Tk class. Think of it kind of like how you import modules to use them. That's basically what's happening when you inherit, only at the local class level.

Now, within our class, we have:

    def __init__(self, *args, **kwargs):

While not required, you will often see the first "function" in classes as __init__. First off, these are not functions, even though they act just like functions. They are actually called "methods." __init__ is a very special method, in that this method always runs. Init is short for initialize, and whatever you put in here is going to always run whenever the class is called upon. The other methods will only run when you specifically call upon them to run.

Think of this like the various startup processes that run on your computer when you have booted up. You want some things to always start when your computer turns on. You want your mouse drivers to come online, you need your keyboard to work, you want your graphics drivers pumping out your desktop, and so on. The other programs you might have, you just want them to start when you click on their icon. These are like the other methods.

Now that we have that down, we see the first parameter in "init" is "self." This is done purely out of standards, and is actually not required. You do not need it, and you could call it something else entirely, like "burrito." It's a good idea to just call it "self" since that is the accepted practice. If your goal is obfuscation, however, you might rename it. So, self is just the first argument of all class methods.
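If inheritance and __init__ are new to you, here is a tiny self-contained sketch of both ideas (the class names here are made up for the example): the child class inherits from the parent exactly the way SeaofBTCapp inherits from tk.Tk, and each __init__ runs automatically on instantiation.

```python
class Base:
    def __init__(self):
        # Runs automatically whenever Base (or a subclass that calls
        # this method) is instantiated.
        self.ready = True

class Child(Base):  # Child inherits everything from Base
    def __init__(self):
        Base.__init__(self)  # run the parent's __init__ too
        self.name = "child"

c = Child()
print(c.ready, c.name)  # True child
```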
Then you see we're calling these "*args" and "**kwargs." Like "self," actually typing out "args" and "kwargs" is not necessary; the asterisks do the trick. It is just common to add the "args" and "kwargs." So what are these? These are used to pass a variable, unknown, amount of arguments through the method. The difference between them is that args are used to pass non-keyworded arguments, where kwargs are keyword arguments (hence the meshing in the name to make it kwargs). Args are your typical parameters. Kwargs will basically be dictionaries. You can get by just thinking of kwargs as dictionaries that are being passed. So, in theory, you could have a method or function that was something like def example(farg, *args, **kwargs). The farg is required, as you likely know by now, and will throw an error if there is nothing assigned for it.

Next up, we have the following line under the def __init__:

    tk.Tk.__init__(self, *args, **kwargs)

Here we're initializing the inherited class. Now for some quick Tkinter-specific code:

    container = tk.Frame(self)
    container.pack(side="top", fill="both", expand=True)
    container.grid_rowconfigure(0, weight=1)
    container.grid_columnconfigure(0, weight=1)

We've defined this container, which will be filled with a bunch of frames to be accessed later on. Next, we have container.pack. In Tkinter, there are two major ways of populating and situating your widgets that you create within the frame. One way is pack, the other is grid. Depending on your needs and what you're comfortable with, you will likely use one more than the other. For the most part, I find that grid gives me the most control over my application. Grid allows you to create a sort of grid, which is used for orienting where things go in your application. Pack allows some control, but mostly feels to me like you're just stuffing things into a pillow case, just trying your best to pick a side, but it doesn't always work as intended.
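Before moving on, the *args/**kwargs behavior described above is easy to verify on its own. This throwaway function (not part of the app) shows what each one collects:

```python
def example(farg, *args, **kwargs):
    # farg is required; extra positional arguments land in the args
    # tuple, and keyword arguments land in the kwargs dictionary.
    return farg, args, kwargs

print(example(1, 2, 3, color="red"))
# (1, (2, 3), {'color': 'red'})
```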
The grid_configure options are just some simple config settings that we're setting early on.

    self.frames = {}

    frame = StartPage(container, self)

    self.frames[StartPage] = frame

We pre-defined a dictionary, which is empty for now. Remember earlier about dictionaries and kwargs? Where do you think this dictionary will go? Next, we define what the frame will be. Eventually, we're going to pack self.frames with a bunch of possible frames, with the "top" frame being the current frame. For now, we're just going to have one page, which is "StartPage" (not yet defined). Next, still under __init__, we have:

    frame.grid(row=0, column=0, sticky="nsew")

Here, we're using grid to place our widget. Row and column are both 0 for this widget. Then we have sticky being nsew. "nsew" corresponds to directions (north, south, east, west). The idea of sticky is like alignment, with a slight change of stretching. So if you aligned something e, then the widget would be to the right. If you said sticky="ew," then the widget would stretch from the left to right side. If you sticky "nsew" like we have, then the widget will be encouraged to fill the entire space allotted.

    self.show_frame(StartPage)

Finally, we call show_frame, which is a method we've yet to define, but it will be used to bring up a frame of our choosing, so let's create that method:

    def show_frame(self, cont):
        frame = self.frames[cont]
        frame.tkraise()

Another method, with self and an argument of cont for controller. Then, we define frame as a lookup in self.frames (which is that dictionary above), with the controller as the key to the value in our dictionary that is our frame. Finally, we do frame.tkraise(), which will bring our frame to the top for the user to see.

Great, our back-end is basically all set. Now let's make that starting page.
    class StartPage(tk.Frame):

        def __init__(self, parent, controller):
            tk.Frame.__init__(self, parent)
            label = tk.Label(self, text="This is the start page", font=LARGE_FONT)
            label.pack(pady=10, padx=10)

Here, we've got the StartPage class, inheriting from tk.Frame. We then have a typical __init__, with the init of tk.Frame as well. Then we define a label widget, which is Tkinter code. You'll see that for "font," we're calling LARGE_FONT. This is a constant that we'll just put at the top of our application. So, after you've called tkinter to be imported, add:

    LARGE_FONT = ("Verdana", 12)

Now, back to our StartPage class. We are using .pack, with some padding on the y and x. Padding just adds some empty spaces on the edges of things to help things not look so cluttered.

Finally, at the end of our script, we just need:

    app = SeaofBTCapp()
    app.mainloop()

App is the object of the SeaofBTCapp class, then we run .mainloop(), which is a tkinter functionality, yet we can use it due to inheritance. And you're all done!
Go ahead and run the script and you should see:

The full script:

    import tkinter as tk

    LARGE_FONT = ("Verdana", 12)

    class SeaofBTCapp(tk.Tk):

        def __init__(self, *args, **kwargs):
            tk.Tk.__init__(self, *args, **kwargs)
            container = tk.Frame(self)

            container.pack(side="top", fill="both", expand=True)

            container.grid_rowconfigure(0, weight=1)
            container.grid_columnconfigure(0, weight=1)

            self.frames = {}

            frame = StartPage(container, self)

            self.frames[StartPage] = frame

            frame.grid(row=0, column=0, sticky="nsew")

            self.show_frame(StartPage)

        def show_frame(self, cont):
            frame = self.frames[cont]
            frame.tkraise()

    class StartPage(tk.Frame):

        def __init__(self, parent, controller):
            tk.Frame.__init__(self, parent)
            label = tk.Label(self, text="Start Page", font=LARGE_FONT)
            label.pack(pady=10, padx=10)

    app = SeaofBTCapp()
    app.mainloop()

I will admit, at this point, it might feel like Tkinter is going to be super hard to work with, and we could have created the window that we just created MUCH more easily. I want to stress that what we mostly did here is lay some foundation for expansion. We've now got a back-end that will very easily allow us to add more and more pages, so, in the future, adding pages is as simple as creating another Class just like StartPage, adding navigation to it, and you're set.
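As an aside, the heart of this design -- a dictionary keyed by the frame classes themselves, with show_frame looking up which instance to raise -- can be exercised without any display at all. In this sketch, plain stand-in classes replace the tk.Frame subclasses and "raising" just records the current frame; everything here is illustrative scaffolding, not part of the tutorial's app.

```python
class StartPage: pass      # stand-ins for tk.Frame subclasses
class PageOne: pass

class App:
    def __init__(self):
        self.frames = {}
        for F in (StartPage, PageOne):
            self.frames[F] = F()   # instantiate once, key by class
        self.current = None
        self.show_frame(StartPage)

    def show_frame(self, cont):
        # cont is the class object itself, used as the dictionary key
        self.current = self.frames[cont]

app = App()
print(type(app.current).__name__)  # StartPage
app.show_frame(PageOne)
print(type(app.current).__name__)  # PageOne
```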
https://pythonprogramming.net/object-oriented-programming-crash-course-tkinter/
NAME

debconf - developers guide

DESCRIPTION

This is a guide for developing packages that use debconf. This manual assumes that you are familiar with debconf as a user, and are familiar with the basics of debian package construction.

This manual begins by explaining two new files that are added to debian packages that use debconf. Then it explains how the debconf protocol works, and points you at some libraries that will let your programs speak the protocol. It discusses other maintainer scripts that debconf is typically used in: the postinst and postrm scripts. Then moves on to more advanced topics like shared debconf templates, debugging, and some common techniques and pitfalls of programming with debconf. It closes with a discussion of debconf's current shortcomings.

Note: It is a little confusing that dpkg refers to running a package's postinst script as "configuring" the package, since a package that uses debconf is often fully pre-configured, by its config script, before the postinst ever runs. Oh well.

Like the postinst, the config script is passed two parameters when it is run. The first tells what action is being performed, and the second is the version of the package that is currently installed. So, like in a postinst, you can use dpkg --compare-versions on $2 to make some behavior happen only on upgrade from a particular version of a package, and things like that. The config script can be run in one of three ways:

1. If a package is pre-configured, with dpkg-preconfigure, its config script is run, and is passed the parameters "configure", and installed-version.

2. When a package's postinst is run, debconf will try to run the config script then too, and it will be passed the same parameters it was passed when it is pre-configured. This is necessary because the package might not have been pre-configured, and the config script still needs to get a chance to run. See HACKS for details.
3. If a package is reconfigured, with dpkg-reconfigure, its config script is run, and is passed the parameters "reconfigure" and installed-version.

Note that since a typical package install or upgrade using apt runs steps 1 and 2, the config script will typically be run twice. It should do nothing the second time (to ask questions twice in a row is annoying), and it should definitely be idempotent. Luckily, debconf avoids repeating questions by default, so this is generally easy to accomplish.

Note that the config script is run before the package is unpacked. It should only use commands that are in essential packages. The only dependency of your package that is guaranteed to be met when its config script is run is a dependency (possibly versioned) on debconf itself.

The config script should not need to modify the filesystem at all. It just examines the state of the system, and asks questions, and debconf stores the answers to be acted on later by the postinst script. Conversely, the postinst script should almost never use debconf to ask questions, but should instead act on the answers to questions asked by the config script.

THE TEMPLATES FILE

A package that uses debconf probably wants to ask some questions. These questions are stored, in template form, in the templates file. Like the config script, the templates file is put in the control.tar.gz section of a deb. Its format is similar to a debian control file; a set of stanzas separated by blank lines, with each stanza having a RFC822-like form:

    Template: foo/bar
    Type: string
    Default: foo
    Description: This is a sample string question.
     This is its extended description.
     .

Notice that:

- Like in a debian package description, a dot on its own line sets off a new paragraph.

- Most text is word-wrapped, but doubly-indented text is left alone, so you can use it for lists of items, like this list. Be careful, since it is not word-wrapped, if it's too wide it will look bad.
Using it for short items is best (so this is a bad example).

    Template: foo/baz
    Type: boolean
    Description: Clear enough, no?

This is another question, of boolean type. For some real-life examples of templates files, see /var/lib/dpkg/info/debconf.templates, and other .templates files in that directory.

Let's look at each of the fields in turn...

It's best to use these only for warning about very serious problems, and the error datatype is often more suitable.

This datatype can be used for fragments of text, such as labels, that can be used for cosmetic reasons in the displays of some frontends. Other frontends will not use it at all.

There is no point in using this datatype yet, since no frontends support it well. It may even be removed in the future.

Default

The 'Default' field tells debconf what the default value should be. For multiselect, it can be a list of choices, separated by commas and spaces, similar to the 'Choices' field. For select, it should be one of the choices. For boolean, it is "true" or "false", while it can be anything for a string, and it is ignored for passwords.

Don't make the mistake of thinking that the default field contains the "value" of the question, or that it can be used to change the value of the question. It does not, and cannot, it just provides a default value for the first time the question is displayed. To provide a default that changes on the fly, you'd have to use the SET command to change the value of a question.

Description

The 'Description' field, like the description of a Debian package, has two parts: A short description and an extended description. Note that some debconf frontends don't display the long description, or might only display it if the user asks for help. So the short description should be able to stand on its own.

If you can't think up a long description, then first, think some more. Post to debian-devel. Ask for help. Take a writing class! That extended description is important.
If after all that you still can't come up with anything, leave it blank. There is no point in duplicating the short description.

Text in the extended description will be word-wrapped, unless it is prefixed by additional whitespace (beyond the one required space). You can break it up into separate paragraphs by putting " ." on a line by itself between them.

QUESTIONS

A question is an instantiated template. By asking debconf to display a question, your config script can interact with the user. When debconf loads a templates file (this happens whenever a config or postinst script is run), it automatically instantiates a question from each template. It is actually possible to instantiate several independent questions from the same template (using the REGISTER command), but that is rarely necessary.

Templates are static data that comes from the templates file, while questions are used to store dynamic data, like the current value of the question, whether a user has seen a question, and so on. Keep the distinction between a template and a question in mind, but don't worry too much about it.

SHARED TEMPLATES

It's actually possible to have a template and a question that are shared among a set of packages. All the packages have to provide an identical copy of the template in their templates files. This can be useful if a bunch of packages need to ask the same question, and you only want to bother the user with it once. Shared templates are generally put in the shared/ pseudo-directory in the debconf template namespace.

THE DEBCONF PROTOCOL

Config scripts communicate with debconf using the debconf protocol. This is a simple line-oriented protocol, similar to common internet protocols such as SMTP. The config script sends debconf a command by writing the command to standard output. Then it can read debconf's reply from standard input.
Debconf's reply can be broken down into two parts: A numeric result code (the first word of the reply), and an optional extended result code (the remainder of the reply). The numeric code uses 0 to indicate success, and other numbers to indicate various kinds of failure. For full details, see the table in Debian policy's debconf specification document.

The extended return code is generally free form and unspecified, so you should generally ignore it, and should certainly not try to parse it in a program to work out what debconf is doing. The exception is commands like GET, that cause a value to be returned in the extended return code.

Generally you'll want to use a language-specific library that handles the nuts and bolts of setting up these connections to debconf and communicating with it. For now, here are the commands in the protocol. This is not the definitive definition; see Debian policy's debconf specification document for that.

VERSION number

You generally don't need to use this command. It exchanges with debconf the protocol version number that is being used. The current protocol version is 2.0, and versions in the 2.x series will be backwards-compatible. You may specify the protocol version number you are speaking and debconf will return the version of the protocol it speaks in the extended result code. If the version you specify is too low, debconf will reply with numeric code 30.

CAPB capabilities

You generally don't need to use this command. It exchanges with debconf a list of supported capabilities. Capabilities that both you and debconf support will be used, and debconf will reply with all the capabilities it supports.

If 'escape' is found among your capabilities, debconf will expect commands you send it to have backslashes and newlines escaped (as \\ and \n respectively) and will in turn escape backslashes and newlines in its replies.
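In escape mode, the escaping rules just described are mechanical. This small Python sketch shows the round trip; the helper names are invented for the example -- in practice debconf-escape(1) and the confmodule libraries do this work for you:

```python
def debconf_escape(text):
    # Backslashes must be escaped first, then newlines,
    # per the 'escape' capability rules above.
    return text.replace("\\", "\\\\").replace("\n", "\\n")

def debconf_unescape(text):
    out, i = [], 0
    while i < len(text):
        if text[i] == "\\" and i + 1 < len(text):
            out.append("\n" if text[i + 1] == "n" else text[i + 1])
            i += 2
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

print(debconf_escape("two\nlines"))  # prints the single line: two\nlines
```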
This can be used, for example, to substitute multi-line strings into templates, or to get multi-line extended descriptions reliably using METAGET. In this mode, you must escape input text yourself (you can use debconf-escape(1) to help with this if you want), but the confmodule libraries will unescape replies for you.

SETTITLE question

This sets the title debconf displays to the user, using the short description of the template for the specified question. The template should be of type title. You rarely need to use this command since debconf can automatically generate a title based on your package's name. Setting the title from a template means they are stored in the same place as the rest of the debconf questions, and allows them to be translated.

TITLE string

This sets the title debconf displays to the user to the specified string. Use of the SETTITLE command is normally to be preferred as it allows for translation of the title.

Some debconf frontends can display a number of questions to the user at once. Maybe in the future a frontend will even be able to group these questions into blocks on screen. BEGINBLOCK and ENDBLOCK can be placed around a set of INPUT commands to indicate blocks of questions (and blocks can even be nested). Since no debconf frontend is so sophisticated yet, these commands are ignored for now.

Questions can have substitutions embedded in their 'Description' and 'Choices' fields (use of substitutions in 'Choices' fields is a bit of a hack though; a better mechanism will eventually be developed). These substitutions look like "${key}". When the question is displayed, the substitutions are replaced with their values. This command can be used to set the value of a substitution. This is useful if you need to display some message to the user that you can't hard-code in the templates file.

One common flag is the "seen" flag. It is normally only set if a user has already seen a question.
Debconf usually only displays questions to users if they have the seen flag set to "false" (or if it is reconfiguring a package). Sometimes you want the user to see a question again -- in these cases you can set the seen flag to false to force debconf to redisplay it.

Here is a simple example of the debconf protocol in action.

    INPUT medium debconf/frontend
    30 question skipped
    FSET debconf/frontend seen false
    0 false
    INPUT high debconf/frontend
    0 question will be asked
    GO
    [ Here debconf displays a question to the user. ]
    0 ok
    GET no/such/question
    10 no/such/question doesn't exist
    GET debconf/frontend
    0 Dialog

LIBRARIES

Setting things up so you can talk to debconf, and speaking the debconf protocol by hand, is a little too much work, so some thin libraries exist to relieve this minor drudgery.

For shell programming, there is the /usr/share/debconf/confmodule library, which you can source at the top of a shell script, and talk to debconf in a fairly natural way, using lower-case versions of the debconf protocol commands, that are prefixed with "db_" (ie, "db_input" and "db_go"). For details, see confmodule(3). Perl programmers can use the Debconf::Client::ConfModule(3) perl module, and python programmers can use the debconf python module.

The rest of this manual will use the /usr/share/debconf/confmodule library in example shell scripts. Here is an example config script using that library, that just asks a question:

    #!/bin/sh
    set -e
    . /usr/share/debconf/confmodule
    db_set mypackage/reboot-now false
    db_input high mypackage/reboot-now || true
    db_go || true

Notice the uses of "|| true" to prevent the script from dying if debconf decides it can't display a question, or the user tries to back up. In those situations, debconf returns a non-zero exit code, and since this shell script is set -e, an untrapped exit code would make it abort.
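The transcript above also shows the shape every client library has to deal with: each reply is a numeric code followed by an optional extended part. A minimal Python sketch of that split (the function name is invented; real clients should use the debconf python module mentioned above):

```python
def parse_reply(line):
    """Split one debconf reply into (numeric code, extended part).
    0 means success; for commands like GET, the extended part
    carries the value."""
    code, _, extended = line.partition(" ")
    return int(code), extended

code, value = parse_reply("0 Dialog")
print(code, value)  # 0 Dialog
```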
And here is a corresponding postinst script, that uses the user's answer to the question to see if the system should be rebooted (a rather absurd example...):

    #!/bin/sh
    set -e
    . /usr/share/debconf/confmodule
    db_get mypackage/reboot-now
    if [ "$RET" = true ]; then
        shutdown -r now
    fi

Notice the use of the $RET variable to get at the extended return code from the GET command, which holds the user's answer to the question.

Besides the config script and postinst, you can use debconf in any of the other maintainer scripts. Most commonly, you'll be using debconf in your postrm, to call the PURGE command when your package is removed, to clean out its entries in the debconf database. (This is automatically set up for you by dh_installdebconf(1), by the way.)

A more involved use of debconf would be if you want to use it in the postrm when your package is purged, to ask a question about deleting something. Or maybe you find you need to use it in the preinst or prerm for some reason. All of these uses will work, though they'll probably involve asking questions and acting on the answers in the same program, rather than separating the two activities as is done in the config and postinst scripts.

Note that if your package's sole use of debconf is in the postrm, you should make your package's postinst source /usr/share/debconf/confmodule, to give debconf a chance to load up your templates file into its database. Then the templates will be available when your package is being purged.

You can also use debconf in other, standalone programs. The issue to watch out for here is that debconf is not intended to be, and must not be used as, a registry. This is unix after all, and programs are configured by files in /etc, not by some nebulous debconf database (that is only a cache anyway and might get blown away). So think long and hard before using debconf in a standalone program.
There are times when it can make sense, as in the apt-setup program, which uses debconf to prompt the user in a manner consistent with the rest of the debian install process, and immediately acts on their answers to set up apt's sources.list.

LOCALIZATION

Debconf supports localization of templates files. This is accomplished by adding more fields, with translated text in them. Any of the fields can be translated. For example, you might want to translate the description into Spanish. Just make a field named 'Description-es' that holds the translation. If a translated field is not available, debconf falls back to the normal English field.

Besides the 'Description' field, you should translate the 'Choices' field of a select or multiselect template. Be sure to list the translated choices in the same order as they appear in the main 'Choices' field. You do not need to translate the 'Default' field of a select or multiselect question, and the value of the question will be automatically returned in English.

You will find it easier to manage translations if you keep them in separate files; one file per translation. In the past, the debconf-getlang(1) and debconf-mergetemplate(1) programs were used to manage debian/template.ll files. This has been superseded by the po-debconf(7) package, which lets you deal with debconf translations in .po files, just like any other translations. Your translators will thank you for using this new improved mechanism. For the details on po-debconf, see its man page.

DEBUGGING

So you have a package that's supposed to use debconf, but it doesn't quite work. Maybe debconf is just not asking that question you set up. Or maybe something weirder is happening; it spins forever in some kind of loop, or worse. Luckily, debconf has plenty of debugging facilities.

DEBCONF_DEBUG

The first thing to reach for is the DEBCONF_DEBUG environment variable.
If you set and export DEBCONF_DEBUG=developer, debconf will output to stderr a dump of the debconf protocol as your program runs. It'll look something like this -- the typo is made clear:

    debconf (developer): <-- input high debconf/frontand
    debconf (developer): --> 10 "debconf/frontand" doesn't exist
    debconf (developer): <-- go
    debconf (developer): --> 0 ok

It's rather useful to use debconf's readline frontend when you're debugging (in the author's opinion), as the questions don't get in the way, and all the debugging output is easily preserved and logged.

DEBCONF_C_VALUES

If this environment variable is set to 'true', the frontend will display the values in the Choices-C field (if present) of select and multiselect templates rather than the descriptive values.

It turns out that if you set up a ~/.debconfrc file for a normal user, pointing at a personal config.dat and template.dat for the user, you can load up templates and run config scripts all you like, without any root access. If you want to start over with a clean database, just blow away the *.dat files. For details about setting this up, see debconf.conf(5), and note that /etc/debconf.conf makes a good template for a personal ~/.debconfrc file.

ADVANCED PROGRAMMING WITH DEBCONF

Config file handling

Many of you seem to want to use debconf to help manage config files that are part of your package. Perhaps there is no good default to ship in a conffile, and so you want to use debconf to prompt the user, and write out a config file based on their answers. That seems easy enough to do, but then you consider upgrades, and what to do when someone modifies the config file you generate, and dpkg-reconfigure, and ... There are a lot of ways to do this, and most of them are wrong, and will often earn you annoyed bug reports. Here is one right way to do it.

It assumes that your config file is really just a series of shell variables being set, with comments in between, and so you can just source the file to "load" it.
If you have a more complicated format, reading (and writing) it becomes a bit trickier. Your config script will look something like this:

  #!/bin/sh
  CONFIGFILE=/etc/foo.conf
  set -e
  . /usr/share/debconf/confmodule

  # Load config file, if it exists.
  if [ -e $CONFIGFILE ]; then
      . $CONFIGFILE || true

      # Store values from config file into
      # debconf db.
      db_set mypackage/foo "$FOO"
      db_set mypackage/bar "$BAR"
  fi

  # Ask questions.
  db_input medium mypackage/foo || true
  db_input medium mypackage/bar || true
  db_go || true

And the postinst will look something like this:

  #!/bin/sh
  CONFIGFILE=/etc/foo.conf
  set -e
  . /usr/share/debconf/confmodule

  # Generate config file, if it doesn't exist.
  # An alternative is to copy in a template
  # file from elsewhere.
  if [ ! -e $CONFIGFILE ]; then
      echo "# Config file for my package" > $CONFIGFILE
      echo "FOO=" >> $CONFIGFILE
      echo "BAR=" >> $CONFIGFILE
  fi

  # Substitute in the values from the debconf db.
  # There are obvious optimizations possible here.
  # The cp before the sed ensures we do not mess up
  # the config file's ownership and permissions.
  db_get mypackage/foo
  FOO="$RET"
  db_get mypackage/bar
  BAR="$RET"
  cp -a -f $CONFIGFILE $CONFIGFILE.tmp
  sed -e "s/^ *FOO=.*/FOO=\"$FOO\"/" \
      -e "s/^ *BAR=.*/BAR=\"$BAR\"/" \
      < $CONFIGFILE > $CONFIGFILE.tmp
  mv -f $CONFIGFILE.tmp $CONFIGFILE

Consider how these two scripts handle all the cases. On fresh installs the questions are asked by the config script, and a new config file generated by the postinst. On upgrades and reconfigures, the config file is read in, and the values in it are used to change the values in the debconf database, so the admin's manual changes are not lost. The questions are asked again (and may or may not be displayed). Then the postinst substitutes the values back into the config file, leaving the rest of it unchanged.
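The sed invocation in the postinst rewrites only the variable assignments and leaves comments and unrelated lines untouched. The same idea can be sketched in Python (the function name and regex are illustrative, not part of debconf):

```python
import re

def substitute(config_text, values):
    """Rewrite VAR= lines with values from the debconf db, leaving
    comments and unrelated lines untouched -- the same effect as the
    sed calls in the postinst above."""
    lines = []
    for line in config_text.splitlines():
        m = re.match(r'^\s*([A-Z]+)=', line)
        if m and m.group(1) in values:
            # Replace the whole assignment, quoting the new value.
            lines.append('%s="%s"' % (m.group(1), values[m.group(1)]))
        else:
            lines.append(line)
    return "\n".join(lines)

cfg = '# Config file for my package\nFOO=\nBAR="old"'
print(substitute(cfg, {"FOO": "1", "BAR": "2"}))
```

As in the shell version, the admin's comments and any variables debconf does not know about pass through unchanged.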
Letting the user back up

Few things are more frustrating when using a system like debconf than being asked a question, answering it, moving on to another screen with a new question on it, realizing that hey, you made a mistake with that last question, wanting to go back to it, and discovering that you can't.

Since debconf is driven by your config script, it can't jump back to a previous question on its own, but with a little help from you, it can accomplish this feat. The first step is to make your config script let debconf know it is capable of handling the user pressing a back button. You use the CAPB command to do this, passing backup as a parameter.

Then after each GO command, you must test to see if the user asked to back up (debconf returns a code of 30), and if so jump back to the previous question.

There are several ways to write the control structures of your program so it can jump back to previous questions when necessary. You can write goto-laden spaghetti code. Or you can create several functions and use recursion. But perhaps the cleanest and easiest way is to construct a state machine. Here is a skeleton of a state machine that you can fill out and expand.

  #!/bin/sh
  set -e
  . /usr/share/debconf/confmodule
  db_capb backup

  STATE=1
  while true; do
      case "$STATE" in
      1)
          # Two unrelated questions.
          db_input medium my/question || true
          db_input medium my/other_question || true
      ;;
      2)
          # Only ask this question if the
          # first question was answered in
          # the affirmative.
          db_get my/question
          if [ "$RET" = "true" ]; then
              db_input medium my/dep_question || true
          fi
      ;;
      *)
          # The default case catches when $STATE is greater than the
          # last implemented state, and breaks out of the loop. This
          # requires that states be numbered consecutively from 1
          # with no gaps, as the default case will also be entered
          # if there is a break in the numbering.
          break # exits the enclosing "while" loop
      ;;
      esac

      if db_go; then
          STATE=$(($STATE + 1))
      else
          STATE=$(($STATE - 1))
      fi
  done

  if [ $STATE -eq 0 ]; then
      # The user has asked to back up from the first
      # question. This case is problematical. Regular
      # dpkg and apt package installation isn't capable
      # of backing up questions between packages as this
      # is written, so this will exit leaving the package
      # unconfigured - probably the best way to handle
      # the situation.
      exit 10
  fi

Note that if all your config script does is ask a few unrelated questions, then there is no need for the state machine. Just ask them all, and GO; debconf will do its best to present them all in one screen, and the user won't need to back up.

Preventing infinite loops

One gotcha with debconf comes up if you have a loop in your config script. Suppose you're asking for input and validating it, and looping if it's not valid:

  ok=''
  while [ ! "$ok" ]; do
      db_input low foo/bar || true
      db_go || true
      db_get foo/bar
      if [ "$RET" ]; then
          ok=1
      fi
  done

This looks ok at first glance. But consider what happens if the value of foo/bar is "" when this loop is entered, and the user has their priority set high, or is using a non-interactive frontend, and so they are not really asked for input. The value of foo/bar is not changed by the db_input, and so it fails the test and loops. And loops ...

One fix for this is to make sure that before the loop is entered, the value of foo/bar is set to something that will pass the test in the loop. So for example if the default value of foo/bar is "1", then you could RESET foo/bar just before entering the loop. Another fix is to check the return code of the INPUT command. If it is 30 then the user is not being shown the question you asked them, and you should break out of the loop.
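The back/forward arithmetic of the state-machine skeleton above is easy to simulate outside of debconf. A Python sketch (entirely hypothetical names; debconf itself is not involved) showing how a back-button at state 2 revisits state 1, and how backing out of state 1 leaves state 0, the case the skeleton handles with exit 10:

```python
def run_states(handlers, answers):
    """Drive numbered states the way the shell skeleton does: a
    successful GO advances one state, a back-button press (debconf
    return code 30) goes back one. `answers` is a scripted list of
    True (GO succeeded) / False (user pressed back), one per GO."""
    state = 1
    trace = []
    i = 0
    while 1 <= state <= len(handlers):
        trace.append(state)          # "run" this state's questions
        went_ok = answers[i]
        i += 1
        state = state + 1 if went_ok else state - 1
    return state, trace

# User answers state 1, backs up from state 2, then goes forward again.
final, visited = run_states(handlers=[None, None],
                            answers=[True, False, True, True])
print(visited)   # [1, 2, 1, 2]
print(final)     # 3 -> past the last state, configuration finished
```

Backing up from the very first state yields a final state of 0, which is where the shell version exits with code 10.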
Choosing among related packages

Sometimes a set of related packages can be installed, and you want to prompt the user which of the set should be used by default. Examples of such sets are window managers, or ispell dictionary files.

While it would be possible for each package in the set to simply prompt, "Should this package be default?", this leads to a lot of repetitive questions if several of the packages are installed. It's possible with debconf to present a list of all the packages in the set and allow the user to choose between them. Here's how.

Make all the packages in the set use a shared template. Something like this:

  Template: shared/window-manager
  Type: select
  Choices: ${choices}
  Description: Select the default window manager.
   Select the window manager that will be started by default when X starts.

Each package should include a copy of the template. Then it should include some code like this in its config script:

  db_metaget shared/window-manager owners
  OWNERS=$RET
  db_metaget shared/window-manager choices
  CHOICES=$RET

  if [ "$OWNERS" != "$CHOICES" ]; then
      db_subst shared/window-manager choices $OWNERS
      db_fset shared/window-manager seen false
  fi

  db_input medium shared/window-manager || true
  db_go || true

A bit of an explanation is called for. By the time your config script runs, debconf has already read in all the templates for the packages that are being installed. Since the set of packages share a question, debconf records that fact in the owners field. By a strange coincidence, the format of the owners field is the same as that of the choices field (a comma and space delimited list of values).

The METAGET command can be used to get the list of owners and the list of choices. If they are different, then a new package has been installed. So use the SUBST command to change the list of choices to be the same as the list of owners, and ask the question.
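The owners/choices check above is just a comparison of two comma-and-space delimited lists. Sketched in Python for clarity (the function name is mine, not debconf's):

```python
def needs_resubst(owners, choices):
    """Both fields are comma-and-space delimited lists. If they
    differ, a package joined or left the set, so the Choices field
    must be re-substituted (SUBST) before asking the question again."""
    return owners.split(", ") != choices.split(", ")

print(needs_resubst("fvwm, twm", "fvwm, twm"))          # False: nothing changed
print(needs_resubst("fvwm, twm, wmaker", "fvwm, twm"))  # True: wmaker joined the set
```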
When a package is removed, you probably want to see if that package is the currently selected choice, and if so, prompt the user to select a different package to replace it. This can be accomplished by adding something like this to the prerm scripts of all related packages (replacing <package> with the package name):

  if [ -e /usr/share/debconf/confmodule ]; then
      . /usr/share/debconf/confmodule

      # I no longer claim this question.
      db_unregister shared/window-manager

      # See if the shared question still exists.
      if db_get shared/window-manager; then
          db_metaget shared/window-manager owners
          db_subst shared/window-manager choices $RET
          db_metaget shared/window-manager value
          if [ "<package>" = "$RET" ] ; then
              db_fset shared/window-manager seen false
              db_input high shared/window-manager || true
              db_go || true
          fi
          # Now do whatever the postinst script did
          # to update the window manager symlink.
      fi
  fi

HACKS

Debconf is currently not fully integrated into dpkg (but I want to change this in the future), and so some messy hacks are currently called for.

The worst of these involves getting the config script to run. The way that works now is that the config script will be run when the package is pre-configured. Then, when the postinst script runs, it starts up debconf again. Debconf notices it is being used by the postinst script, and so it goes off and runs the config script. This can only work if your postinst loads up one of the debconf libraries, though, so postinsts always have to take care to do that. We hope to address this later by adding explicit support to dpkg for debconf. The debconf(1) program is a step in this direction.

A related hack is getting debconf running when a config script, postinst, or other program that uses it starts up. After all, they expect to be able to talk to debconf right away.
The way this is accomplished for now is that when such a script loads a debconf library (like /usr/share/debconf/confmodule), and debconf is not already running, it is started up, and a new copy of the script is re-execed. The only noticeable result is that you need to put the line that loads a debconf library at the very top of the script, or weird things will happen. We hope to address this later by changing how debconf is invoked, and turning it into something more like a transient daemon.

It's rather hackish how debconf figures out what templates files to load, and when it loads them. When the config, preinst, and postinst scripts invoke debconf, it will automatically figure out where the templates file is, and load it. Standalone programs that use debconf will cause debconf to look for templates files in /usr/share/debconf/templates/progname.templates. And if a postrm wants to use debconf at purge time, the templates won't be available unless debconf had a chance to load them in its postinst. This is messy, but rather unavoidable. In the future some of these programs may be able to use debconf-loadtemplate by hand, though.

/usr/share/debconf/confmodule's historic behavior of playing with file descriptors and setting up a fd #3 that talks to debconf can cause all sorts of trouble when a postinst runs a daemon, since the daemon ends up talking to debconf, and debconf can't figure out when the script terminates. The STOP command can work around this. In the future, we are considering making debconf communication happen over a socket or some other mechanism than stdio.

Debconf sets DEBCONF_RECONFIGURE=1 before running postinst scripts, so a postinst script that needs to avoid some expensive operation when reconfigured can look at that variable. This is a hack because the right thing would be to pass $1 = "reconfigure", but doing so without breaking all the postinsts that use debconf is difficult.
The migration plan away from this hack is to encourage people to write postinsts that accept "reconfigure", and once they all do, begin passing that variable.
http://manpages.ubuntu.com/manpages/maverick/man7/debconf-devel.7.html
Aliases: wordfree(3)

RETURN VALUE

In case of success 0 is returned. In case of error one of the following five values is returned.

VERSIONS

wordexp() and wordfree() are provided in glibc since version 2.1.

ATTRIBUTES

wordexp() calls the utent functions (setutent(3) and friends), so we use race:utent to remind users.

CONFORMING TO

POSIX.1-2001, POSIX.1-2008.

EXAMPLE

The output of the following example program is approximately that of "ls [a-c]*.c".

  #include <stdio.h>
  #include <stdlib.h>
  #include <wordexp.h>

  int
  main(int argc, char **argv)
  {
      wordexp_t p;
      char **w;
      int i;

      wordexp("[a-c]*.c", &p, 0);
      w = p.we_wordv;
      for (i = 0; i < p.we_wordc; i++)
          printf("%s\n", w[i]);
      wordfree(&p);
      exit(EXIT_SUCCESS);
  }
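For the glob-only part of the example above, Python's glob module gives a rough analogue. This sketch covers only pattern matching, none of wordexp's tilde, variable, or command expansion, and the scratch files are made up for illustration:

```python
import glob
import os
import tempfile

# wordexp("[a-c]*.c", ...) expands the pattern the way the shell
# would; for pure globbing, Python's glob module does the same.
def expand(pattern, directory="."):
    return sorted(glob.glob(os.path.join(directory, pattern)))

# Demonstrate against a scratch directory.
d = tempfile.mkdtemp()
for name in ("alpha.c", "beta.c", "delta.c", "alpha.h"):
    open(os.path.join(d, name), "w").close()

matches = [os.path.basename(p) for p in expand("[a-c]*.c", d)]
print(matches)  # ['alpha.c', 'beta.c']
```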
https://reposcope.com/man/en/3/wordexp
print(9E) - display a driver message on system console

SYNOPSIS

  #include <sys/types.h>
  #include <sys/errno.h>
  #include <sys/ddi.h>
  #include <sys/sunddi.h>

  int prefixprint(dev_t dev, char *str);

INTERFACE LEVEL

Architecture independent level 1 (DDI/DKI). This entry point is required for block devices.

PARAMETERS

  dev    Device number.
  str    Pointer to a character string describing the problem.

DESCRIPTION

The print() routine is called by the kernel when it has detected an exceptional condition (such as out of space) in the device. To display the message on the console, the driver should use the cmn_err(9F) kernel function. The driver should print the message along with any driver-specific information.

RETURN VALUES

The print() routine should return 0 for success, or the appropriate error number. The print routine can fail if the driver implemented a non-standard print() routine that attempted to perform error logging, but was unable to complete the logging for whatever reason.
http://docs.oracle.com/cd/E19963-01/html/821-1476/print-9e.html
Editor's note: If you missed last week's batch of recipes from O'Reilly's Eclipse Cookbook, be sure to check them out. This week, we offer two more sample recipes from the book. Both offer glimpses of Eclipse in action -- the first with CVS, and the second with Swing. Enjoy.

Related Reading: Eclipse Cookbook, by Steve Holzner

You want to connect Eclipse to a CVS repository. In Eclipse, open the Repositories view, right-click that view, and select New→ Repository Location, opening the Add CVS Repository dialog. Enter the required information, and click OK.

You have to establish a connection from Eclipse through the CVS server to the CVS repository before working with that repository. First, make sure your CVS server is running. To connect Eclipse to the CVS repository, select Window→ Open Perspective→ Other, and select the CVS Repository Exploring perspective. After you do this the first time, Eclipse adds this perspective to the Window→ Open Perspective submenu and also adds a shortcut for this perspective to the other perspective shortcuts at the extreme left of the Eclipse window.

When the CVS Repository Exploring perspective opens, right-click the blank CVS Repositories view at left, and select New→ Repository Location, opening the Add CVS Repository dialog shown in Figure 6-3.

Figure 6-3. Connecting Eclipse to a CVS repository

In the Add CVS Repository dialog, enter the name of the CVS server, often the name of the machine hosting the CVS server, and the path to the CVS repository. To connect to the CVS server, you'll also need to supply a username and password, as shown in Figure 6-3 (in this case we're using integrated Windows NT security, so no password is needed).

You can use two connection protocols with CVS servers, SSH (secure shell) and pserver. We'll use pserver here.

TIP: pserver is a CVS client/server protocol that uses its own password files and connections. It's more efficient than SSH but less secure. If security is an issue, go with SSH.
Click Finish after configuring the connection. The new connection to the CVS server should appear in the CVS Repositories view, as shown in Figure 6-4.

Figure 6-4. A new repository created in the CVS Repositories view

TIP: A public CVS server is available that gives you access to the code for Eclipse; go to :pserver:anonymous@dev.eclipse.org:/home/eclipse.

If you wish, you can see what commands Eclipse sends to the CVS server. To do so, open the CVS console by selecting Window→ Show View→ Other→ CVS→ CVS Console. The CVS Console view will appear (this view overlaps the standard Console view).

Eclipse 3.0 also supports CVS SSH2 in addition to the pserver and SSH protocols. You can enable SSH2 in the SSH2 Connection Method preference page (right-click a project and select Team→ CVS→ SSH2 Connection Method). All CVS server connections of type extssh will use SSH2 from that point on.

See also Chapter 4 of Eclipse (O'Reilly).

You want to use Swing or AWT graphical elements inside an SWT application. SWT (in Eclipse 3.0) supports embedding Swing/AWT widgets inside SWT widgets. Before Version 3.0, such support worked only on Windows. In Eclipse 3.0, it's now working in Windows for JDK 1.4 and later, as well as in GTK and Motif with early-access JDK 1.5.

To work with AWT and Swing elements in SWT, you use the SWT_AWT class. As an example (SwingAWTApp at this book's site), we'll create an application with an AWT frame and panel, as well as a Swing button and text control. We'll start with a new SWT composite widget:

  Composite composite = new Composite(shell, SWT.EMBEDDED);
  composite.setBounds(20, 20, 300, 200);
  composite.setLayout(new RowLayout());

Now add an AWT Frame window object to the composite using the SWT_AWT.new_Frame method, and add an AWT Panel object to the frame:

  java.awt.Frame frame = SWT_AWT.new_Frame(composite);
  java.awt.Panel panel = new java.awt.Panel();
  frame.add(panel);

We can now work with Swing controls.
In this example, we'll add a Swing button and a Swing text control to the AWT panel:

  final javax.swing.JButton button = new javax.swing.JButton("Click Me");
  final javax.swing.JTextField text = new javax.swing.JTextField(20);
  panel.add(button);
  panel.add(text);

All that's left is to connect the button to a listener to display a message when that button is clicked. You can see how that works in Example 9-5.

Example 9-5. Using Swing and AWT in SWT

  package org.cookbook.ch09;

  import java.awt.event.*;

  import org.eclipse.swt.*;
  import org.eclipse.swt.widgets.*;
  import org.eclipse.swt.layout.*;
  import org.eclipse.swt.awt.SWT_AWT;

  public class SwingAWTClass {
      public static void main(String[] args) {
          final Display display = new Display();
          final Shell shell = new Shell(display);
          shell.setText("Using Swing and AWT");
          shell.setSize(350, 280);

          Composite composite = new Composite(shell, SWT.EMBEDDED);
          composite.setBounds(20, 20, 300, 200);
          composite.setLayout(new RowLayout());

          java.awt.Frame frame = SWT_AWT.new_Frame(composite);
          java.awt.Panel panel = new java.awt.Panel();
          frame.add(panel);

          final javax.swing.JButton button = new javax.swing.JButton("Click Me");
          final javax.swing.JTextField text = new javax.swing.JTextField(20);
          panel.add(button);
          panel.add(text);

          button.addActionListener(new ActionListener() {
              public void actionPerformed(ActionEvent event) {
                  text.setText("Yep, it works.");
              }
          });

          shell.open();
          while (!shell.isDisposed()) {
              if (!display.readAndDispatch())
                  display.sleep();
          }
          display.dispose();
      }
  }

Running this application gives you the results shown in Figure 9-12; a Swing button and text control at work in an AWT frame in an SWT application. Cool.

Figure 9-12. Mixing AWT, Swing, and SWT

Steve Holzner is the author of O'Reilly's upcoming Eclipse: A Java Developer's.
http://www.onjava.com/pub/a/onjava/excerpt/eclipseckbk_chap1/index1.html?page=last&x-maxdepth=0
Apache OpenOffice (AOO) Bugzilla – Issue 37905
Mixed Hebrew/English text rendered incorrectly (text direction "context")
Last modified: 2013-08-07 15:12:27 UTC

After importing an Excel spreadsheet with mixed Hebrew/English text, the Hebrew text which should be on the right appears on the left, and vice versa. Note the English letter "k" in the middle of the text. The Hebrew word to its right in Excel has moved to the left, and the space between the word and the "k" has disappeared.

Created attachment 19692 [details] Spreadsheet as it appears in Excel

Created attachment 19693 [details] Spreadsheet as it appears in Calc

Created attachment 19694 [details] The Excel spreadsheet

Hi Henning, Daniel told me that this seems to be a problem of the Edit engine. So it's your turn now.

Frank set target

FME->DR: As discussed, the Excel setting "text direction: context" is not equivalent to "text direction: environment" in Calc.

*** Issue 30158 has been marked as a duplicate of this issue. ***

Needs a new entry in the Cell format dialog -> incompatible from SVX -> must be set to OOo Later

ayaniger -> dr: I'm checking in m113 on Windows, and this appears to be fixed. Could you take a look?

Looks better now, but not perfect. Calc still cannot handle the text direction "Context" known from Excel. This setting simply sets the cell to right-to-left if the string starts with a character from a CTL script (Hebrew, Arabic), and sets the cell to left-to-right if the string starts with another character. I will attach one of my test documents. Note the 2 cells that have "Context" set as text direction.

Created attachment 28664 [details] Test document for CTL text direction

In the attached Excel spreadsheet tmp_dan.xls, cells which should be aligned to left are aligned to right. This seems similar to the problem of Issue 30158, which was closed as a duplicate of this issue. This is a problem that RTL users see very frequently.
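The "Context" rule described in the comment above amounts to: scan for the first letter and pick a direction from its script. A simplified sketch, checking only the Hebrew and Arabic Unicode blocks (a real implementation would consult full Unicode bidi character classes):

```python
def context_direction(text):
    """Return 'rtl' if the first letter falls in the Hebrew or Arabic
    Unicode blocks, else 'ltr' -- a simplified model of Excel's
    'context' text direction as described in the bug report."""
    for ch in text:
        if not ch.isalpha():
            continue  # skip digits, spaces, punctuation
        cp = ord(ch)
        if 0x0590 <= cp <= 0x05FF or 0x0600 <= cp <= 0x06FF:
            return "rtl"
        return "ltr"
    return "ltr"  # no letters at all: fall back to left-to-right

print(context_direction("shalom"))                    # ltr
print(context_direction("\u05e9\u05dc\u05d5\u05dd k"))  # rtl (starts with Hebrew)
```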
Since over a year has passed since this issue was filed, and almost two years since Issue 30158 was filed, can we raise the priority and the target milestone?

Created attachment 33101 [details] OOo incorrectly aligns cells to the left in this spreadsheet

With the attached patch, all of the sample documents attached to this issue and to issue 30158 import correctly. The code makes two changes to OOo's behavior:

1. The default alignment of cells which begin with a Hebrew letter is to the right, like Excel, and unlike the distributed OOo. I don't know whether Excel behaves this way for all RTL languages, but this is Excel's behavior for Hebrew, so the patch tests only for Hebrew.

2. In an imported Excel document, mixed-text cells beginning with a non-Hebrew char and with "Context" text-direction will have the text direction set to LTR. This is the case in cell C5 of the sample document "textdirection_import_biff8.xls".

There is still a problem - when editing a cell with default alignment, the text in the grid becomes left-aligned even for Hebrew, for the duration of the edit. (This is only a problem on the grid, not in the input edit box above the grid.) Once the editing is finished, the alignment reverts back to the correct form. For more discussion, see the thread at:

Created attachment 40078 [details] Imports Hebrew default-aligned cells correctly

I received a comment asking for justification in changing OOo's default alignment based on content. As "dr" pointed out in his comment of April 2005, what is needed is a new type of SVX alignment, and this is not a simple matter to implement. Since there hasn't been movement on the issue since then, and the target remains "OOo Later", I assume that the change is not likely to happen soon. Under the current circumstances, any Hebrew Excel spreadsheet that uses Excel's default alignment (which is probably most of them) will be distorted if not unreadable.
This is why I thought that changing the behavior of "ENVIRONMENT" for RTL was the best way to go. I would appreciate it if you would review and integrate the patch, or implement an alternative, since at present import of Hebrew Excel spreadsheets is usually broken.

I reviewed my own patch, and decided that it's better not to change the default alignment of RTL cells. Here's a simpler patch, which solves the import problem but doesn't change the behavior of RTL cells in Calc.

Created attachment 40336 [details] Solves import problem, doesn't change anything else

Created attachment 40337 [details] Solves import problem, doesn't change anything else (mime type corrected)

@dr: Did you check ayaniger's patch? It's been here for a year and a half...

Looks OK for me with OpenOffice.org 3.2.1 (OOO320m19) from Debian. Can anyone else confirm?

@kaplan: on Mandriva Linux Cooker with openoffice.org-common-3.2-4mdv2010.1, I'm getting the offending behaviour upon typing:

  $ ooffice3.2 Mixed\ Text.xls

I.e.: I'm getting "He{Eivarim} La{k} He{Behirath}" instead of "He{Behirath} La{k} He{Eivarim}". This can be fixed by pressing the "Right-To-Left" button. Regards, -- Shlomi Fish

Update: Checking Mixed Text.xls with oo.org 3.2.0 (Windows) on Windows and 2.3.1 (Debian), both look the same as Excel 2003 on Windows. Checking textdirection_import_biff8.xls doesn't look the same compared to Excel 2003. The funny thing is that 3.2.0 on Windows looks different from 3.2.1 on Debian. And they both differ from Excel. Weird.

Created attachment 71237 [details] textdirection_import_biff8.xls with 3.2.0 on windows

Created attachment 71238 [details] textdirection_import_biff8.xls with 3.2.1 on debian

Created attachment 71239 [details] textdirection_import_biff8.xls with excel 2003

The screenshot of textdirection_import_biff8.xls with 3.2.0 on Windows is also the same for OO3.3-Beta1. According to the screenshots, "OS:" has to be changed from "Windows XP" to "all".
https://bz.apache.org/ooo/show_bug.cgi?id=37905
#include <wx/filectrl.h>

A file control event holds information about events associated with wxFileCtrl objects.

The following event handler macros redirect the events to member function handlers 'func' with prototypes like:

  void handlerFuncName(wxFileCtrlEvent& event)

Event macros include EVT_FILECTRL_SELECTIONCHANGED, EVT_FILECTRL_FOLDERCHANGED, and EVT_FILECTRL_FILTERCHANGED, referenced in the method descriptions below.

wxFileCtrlEvent: Constructor.

GetDirectory(): Returns the current directory. In case of a EVT_FILECTRL_FOLDERCHANGED, this method returns the new directory.

GetFile(): Returns the file selected (assuming it is only one file).

GetFiles(): Returns the files selected. In case of a EVT_FILECTRL_SELECTIONCHANGED, this method returns the files selected after the event.

GetFilterIndex(): Returns the current file filter index. For a EVT_FILECTRL_FILTERCHANGED event, this method returns the new file filter index.

SetDirectory(): Sets the directory of this event.

SetFiles(): Sets the files changed by this event.

SetFilterIndex(): Sets the filter index changed by this event.
http://docs.wxwidgets.org/trunk/classwx_file_ctrl_event.html
New firmware release 1.6.12.b1

Hello,

A new firmware version (1.6.12.b1) has been released. Here's the change log:

- General fixes and improvements to time.sleep(), time.sleep_ms() and time.sleep_us().
- Improve pin interrupts timing.
- Fix memory fragmentation issue caused by the timer alarms.
- Fix bug caused by machine.reset(). Used a quick WDT reset instead, which actually performs a full reset of the chip.
- Fix bug on BLE advertisement. Prevent system hang when trying to advertise a BLE service with length != 16.
- Solve race condition when reconfiguring the UART while an interrupt takes place.
- Add support for the LoPy OEM and WiPy OEM.
- Allow BLE characteristic read callbacks to return a value. This value will be the one received by the BLE client performing the read operation.
- General improvements to the docs.

Cheers,
Daniel

I have noticed since the previous firmware that FTP and telnet performance has dipped for the LoPy. It is very slow, especially while using the telnet REPL prompt. The UART REPL prompt shows no issues at all, though. Does anyone face a similar issue?

@robert-hh Thank you. Now it seems to work. I made a stupid mistake :( . Who knows why it worked with the old firmware version? :D

@Innocenzo What I see is that you use "str" both as a built-in function and as a local variable. That will cause confusion. Although it should not give the error message you show.

Something strange happens with my code that worked well with the previous firmware!
  import usocket as socket
  import utime
  import machine
  from network import WLAN

  def connectWeb(server_websocket):
      webaddr = socket.getaddrinfo("0.0.0.0", 80)[0][-1]
      print("Bind address info:", webaddr)
      server_websocket.bind(webaddr)
      server_websocket.listen(5)
      print("Listening, connect your browser to")
      while True:
          res = server_websocket.accept()
          client_sock = res[0]
          client_addr = res[1]
          print("Client address:", client_addr)
          print("Client socket:", client_sock)
          print("Request:")
          req = str(client_sock.readline())
          print(req)
          while True:
              h = client_sock.readline()
              if h == b"" or h == b"\r\n":
                  break
              print(h)
          ib = req.find(':')
          if ib > 0:
              str = req.split(":")
              ssid = str[1]
              pwd = str[2]
              security = ["None", "WEP", "WPA", "WPA2"]
              sec = security[int(str[3])]
              wlan.init(mode=WLAN.STA)
              wlan.connect(ssid, auth=(sec, pwd), timeout=5000)
              utime.sleep_ms(5000)
              if not wlan.isconnected():
                  f = open('connect.txt', 'w')
                  f.write("0")
                  f.close()
                  machine.reset()
              else:
                  f = open('connect.txt', 'w')
                  f.write("1")
                  f.close()
                  f = open('ssid.txt', 'w')
                  f.write(ssid)
                  f.close()
                  f = open('pass.txt', 'w')
                  f.write(pwd)
                  f.close()
                  f = open('sec.txt', 'w')
                  f.write(sec)
                  f.close()
                  break
          else:
              with open('index.html', 'r') as html:
                  client_sock.send(html.read())
          client_sock.close()
      return

  print("INIT")
  wlan = WLAN()
  wlan.init(mode=WLAN.AP, ssid='WiPy', auth=(WLAN.WPA2, 'testwipy2'), channel=7, antenna=WLAN.INT_ANT)
  wlan.ifconfig(1, config=('192.168.4.1', '255.255.255.0', '192.168.4.1', '0.0.0.0'))
  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  connectWeb(s)

  Client address: ('192.168.4.2', 40725)
  Client socket: <socket>
  Request:
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "test.py", line 80, in <module>
    File "test.py", line 23, in connectWeb
  NameError: local variable referenced before assignment

@brotherdust As far as I know (and I'm not a Pycom employee) development is still ongoing, but yes, the sources aren't in sync with the latest version, just slightly ahead of 1.6.9.b1. This has happened a few times in the
past; sometimes they're really good at it, sometimes it takes a bit longer. Not to be a stalker or anything :-) but I saw Pycom recently tweet about a hackathon in Shanghai, and Daniel was there, so that may be the reason. They have started some integration with the official MicroPython and there have been a few pull requests; Daniel explained this in another thread.

- brotherdust last edited by

@jmarcelino, can you confirm where development is actually happening these days? The source repos are out of date. Pycom used to be really good at keeping the sources in sync with the current firmware release, but it seems to be a low priority now. I do remember an announcement that the sources will be merged with the official MicroPython tree, but I'm not seeing any commits or pull requests that would show this is actually happening. Any thoughts? Thanks!

@this.wiederkehr That's the old source tree; the one you should use now hasn't got the very latest updates yet.

- this.wiederkehr last edited by

Where can I find the sources in order to rebuild myself? Where is the development actually happening? The last commit was on the 7th of March. Thanks for clarifying. Regards, This

This update seems to be a good one, thanks Daniel. I am no longer running out of memory and so far have not experienced any mysterious application hanging. The sporadic and temporary halting/hiccupping of my application has also vanished. I suspect I must have been bumping my head against the "memory fragmentation issue caused by the timer alarms" and the "general fixes and improvements to time.sleep(), time.sleep_ms() and time.sleep_us()".
@daniel Hello Daniel, pin interrupt response times did not really improve. I had still rare cases of about 1 ms resposne time, with most responses within 150 µs, and 50 µs fastest response. Multiple triggers on slow pulses got fewer. I assume these cannot be avoided by firmware. B.T.W.: When do you plan to update the github repository? I have the habit of putting some code into frozen bytecode. Very Nice for BLE ! I think that also bugs related to SSL Socket Server Side ( must be considered in the next firmware release. Good Works an Thanks. Nice update! BLE should be simpler with that change, will try it now :-) Thanks!
https://forum.pycom.io/topic/1024/new-firmware-release-1-6-12-b1
CC-MAIN-2022-21
refinedweb
1,029
69.79
LECTURES 8 & 9: INTRODUCTION TO NET PRESENT VALUE AND CAPITAL BUDGETING
Reading: Chapter 7, Section 7.1; Chapter 8, Sections 8.1-8.4.
Homework: Online problems, supplemental readings, study for midterm.
Objectives:
- Understand the Net Present Value criterion and its importance in evaluating corporate investments
- Identify the information that is required to compute NPV
- Identify the relevant, incremental cash flows of an investment proposal
- Evaluate an investment proposal using the NPV criterion given a discount rate

WHAT IS CAPITAL BUDGETING?
Capital budgeting is the decision-making process for investment decisions, i.e., deciding how much and what to invest in. The most important decisions a corporation can make involve the acquisition of long-term assets. (Sometimes called capital budgeting or strategic asset allocation.) In what lines of business or activities should we invest? What types of equipment should we acquire? Should we replace our plant or equipment? What other companies should we buy? To make these decisions, we apply the valuation principle and the law of one price. What is the market value of the project's benefits? What are the costs? If the benefit is bigger than the cost, we do it. Net Present Value is a dollar measure of the market value of an investment's benefits less its costs. Put another way, the net present value is the amount by which an investment will increase the market value of the firm. If a project has an NPV of $100,000, doing it would increase the market value of the firm by $100,000.

WHAT DOES NET PRESENT VALUE MEASURE?
The market value of all financial claims on a firm equals the present value of all current and future free cash flow. Free Cash Flow is the cash flow the firm generates that is available to pay out to all investors, including stockholders, bondholders, banks and other creditors.
Firm Value = FCF_0 + FCF_1/(1+r_1) + FCF_2/(1+r_2)^2 + FCF_3/(1+r_3)^3 + ...

Net present value measures the present value of all of the changes in a firm's current and future free cash flow. Δ means "change in":

NPV = Δ(Firm Value) = ΔFCF_0 + ΔFCF_1/(1+r_1) + ΔFCF_2/(1+r_2)^2 + ΔFCF_3/(1+r_3)^3 + ...

ΔFCF refers to incremental free cash flows. (ΔFCF represents all the changes in the firm's free cash flows that result from the investment.) r is the required return (or discount rate or cost of capital); r is the return that could be earned in the financial market for investments with the same risk.

EXAMPLE
You own a company that operates 5 Arby's Roast Beef Sandwich shops in Seattle. The company generates free cash flow of $100,000 per year, and you expect to close your stores in 5 years. Today

This note was uploaded on 12/02/2009 for the course FIN 350 taught by Professor Schonlau during the Spring '08 term at University of Washington.
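The NPV formula and the Arby's example above can be checked with a short script. A minimal Python sketch; the 10% discount rate is my illustrative assumption, since the notes break off before stating one:

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows, where cash_flows[t] arrives at the
    end of year t (cash_flows[0] is the time-0 flow, discounted by nothing)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Five years of $100,000 free cash flow and no time-0 flow, as in the example.
# The 10% discount rate is an assumed figure, purely for illustration.
flows = [0] + [100_000] * 5
print(round(npv(0.10, flows), 2))  # 379078.68
```

With a zero discount rate the value is simply the $500,000 sum of the flows; discounting at 10% shrinks it to about $379,000, which is exactly the "present value of future free cash flow" idea in the notes.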
https://www.coursehero.com/file/5672855/Lectures8and9/
Extreme Programming (XP) seems to be the subject of choice for software's chattering classes, so I thought it was time I went to the source. This is not a book review of Kent Beck's eXtreme Programming Explained, nor is it advocacy for the cause. This is my response to the issues and ideas raised in the book. The book is worth reading; at £20 and under 200 pages it will not break the bank or take you for ever to read - I wish more computing books could be so economical with my time and my money. My reaction to this book was 75% like reading Design Patterns for the first time; "I've done that, that's the way I did it on project Y," the other 25% was scary, very scary in places. It did succeed in stirring up feelings in me though, and feelings lead to thought, which is good. Some Overload readers will know of my liking for modern art; well, that 25% of the book was like seeing Tracey Emin's infamous Bed in the Turner Prize. It is easy to simply reject it out of hand, but, if you delve deeper and think about it, you can obtain an understanding. You may still not like it, but you can appreciate it. Well, 25% of the XP book was like that, and it is probably why I feel the need to respond with this article. Before jumping in, a word about my qualifications to write this: I have never done XP. I have, however, done many of the things described by Beck's XP and I therefore think I'm qualified to pass comment. Much of my personal reasoning on development processes comes from McCarthy (1995) and Maguire (1994); it's several years since I read anything that opened my ideas about the process as much as these two books, and I consider Beck's XP a successor to them[1]. One of the dirty little secrets of software development is: specifications do not work. The Victorian novel approach to software development, so beloved by out-sourcing companies, is fatally flawed. To write a comprehensive spec you have to implement most of the system.
Performing comprehensive analysis is a sure-fire way to get bogged down on the starting blocks. The only complete specification you will ever have is the program code. XP copes with this issue in two ways. Firstly, specifications are presented as a set of stories; a story is half use case, half feature. You have a pile (literally - you're advised to write these on CRC cards) with the top priorities at the top and the lesser ones at the bottom. This also allows XP to cope with changes and additions to the spec: you simply add and remove stories (CRC cards) as required. Secondly, XP accepts that the code is the best source of specification and documentation. Like McCarthy, Beck is a big, big fan of iterative development. The "release early, release often" school has been gaining ground for several years and is widely used in the Open Source community. I long ago came to realise that large monolithic releases just do not work. They are easy to understand and fit into a waterfall development strategy, but they increase the risks: if you miss a release, everything in that release is lost. With a short release cycle, adding a few features at a time, you minimise these risks. Short release cycles do have a downside: organisations that are change averse will require a major cultural shift. Also, every release carries overheads; these are usually fairly fixed for each release (e.g. customer sign off, release notes, source code labelling) and if you increase the number of releases you will increase the amount of time spent on these activities. Further, every release carries a risk factor: a series of small incremental releases could entail more cumulative risk than one big release. Most developers accept testing as a necessary evil. XP has an interesting take on this: write your tests before you code, and write them as automated tests. I cannot say I disagree with either of these, but I do not think it is as easy as Beck makes out.
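The write-the-test-first discipline is easy to demonstrate in miniature. A hypothetical Python sketch - the roman_to_int function and its cases are mine, not taken from the book:

```python
# Test-first: this test is written before the function exists, and fails
# until the simplest implementation that passes it is written below.
def test_roman_to_int():
    assert roman_to_int("I") == 1
    assert roman_to_int("IV") == 4
    assert roman_to_int("MCMXCIX") == 1999

# Only now is the implementation written, kept as simple as the tests allow.
def roman_to_int(s):
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = values[ch]
        # A smaller value before a larger one (IV, CM, ...) is subtracted.
        total += -v if values.get(nxt, 0) > v else v
    return total

test_roman_to_int()  # the automated test is run on every change
```

Whether tests like this can be run "repeatedly and quickly" is exactly the sticking point raised below: this one takes microseconds, a test against a market feed does not.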
A large part of XP depends on you being able to write automated tests that can be run repeatedly and quickly. Sometimes this will not work: I've been on several projects where it just is not possible to automate the tests, e.g. data is not repeatable because it comes from a market feed or an environment sensor; and I've also written programs with run times of several hours. Writing the tests before coding should confer some of the benefits of prototyping, because you are forced to explore the problem before constructing the solution. Beck claims there are four variables in XP - cost, time, quality and scope[2] - and the objective of the planning game is to bring these into equilibrium for the next iteration cycle. Beck encourages a team approach, together with the customer, to the planning game. Beck does not pretend, as so many managers do, that requirements are independent of the time it will take to implement them. I am sure most developers have faced a high priority requirement X which will take N weeks to implement, and a low priority requirement Y which will take N hours: sometimes a little jam today makes a long wait more attractive. This is not to say we always do the easy things first: Beck is right to say tackle the difficult bits first; this way you avoid hitting a show stopper later and have some easy work to look forward to. This is probably the XP idea that gets the most attention, and is undoubtedly not without merit. Personally, I am not sure it's always applicable. Sometimes we just require a good old think, maybe a fiddle and a re-think. I expect working in a pair would make you more prone to get something working without necessarily giving it in-depth thought. But as XP advocates simple design, this is not an issue. In part, I suspect it depends on the temperament of developers: for many, programming in pairs will bring a discipline; for others it will seem like a bind.
Maybe to pair, or not to pair, for any given piece of work, should be a decision taken at the time by a developer. While I agree with the sentiment of this idea, this is where I have my biggest problem with XP. Let me take it as two items to start off with. Simple design: Beck says "produce the absolute simplest solution" and re-work it next time. For me this raises so many questions. What is the simplest design? Let's start throwing away some stuff: Why bother with namespaces? If we need one we'll add it later. Why bother with virtual functions? We can make it virtual later. Why use an abstract base class? That only complicates things; put all the code in the base class, and if we need to derive, we will re-work it. Exception handling: we'll add it when we need it. In fact, we do not need polymorphism or a class hierarchy at all, we can just add tags to the switch statements later. Databases: just add the tables as you need them, we'll normalise later. I cannot accept that it is always in the best interest of the system to look the other way and do something very simple every time. Exception handling in particular is notoriously difficult to retro-fit, yet can, in the long run, significantly enhance the understandability and hence maintainability of a system. I strongly believe that if we pursue the "simplest design possible" we will end up with code which lacks elegance and extendibility. Potentially, this rule violates Bertrand Meyer's "open-closed" rule: the code will not be open to extension (as this is the absolute simplest possible, why should it be?), and will not be closed to other modules, because if another module needs to re-use it we can just refactor it then. I am pleased Beck and others are making refactoring respectable, indeed, almost a buzzword. However, I am not sure that refactoring is possible without some semblance of change to XP principles.
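The "add tags to the switch statements later" style and the polymorphic design it defers look like this side by side. A toy Python sketch of my own, not from the article:

```python
import math

# "Absolute simplest possible": a type tag and a switch-like chain.
# Every new shape means editing this function (and every one like it).
def area_switch(shape):
    kind, data = shape
    if kind == "circle":
        return math.pi * data ** 2
    elif kind == "square":
        return data ** 2
    raise ValueError(kind)

# The refactored, open-closed version: new shapes are added as new
# classes, without touching existing code.
class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

assert area_switch(("square", 3)) == Square(3).area() == 9
```

In a small program the switch really is easier to follow; the question the next paragraph raises is whether, several thousand lines later, anyone will ever get the time to do the second version.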
Faced with several thousand lines of tag-and-switch code (done that way because it is simple, and let's face it, in a small program a switch statement is easier to follow than virtual methods) I may not be able to refactor to any significant degree. I suspect that this simple-design-and-refactor is a substitute for prototyping. It amounts to the same thing: write something, then throw it away when we have learned about the problem. Compare the development cycles for both: My biggest problem with all of this is one of my own rules of thumb built up over several years: get it right first time, because no matter how much you or your boss thinks you can come back and sort it out later, chances are you will not get the time. To give Beck his due, I suspect that his definition of simplest design possible may not be as absolute as mine. An individual's definition of simplest possible will depend on their experience and views. Kevlin Henney has suggested that a solution that computes but is not architecturally elegant is not a solution that works. If we moderate our definition of simplest possible we may overcome many of the problems raised here. Beck is equally absolute on the matter of eliminating duplicate code as part of refactoring. It is worth noting that John Lakos makes a good case for the use of duplicate code. Any team adopting XP would be well advised to prepare for these battles. Collective ownership seems like a good idea, but again I have some worries about it. Could it actually be an excuse for no-ownership? Where an entire team owns a product, that is a good thing: when the new version ships the whole team have something to celebrate; if flaws occur then it's a flaw in the team's baby. But when it comes to collective ownership of the entire code base I see several problems. Some areas of code can be complex, particularly where a knotty business problem is involved.
Here it is quite natural that one or two people become more involved with this aspect of the project. Sometimes they can end up understanding the problem better than the customer. Are we to hold back developers because they are moving ahead of the pack? Are we to demand that customers explain the same functionality to every developer on the team, even if this means them explaining it four, five, or even six times? Collective ownership can also be an excuse to ignore issues in the code. Suppose we have a bit of ugly code, suppose we have a real hack somewhere - which is quite possible because some piece of simple design grew like Topsy - then who is going to sort it out? You may find that each of the four collective owners has a higher priority piece of work, and after all, "if it works why fix it?" Knowing that if a change comes up the developers only have a one in four chance of needing to change that piece of code, the temptation is to play Russian roulette. A developer who has to live with such a piece of code and answer questions about it is far more likely to try to take preventative action. I think some degree of collective ownership is a good thing. It is worth having an under-study for each developer[3] - if only so we can take holidays - and spreading knowledge around is a good thing, but I think the idea that every developer is responsible for every line of code is, erh, too extreme. This is one of the key points of XP which limits its scalability: if every developer must be able to maintain every line, the maximum number of lines in the project is equal to the number of lines the least able developer can handle. Lawyers have been trying to write precise English that covers all situations for a few hundred years, and yet we still have arguments in the courts about what the law means. Actually having a real live customer who you can just ask things of is a real boon. However, this requires a change in how the company operates and bills.
If every time the customer tells you something you have to raise a change request, you will not get anywhere. You must also be able to have a frank discussion with a customer. I remember one project where, after a discussion with the "architect", I went to talk to an onsite customer about a problem but had to ask questions so that they did not become aware that we had uncovered a major deficiency. If companies really want to use an on-site customer, they often have to change their way of thinking and billing. This may be impossible unless the developers and customers actually work for the same company. Most of my professional career has been spent in a production state. I have no problems with this rule; it imposes a certain discipline and is not without its drawbacks. For example, you may not get the time to perform a major refactoring, or to explore a new technology that is potentially useful[4]. Sometimes this is not possible. I recently worked on a system where we had six months to prepare for a new software release. Both our software and the client's went live on the same day; releasing ours before theirs was pointless, but to release later carried a very real prospect that the customer would use a competitor. Beck advocates a strict 40-hour week[5]: developers are, to some degree, brought up in a culture of long hours, all-night hacks, and racing deadlines. Sometimes these are necessary; sometimes, just sometimes, they can make a difference. Overall though, doing much more than forty hours a week on a regular basis takes its toll. On the subject of holidays he is right again: a break for a week or two can do wonders for the creative process. Just because I am away from the code-face does not mean my brain is diminishing; if anything, I've used these opportunities to look at my problems in a different light and seek inspiration.
Beck talks of the importance of a team which fits together, and he is right about this: a team which thinks alike, shares a common vision and respects one another will be an order of magnitude more productive than a team of individuals - the whole being more than the sum of the individuals and all that. However, Beck seems too quick to fire people from the team; his method of getting a team to gel is to fire the people who do not fit. This is the wrong way of approaching the problem, albeit an extreme one. Not only does much employment law prevent this, it is not the right way to treat people. Sometimes it is the only way to deal with a situation, but generally, if you go round firing people who do not fit, you're practising management by terror. Beck does not talk about recruitment; surely it is better to hire people who fit in? Have the whole team involved in the interview process: is everyone happy with hiring this person? Beck recognises that the best project teams are all seated physically close together, and sometimes the best way to achieve this is to physically move the desks yourself. However, in the bigger companies I have worked for this just is not on, period. Yes, companies with this attitude may deserve to fail. Beck also advocates developers taking responsibility for the process: ownership of the process is an important part of owning the final product. But this is not without problems. Some organisations have a process, and they can be very attached to the process, especially if it confers ISO900X or CMM level N status on the organisation. Secondly, this could lead to navel gazing: is the program broken or the process? Small companies can usually be more flexible with this kind of thing while bigger companies are less flexible; after all, you may not be the only development team in the organisation, and what works for one may not work for another - large companies frequently prefer lowest-common-denominator approaches.
Changing the process can bring you into all sorts of office politics that can potentially kill the project before you start. Finally here, I agree with his view on toys and food, but I do not think you can legislate for them. Moving the furniture, owning the process and playing with toys are all part of binding a team together, building it and empowering it. The shared vision / shared goals concept which runs through Beck and McCarthy's work is where I believe the silver bullet is to be found. Somewhere in the book, Beck states that XP is likened to "observing programmers in the wild." As I said at the top, 75% of the book has me saying "I've done that", so I agree with this comment. However, simply observing and documenting does not a methodology make. This is my big problem with XP as it stands. I feel it is a bunch of observations that Beck has tried to fit into an over-reaching methodology. The glue that holds this together is his tales and anecdotes. I can tell a good tale myself, and I have been around software long enough to have a lot of anecdotes. I could also try to come up with my own philosophy based on this. Beck and I are not unique here; I think most developers could. Beck and I differ from most because we are prepared to put it on paper and stick our necks out. But I do not think this makes a methodology. In Dynamics of Software Development, Jim McCarthy gives "54 rules for delivering great software." I would have much preferred Beck's thoughts if they were presented in a similar fashion. McCarthy speaks a lot of sense and injects a lot of ideas, but he does not try to elevate this to a methodology. Steve Maguire takes a less itemised approach than McCarthy but has a similar style. As Maguire and McCarthy's books come out of the same epoch of Microsoft history they are quite complementary, but they never claim to be a methodology with the answers. I wish Beck could have taken more of an item-by-item approach, rather than try to capture the intellectual high ground of a methodology.
(I picked up a copy of The Pragmatic Programmer at JaCC and although I have only just started to read it, I hope it can be added to this short list.) One lesson I do not find in XP Explained, but which forms part of my personal lore, is that "methodologies do not work" - or rather, they do not work straight out of the box; they must be adapted to any given environment. I do not think anyone will ever prove that Beck's XP works, or does not work. Beck has provided too many "get-out clauses" to prevent it being quantified. XP will appeal to the hacker in us all - if we can just ditch those pesky unit tests. These get-out clauses include: Chapter 23: "the full value of XP will not come until all the practices are in place.... you can get significant benefits from the parts of XP... there is much more when you put all the pieces in place" - in other words, if you measure an "XP" project and it doesn't show the claimed benefits, it may just be that you missed some little bit. (I never worked out how he squares this with the preface, which says "XP is lightweight... low-risk, flexible...." If it is lightweight and flexible, why must I implement 100%? And how can that possibly be low-risk?) Chapter 24: "accepting responsibility for the process" - sometimes you do not have control of the process. Sometimes you are not allowed to change it - ever worked for an ISO9001 approved organisation? Chapter 25 is full of "when not to use XP": I feel like I am reading a tautology, you know, the "this is not true" type of thing. I think Beck sets out to define a very narrow domain space for XP and of course, if you set the parameters so tight, it is going to work. So much of XP seems to centre on Kent Beck and/or Ward Cunningham's personality and personal style: I am sure they are great people to work with and real thinkers, but I cannot help thinking that makes it difficult to really duplicate it in other teams[6]. XP is intended for small to medium projects.
I hate to say this, but these are not the ones where you need lots of methodology; the real test of a methodology is large projects. Having said this, I actually think large projects are impossible and should not even be attempted - another rule of my lore: "inside every big project is a small project struggling to get out." How applicable must a methodology be to be worthy of the name? By placing all these restrictions on XP, is it a methodology or just a recipe for Beck's own projects? Beck also includes a get-out clause to dismiss those, like me, who are frightened by XP: in the preface he acknowledges this and states that these are all old techniques. He's right of course, but nobody has taken them to such "extremes" before, and many things which are good become bad when taken excessively, never mind taken to extremes. I think there are some circumstances where XP will work. I do not think there is anywhere that XP will work exactly as Beck describes it, unless of course you hire Beck and Cunningham. Given his get-out clauses, I think this makes it almost impossible to reproduce and measure XP. In the financial sector, it is very common to have developers and customers working in tandem. I have literally seen trading floors with developers sitting next to traders. The developer codes something, and when it does what the trader wants it is released[7]. It is possible to have a team of developers working with a team of traders. The pressure on developers can be intense, and it will never satisfy the 40-hour week rule, but I think it satisfies most of the other points: always in production, simplest possible design (because they will throw most things away), coders all being familiar with the code (because there is not a lot of it), etc. I can also see some small consultancy teams working in this fashion - the parachute-in type consultants who arrive with their kit-bags and set up shop for six months in a strange town.
Again, while they may only work forty hours a week, since they spend four hours travelling on Monday morning and Friday afternoon they must do long days. What worries me more is that some consultancies will believe that XP is another methodology they must provide to their clients; so a few people will "learn XP" and it will sit on the list of available methodologies next to RAD, SSADM, Prince and the ISO9001 badge. The same managers will be sent XP and SSADM projects; as most of the actual developers are just hired guns, these people will change, and hence the consultancy will miss the entire point of XP. There are also places where XP will never work. Some organisations are change averse: if you have an organisation that will make you document every line of code you change, then XP is not going to work; the reliance on refactoring means that organisations must accept that the code base will change frequently. Generally, I would prefer to cherry-pick XP in the same fashion I cherry-pick McCarthy and Maguire's books. I do not think I am alone here; I think others would agree with this. Originally, I did not believe extreme programming was extreme. It did not make any sense to me; at best it sounded like hacking - "let's be extreme... let us only code." Having read the book, my opinion has changed: there is something there. It is extreme, because "if something is good we do it to the extreme, 100%, no prisoners, no compromise". I do not think you can develop software like that. So much software development is about judgement calls: "do we optimise now or later", "do we take the refactoring hit now or later", "do I implement a 20% solution now to keep them working or bite the bullet." But more than Extreme Programming as a technique, Extreme Programming is its own God, trying to give select programmers the kudos of joining the Extreme Sports club. XP would never have received the hype and publicity if it was called "Beck Development Methodology" or "Chrysler-style programming."
Nor would the book have sold so many copies if it had the title "27 ideas to spice up your development", which I think would be a much more accurate title. (Even the abbreviation panders to our culture's fascination with the letter "X": X-programming like X-Files, or Microsoft X-box, X-wing bomber, XR3i, etc.) I have come to believe that XP is all about XP; not so much about programming, more about a good name which someone thought up. Beck could have written a very interesting book about the Chrysler C3 team's experience, or about his experience in the financial programming world[8]. As a book about development I think it is good; as a bunch of tales with parables it is good; as a set of heuristics it is good. But I feel the name came first, and having decided to hang a methodology on this name, Beck fails to present any conclusive and quantifiable results. So much of this book is a collection of tales: stories about him learning to drive, imaginary conversations between programmers, and so on. Now that sounds negative; I am sorry, I actually like the sound of XP. In some ways, Beck's book reminds me of Fred Brooks' Mythical Man-Month: both present an Emperor's-new-clothes view of current software development techniques. Before I rush to adopt XP I want some changes: first, let's call the XP described in Beck's book "Beck XP"; second, let us define XP as "programming-centric development", putting developers in the driving seat if you like. Third, I would like to relax the get-out clauses; XP must adapt to whatever environment it is in, after all, "developers must control the process." With these modifications I am happy with XP - actually, I am very happy. One final request: can we please recognise the books by McCarthy and Maguire as pre-XP books about XP?
Jim McCarthy: Dynamics of Software Development (1 55615 823 8), Microsoft Press, 1995
Steve Maguire: Debugging the Development Process (1 55615 650 2), Microsoft Press, 1994
Steve McConnell: Code Complete (1 55615 484 4), Microsoft Press, 1993
Fred Brooks: The Mythical Man-Month (0 201 83595 9), Addison-Wesley, second edition, 1999
Kent Beck: eXtreme Programming Explained (0 201 61641 6), Addison-Wesley, 1999
Andrew Hunt and David Thomas: The Pragmatic Programmer (0 201 61622 X), Addison-Wesley, 1999

[1] Steve McConnell's Code Complete also comes from the same, Microsoft, stable and although generally highly regarded I have never found it says anything that Pressman, or any other of the staple classic software engineering writers, had not already said, albeit in a more academic style.
[2] It's interesting to contrast these with McCarthy's three: features, resources, time. I suspect a little bit of analysis would reconcile these into a single equation.
[3] Fred Brooks talks about a co-pilot who works with the surgeon (don't blame me, Brooks mixed the metaphors) - the co-pilot is not responsible for the work but can take over if need be.
[4] McCarthy answers this problem by proposing "Scouts": developers who are allowed to move away from production development for a period of time to explore a new technology or experiment with a different approach.
[8] Programming for financial markets is, I believe, a much neglected subject. The City in London, Wall Street in New York, and countless financial institutions and software providers the world over service a market which is massive. Yet most of our documented software projects are drawn from, well, computing itself. If anyone wants to collaborate on this, give me a call - I've been round this market myself.
https://accu.org/index.php/journals/489
I just noticed that microdom.Element.getElementsByTagName doesn't behave like domhelpers.getElementsByTagName; that is, it's not recursive. (This is even documented in the method's docstring.) Also, microdom.Document.getElementsByTagName has yet another implementation, and this one is recursive, but incorrect. *Neither of these implementations obeys w3c*, which domhelpers.getElementsByTagName does. It would be trivial to change both to use that code instead of doing it on their own, but microdom doesn't import domhelpers, and I assume this is intentional. Should microdom import domhelpers and use that code? We'll gain correctness, and maybe maintainability, but create a circular dependency. Alternately, the method could be moved into microdom as a top-level function, which the respective methods use. Domhelpers could import this method into its own namespace, keeping everyone happy.
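For what it's worth, the w3c-required recursive behaviour is only a few lines. A generic sketch of the idea; the childNodes/tagName attribute names and the tiny Node stand-in are illustrative, not the actual microdom or domhelpers API:

```python
def get_elements_by_tag_name(node, name, results=None):
    """Depth-first, recursive search returning every descendant element
    whose tagName matches -- the behaviour the DOM spec requires."""
    if results is None:
        results = []
    for child in getattr(node, "childNodes", []):
        if getattr(child, "tagName", None) == name:
            results.append(child)
        get_elements_by_tag_name(child, name, results)  # recurse into grandchildren
    return results

# A minimal stand-in node type, just to exercise the traversal.
class Node:
    def __init__(self, tagName, children=()):
        self.tagName = tagName
        self.childNodes = list(children)

tree = Node("html", [Node("div", [Node("p")]), Node("p")])
assert len(get_elements_by_tag_name(tree, "p")) == 2  # finds the nested <p> too
```

A non-recursive version would find only one of the two `p` elements above, which is exactly the Element/Document discrepancy being reported.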
http://twistedmatrix.com/pipermail/twisted-python/2003-February/002744.html
To find the number occurring an odd number of times, the Java code is as follows:

public class Demo {
    static int odd_occurs(int my_arr[], int arr_size) {
        int i;
        for (i = 0; i < arr_size; i++) {
            int count = 0;
            for (int j = 0; j < arr_size; j++) {
                if (my_arr[i] == my_arr[j])
                    count++;
            }
            if (count % 2 != 0)
                return my_arr[i];
        }
        return -1;
    }

    public static void main(String[] args) {
        int my_arr[] = new int[]{ 34, 56, 99, 34, 55, 99, 90, 11, 12, 11, 11, 34 };
        int arr_size = my_arr.length;
        System.out.println("The number that occurs odd number of times in the array is ");
        System.out.println(odd_occurs(my_arr, arr_size));
    }
}

Output:
The number that occurs odd number of times in the array is
34

A class named Demo contains a static function named 'odd_occurs'. This function iterates through the integer array and, for each element, counts how many times it occurs in the array; the first element whose count is odd is returned as output. In the main function, an integer array is defined, and the length of the array is assigned to a variable. The function is called by passing the array and its length as parameters. A relevant message is displayed on the console.
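The nested loops above make the search O(N²). For reference, the same answer can be computed in a single counting pass; a Python sketch of that alternative (mine, not from the article):

```python
from collections import Counter

def odd_occurs(arr):
    """Return the first element (in array order) whose total count is odd,
    or -1 if none exists. One counting pass replaces the nested loops."""
    counts = Counter(arr)
    for x in arr:
        if counts[x] % 2 != 0:
            return x
    return -1

print(odd_occurs([34, 56, 99, 34, 55, 99, 90, 11, 12, 11, 11, 34]))  # 34
```

Like the Java version, it returns the first odd-count element in array order (34 occurs three times here), so both agree on the sample input.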
https://www.tutorialspoint.com/java-program-to-find-the-number-occurring-odd-number-of-times
Python Client for Google BigQuery

Python idiomatic client for Google BigQuery

Quick Start

$ pip install --upgrade google-cloud-bigquery

Authentication

With google-cloud-python we try to make authentication as painless as possible. Check out the Authentication section in our documentation to learn more. You may also find the authentication document shared by all the google-cloud-* libraries to be helpful.

Using the API

Querying massive datasets can be time consuming and expensive without the right hardware and infrastructure. Google BigQuery (BigQuery API docs) solves this problem by enabling super-fast, SQL-like queries against append-only tables, using the processing power of Google’s infrastructure.

Load data from CSV

import csv
from google.cloud import bigquery

See the google-cloud-python BigQuery API documentation to learn how to connect to BigQuery using this Client Library.

Download Files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/google-cloud-bigquery/
(Analysis by Jonathan Paulson - jonathanpaulson@gmail.com)

If a faster cow is behind a slower cow, the faster cow is eventually going to catch up and join the slower cow's group (or join a group in the middle). So we want to count the number of cows who have no slower cows ahead of them; this is the number of cows who won't join another group (and hence will start their own group).

The most obvious way is to go through each cow and check if any of the cows ahead of them are slower. But this is too slow: there are about $N^2$ pairs of cows, and $N = 10^5$. So $N^2$ is about $10^{10}$, and computers only do about $10^9$ operations per second (which usually translates to about $10^7$ iterations of a simple loop in one second of time).

Luckily, there is a faster way: start from the back, and keep track of the slowest cow as you go. This only takes about $N$ operations, which is very fast.

Here is my C++ code for the fast approach:

#include <cstdio>
#include <vector>
#include <algorithm>
using namespace std;
typedef long long ll;

int main() {
  ll n;
  scanf("%lld", &n);
  vector<ll> S;
  for(ll i=0; i<n; i++) {
    ll x, s;
    scanf("%lld %lld\n", &x, &s);
    S.push_back(s);
  }
  ll ans = 1;
  ll slow = S[n-1];
  for(ll i=n-2; i>=0; i--) {
    if(S[i] > slow) {
      // cows group up
    } else {
      ans++;
    }
    slow = min(slow, S[i]);
  }
  printf("%lld\n", ans);
}
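The same back-to-front scan is easy to express in Python. This is a direct translation of the approach above, reading speeds from a list (ordered back to front, as in the input) instead of stdin:

```python
def count_groups(speeds):
    # Scan from the front-most cow (last element) backwards, tracking the
    # slowest speed seen so far. A cow starts a new group iff no cow ahead
    # of it is strictly slower.
    groups = 1
    slowest = speeds[-1]
    for s in reversed(speeds[:-1]):
        if s <= slowest:
            groups += 1
        slowest = min(slowest, s)
    return groups

print(count_groups([5, 3, 4, 2, 6]))  # → 2
```

In that example only the front cow (speed 6) and the slow cow behind it (speed 2) start groups; everyone else eventually piles up behind the speed-2 cow.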
http://usaco.org/current/data/sol_cowjog_bronze.html
On Tue, 2009-07-28 at 15:23 +0530, Harshavardhana wrote:
> Hi Daniel,
>
> That release and above have few enhancements for VM related work,
> but the Fedora maintainer has not yet updated to our latest releases.
> I have communicated this to him but yet to receive a reply and when
> he will update with the latest release. I think you can remove the
> value its not a hard requirement.

Thanks, I committed the patch below

Cheers,
Mark.

From: Mark McLoughlin <markmc redhat com>
Subject: [PATCH] Reduce glusterfs dependency to 2.0.1

* libvirt.spec.in: require glusterfs-client >= 2.0.1
---
 libvirt.spec.in | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/libvirt.spec.in b/libvirt.spec.in
index a5b861d..918d64c 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -97,7 +97,7 @@ BuildRequires: util-linux
 BuildRequires: nfs-utils
 Requires: nfs-utils
 # For glusterfs
-Requires: glusterfs-client >= 2.0.2
+Requires: glusterfs-client >= 2.0.1
 %endif
 %if %{with_qemu}
 # From QEMU RPMs
--
1.6.2.5
https://listman.redhat.com/archives/libvir-list/2009-July/msg00911.html
Commandline User Tools for Input Easification

Project description

CUTIE

Command line User Tools for Input Easification

A tool for handling common user input functions in an elegant way. It supports asking yes or no questions, selecting an element from a list with arrow keys, forcing the user to input a number, and secure text entry, while having many customization options. For example, the yes-or-no input supports forcing the user to match case, tab autocomplete, and switching options with the arrow keys. The number input allows setting a minimum and a maximum, entering floats, or forcing the user to use integers. It will only return once the user inputs a number in that format, showing a warning to them if it does not conform. It should work on all major operating systems (Mac, Linux, Windows).

Usage

These are the main functions of cutie. example.py contains an extended version of this, also showing off the select_multiple option.

import cutie

if cutie.prompt_yes_or_no('Are you brave enough to continue?'):
    # List of names to select from, including some captions
    names = [
        'Kings:',
        'Arthur, King of the Britons',
        'Knights of the Round Table:',
        'Sir Lancelot the Brave',
        'Sir Robin the Not-Quite-So-Brave-as-Sir-Lancelot',
        'Sir Bedevere the Wise',
        'Sir Galahad the Pure',
        'Swedish captions:',
        'Møøse']
    # Names which are captions and thus not selectable
    captions = [0, 2, 7]
    # Get the name
    name = names[
        cutie.select(names, caption_indices=captions, selected_index=8)]
    print(f'Welcome, {name}')
    # Get an integer greater or equal to 0
    age = cutie.get_number(
        'What is your age?', min_value=0, allow_float=False)
    # Get input without showing it being typed
    quest = cutie.secure_input('What is your quest?')
    print(f'{name}\'s quest (who is {age}) is {quest}.')

When run, as demonstrated in the gif above, it yields this output:

Are you brave enough to continue?
(Y/N) Yes
Kings:
[ ] Arthur, King of the Britons
Knights of the Round Table:
[ ] Sir Lancelot the Brave
[x] Sir Robin the Not-Quite-So-Brave-as-Sir-Lancelot
[ ] Sir Bedevere the Wise
[ ] Sir Galahad the Pure
Swedish captions:
[ ] Møøse
Welcome, Sir Robin the Not-Quite-So-Brave-as-Sir-Lancelot
What is your age? 31
What is your quest?
Sir Robin the Not-Quite-So-Brave-as-Sir-Lancelot's quest (who is 31) is to find the holy grail.

Installation

With pip from pypi:

pip3 install cutie

With pip from source or in a virtual environment:

pip3 install -r requirements.txt

Documentation

All functions of cutie are explained here. If something is still unclear or you have questions about the implementation, just take a look at cutie.py. The implementation is rather straightforward.

get_number

Get a number from user input. If an invalid number is entered, the user will be prompted again. A minimum and maximum value can be supplied; they are inclusive. If the allow_float option, which is True by default, is set to False, it forces the user to enter an integer.

Getting any three digit number, for example, could be done like this:

number = cutie.get_number(
    'Please enter a three digit number:',
    min_value=100,
    max_value=999,
    allow_float=False)
# which is equivalent to
number = cutie.get_number('Please enter a three digit number', 100, 999, False)

Arguments

Returns
The number input by the user.

secure_input

Get secure input without showing it in the command line. This could be used for passwords:

password = cutie.secure_input('Please enter your password:')

Arguments

Returns
The secure input.

select

Select an option from a list. Captions or separators can be included between options by adding them as an option and including their index in caption_indices. A preselected index can be supplied.
In its simplest case it could be used like this:

colors = ['red', 'green', 'blue', 'yellow']
print('What is your favorite color?')
favorite_color = colors[cutie.select(colors)]

With the high degree of customizability, however, it is possible to do things like:

print('Select server to ping')
server_id = cutie.select(
    servers,
    deselected_prefix=' ',
    selected_prefix='PING',
    selected_index=default_server_ip)

Arguments

Returns
The index that has been selected.

select_multiple

Select multiple options from a list. By default it shows a "confirm" button. In that case, space bar and enter select a line. The button can be hidden; in that case, space bar selects the line and enter confirms the selection. This is not in the example in this readme, but in example.py.

packages_to_update = cutie.select_multiple(
    outdated_packages,
    deselected_unticked_prefix=' KEEP ',
    deselected_ticked_prefix=' UPDATE ',
    selected_unticked_prefix='[ KEEP ]',
    selected_ticked_prefix='[UPDATE]',
    ticked_indices=list(range(len(outdated_packages))),
    deselected_confirm_label=' [[[[ UPDATE ]]]] ',
    selected_confirm_label='[ [[[[ UPDATE ]]]] ]')

Arguments

Returns
A list of indices that have been selected.

prompt_yes_or_no

Prompt the user to input yes or no. This again can range from very simple to very highly customized:

if cutie.prompt_yes_or_no('Do you want to continue?'):
    do_continue()

if cutie.prompt_yes_or_no(
    'Do you want to hear ze funniest joke in ze world? Proceed at your own risk.',
    yes_text='JA',
    no_text='nein',
    has_to_match_case=True,  # The user has to type the exact case
    enter_empty_confirms=False,  # An answer has to be selected
)

Arguments

Returns
The bool that has been selected.
Changelog

0.2.3 [dev] - PEP8 Compliance by Christopher Bilger
0.2.2 - Fixed Python in examples
0.2.1 - Expanded readme descriptions
0.2.0 - select_multiple - Tweaks to the readme
0.1.1 - Fixed pypi download not working
0.1.0
0.0.7
0.0.x - Initial upload and got everything working

Contributing

If you want to contribute, please feel free to suggest features or implement them yourself. Also please report any issues and bugs you might find! If you have a project that uses cutie please let me know and I'll link it here!

Authors

- Main project by me.
- Windows support by Lhitrom.
- caption_indices and tidbits by dherrada.
- PEP8 Compliance by Christopher Bilger.

License

The project is licensed under the MIT-License.

Acknowledgments

GNU Terry Pratchett

Project details

Release history

Release notifications | RSS feed

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/cutie/
NAME
roar_vs_buffer - Use buffered mode streams

SYNOPSIS
#include <roaraudio.h>

int roar_vs_buffer(roar_vs_t * vss, size_t buffer, int * error);
ssize_t roar_vs_get_avail_read(roar_vs_t * vss, int * error);
ssize_t roar_vs_get_avail_write(roar_vs_t * vss, int * error);
int roar_vs_reset_buffer(roar_vs_t * vss, int writering, int readring, int * error);

DESCRIPTION
These functions control the buffered mode of the VS object. Using this mode is not recommended.

roar_vs_buffer() initializes the buffered mode. It takes the size for the buffer as argument. The size should be a power of two. Common values include 2048 and 4096.

roar_vs_get_avail_read() and roar_vs_get_avail_write() return the amount of free space in the read and write buffer.

roar_vs_reset_buffer() resets the read and/or write buffer. This means the data in the buffers is discarded. This does not happen frame aligned and may result in broken audio.

Buffers are not flushed automatically. To do this use roar_vs_iterate(3) or roar_vs_run(3).

PARAMETERS
vss
The VS object to be used.

buffer
The size of the buffer to be used in bytes.

writering, readring
Selects the buffer to reset. Must be ROAR_VS_TRUE or ROAR_VS_FALSE.

RETURN VALUE
roar_vs_get_avail_read() and roar_vs_get_avail_write() return the free space in the corresponding buffer. On error, -1 is returned.

EXAMPLES
FIXME

SEE ALSO
roarvs(7), libroar(7), RoarAudio(7).
http://manpages.ubuntu.com/manpages/precise/man3/roar_vs_buffer.3.html
International Women's Basketball Day

Here's some artwork for that day (initially designed for our basketball team). Licensed under CC BY-SA 2.0.

Since the death of Google Reader (may the googler who doomed it burn in non-existing hell) I've been looking for a new RSS reader. What I need is:.

GNU coreutils are so elusive:

[b0x123a@i6n66 ~]$ rm bin/
rm: cannot remove `bin/': Is a directory
[b0x123a@i6n66 ~]$ rm bin/ -r
rm: cannot remove `bin': Not a directory.

If you use git diff-index to get a diff against the staging area, there's a saner way: git diff --cached. git: baffling users unexpectedly since 2005.

make: *** No rule to make target `sense'. Stop.

How to re-format output of hg help <command> for use in the Mercurial zsh_completion file.

If you watch anything in a foreign language, here's a simple trick for you.

Herr Olof han rider att möta sin brud
långt borta i främmande land
skön Jungfrun hon sömmar en gyllene skrud
hon väver de skira gullband.

I'm just glad I haven't decided to buy the "light-less" (a bit cheaper) version of that cover, because the LED light is the only thing I like about it. Oh, and the Reader is great. Just choose the cover wisely, remembering what I wrote here.

If your `/etc/init.d/postgresql-8.4 start` fails after a timeout, but the server runs in the background unharnessed, and `/etc/init.d/postgresql-8.4 restart` fails with "Socket conflict", make sure you haven't deleted the "postgres" database, and recreate it in that case.
If you have tried to use Flask sessions and got something like this:

File "/usr/lib64/python2.7/site-packages/flask/app.py", line 889, in __call__
    return self.wsgi_app(environ, start_response)
File "/usr/lib64/python2.7/site-packages/flask/app.py", line 871, in wsgi_app
    with self.request_context(environ):
File "/usr/lib64/python2.7/site-packages/flask/app.py", line 836, in request_context
    return _RequestContext(self, environ)
File "/usr/lib64/python2.7/site-packages/flask/ctx.py", line 33, in __init__
    self.session = app.open_session(self.request)
File "/usr/lib64/python2.7/site-packages/flask/app.py", line 431, in open_session
    secret_key=key)
File "/usr/lib64/python2.7/site-packages/werkzeug/contrib/securecookie.py", line 308, in load_cookie
    return cls.unserialize(data, secret_key)
File "/usr/lib64/python2.7/site-packages/werkzeug/contrib/securecookie.py", line 255, in unserialize
    mac = hmac(secret_key, None, cls.hash_method)
File "/usr/lib64/python2.7/hmac.py", line 133, in new
    return HMAC(key, msg, digestmod)
File "/usr/lib64/python2.7/hmac.py", line 72, in __init__
    self.outer.update(key.translate(trans_5C))

...you might be confused. Fear not! The reason may lurk in

from __future__ import unicode_literals

Just declare your SECRET_KEY as a `bytes` object and get happy again!

SECRET_KEY = b'smthverrysekret'

Here's two Mercurial hooks I wrote for my work. One is a reminder to mention Trac tickets in commit messages, and the other notifies a developer when someone (not him) touches his part of the project.

Just a (probably) useful example of a nested Python generator written using a recursive *yield* expression.

While experimenting with nvidia/nouveau I've decided to write a module for app-admin/eselect to eselect the required /etc/X11/xorg.conf from a list. So here it is. Get it, use it, report bugs in it (here or, better, open an issue on GoogleCode). Ebuilds are in the rion overlay.

An interesting idea came to me several days ago.
I have to write a C++ code generator (sourcing data from an SQL database), so I installed Jinja2, sharpened text editors and then asked myself: „Wouldn't it be beautiful to see the result just as I type Python code, portion by portion? There's a use for two monitors: the first one displays coding, the second one keeps freshly generated result always ready for.

If you want to set up a torrent-downloading server and control it with GUI-clients (even on Windows), read my article on Gentoo Wiki. If you're using another *nix or Linux distro, you'll need to think a little bit, of course, but nevertheless one might find it useful.

Here's a brief summary of one hour on **#radeon** IRC channel:

Added later: the described technique has no use if you're running kernel 2.6.30 and later. Just compile DRM-modules and have fun.

Couldn't find a more or less official description in English. Kind of weird (-:E

Vista wiped out, Gentoo Wiki article started

If you have burnt some .iso-image with a known md5-checksum, you can verify that the disk is valid and was burnt properly:

dd if=<your-CD/DVD-ROM-device> | md5sum

Example:

# dd if=/dev/hdc | md5sum && md5sum /mnt/win_e:/software/linux/systemrescuecd-x86-1.0.4.iso
68f9c2d885d95c82bfe6c7df736ae0a3 -
<dd output skipped>
68f9c2d885d95c82bfe6c7df736ae0a3 /mnt/win_e:/software/linux/systemrescuecd-x86-1.0.4.iso

As you can see, the md5sums are the same, so the disk is valid. If someone knows how it could be done under MS Windows, please comment here.

If you plan to use the Matplotlib plotting library, remember that it has vast, but not-so-well organized documentation, so when you feel yourself familiar with MPL basics, move to the examples gallery to find illustrated howtos.

"HowTo publish a book just for money", the third stereotype edition

Anna Mhoireach - Into Indigo [1996]

Let's suppose you have two related types of objects in your Django app. Call them Object and SuperObject, related ManyToMany'ly.
And when you add/change a SuperObject in the Django admin, all Objects are available for selection. But you need to limit the choices, and limit them dynamically (perhaps depending on the current user). Here's a quite straightforward but working technique.

Just a note for those who look for info on Gentoo & PostgreSQL (like I did today): Got stable now, old-style packages are masked and will be removed.

Now I have a Linux-server with external IP at my (almost my) disposal. Python, Django, PostgreSQL, nginx... So I can do everything I want. All that I need now is to decide what do I want (-:E Any suggestions for me to implement?

Hardingrock - Grimen (2007)

If one had learnt about the Stone of Scone, browsing pages about Discworld dwarf bread, just in several hours he would stumble across this Stone in Wikipedia, reading about a year from Heidevolk's song title

Ӱлгер - Агын Хустар

Brave Olav Nils III the king penguin became a Colonel-in-Chief and Norwegian knight Read more...

Schelmish - Igni Gena (2004)

Fresh impressions. Low-level programming trap — writing code for half of a day instead of finding a suitable ready-to-use module in five minutes. High-level programming trap — looking for a suitable module for a few hours instead of writing your own code in a quarter of an hour.

Now IMified works at last, so I can "spread the word" about it. But... it's too late and I'm too lazy to type all that you can read on the off.site… (-;E So go right there and find out how you get notes, ToDos, reminders etc in your Jabber! (-:E

I am a great warrior of OpenSource, indeed. On Thu, Oct 18 Kubuntu and Xubuntu 7.10 were released, and in that evening I downloaded all eight .iso-images and put them out on the local network for people who cannot download them by themselves. And today I managed to re-upload two images to another server for Far East Ubuntu users (-%E

Веснянка - Ой, заграли музиченьки [2003]

Raud-Ants is a Russian (folk metal) band from Ingria that sings in Votic (!!!)
Ásmegin - Hin Vordende Sod & Sø [2003]

The Orthoptera are the only insects considered kosher in Judaism (*Wikipedia*)

The unexpected result of googling with my name: this pdf-file. Guess what? Just a casual division of the Swedish word *gangsterkrig* (gangster war). I have a genuinely Swedish name (-:E Google has found only the pdf-file, nothing about me 4-:E

Does anybody know what font this is and where to get a similar one? An excerpt from a Tocharian manuscript

Winter has come at last! Now it's time to listen to Nordic metal (-%E Yes, it sounds great with the new sound system!

Ensamheten, tecken av kraft,
Ensamheten, tecken av kraft,
Att allt se, tecken av kraft,
Evigt liv, tecken av kraft...

Finntroll - Jaktens Tid [2001]

There's a good word in Old Icelandic, hraustr (meaning brave, strong). Sounds fitting. It's believed to be derived from an Indo-European root, and this root is present in Russian as "крот" ([krot], a mole).

Ancient Teutons and Slavs seem to me to have had a little bit different conception of strength and valour (-%E
https://skrattaren.bitbucket.io/
seneca-vcache

versioned caching plugin for seneca

npm install seneca-vcache

The Node.js Seneca versioned caching module plays nicely with multiple memcached instances, and allows Seneca apps to scale. (See chapter 8 of my book for details, or read How key-based cache expiration works)

Support

If you're using this module, feel free to contact me on twitter if you have any questions! :) @rjrodger

Current Version: 0.2.2

Tested on: Node 0.10.6, 0.8.7, Seneca 0.5.9

Quick example

This module works by wrapping the data entity actions (role:entity, cmd:save, ... etc). You just need to register it:

var seneca = require('seneca')()
seneca.use('memcached')
seneca.use('vcache')

Then just use data entities as normal. Except things will be a lot faster.

Install

npm install seneca
npm install seneca-memcached
npm install seneca-vcache

You'll need the seneca-memcached plugin as a dependency. You'll also need memcached.

Actions

plugin:vcache, cmd:stats

Returns a JSON object containing the current hit/miss counts of the cache.

Options

Here's how to set the options (the values shown are the defaults):

seneca.use('vcache',{
  prefix: 'seneca-vcache',
  maxhot: 1111,
  expires: 3600
})

Where:

- prefix: prefix string to namespace your cache (useful if your cache is used by other things)
- maxhot: the maximum number of hot items to store in the running node process memory
- expires: how long to store items (in seconds)

Test

cd test
mocha store.test.js --seneca.log.print

Also

cd test
memcached -vv
mongod --dbpath=db
node n1.js --seneca.log=type:plugin
node n2.js --seneca.log=type:plugin
https://www.npmjs.org/package/seneca-vcache
An Introductory Guide to Managing WordPress with WP-CLI

wp config

wp config is a namespace for commands that deal with WordPress configuration.

wp config list lists all the configuration variables.

wp config create — as we said — creates a configuration file with the variables we provide, like wp config create --dbname=somedb --dbuser=someuser --dbpass=somepass, and other variables, as outlined in the docs.

wp config get (for example, wp config get table_prefix) fetches specific config variables from wp-config.php.

wp config set, similarly, sets config variables. More of the wp config details can be found here.

wp cap is interesting for administering user roles and capabilities. We can add and remove capabilities from particular roles.

wp cron is a command namespace for testing, running and deleting WP-Cron events. wp cron event list, for example, would give us output looking something like this:

Then we could delete events with something like wp cron event delete wsal_cleanup — or reschedule them, etc.

Sometimes, in the course of updating content, developing, and making changes, we’ll find that refreshing a WordPress page will not show the changes we made. Many times this has resulted in a frantic search, trying to find what we did wrong. Often it’s a cache issue. The WordPress Object Cache, by default, isn’t persistent, so the need to clean the object cache will be exacerbated by the use of plugins that persist the object cache across requests (and this is usually the case).

wp cache is a namespace that contains commands for handling the WP Object Cache. wp cache flush is a command that flushes the whole cache. It’s a no-brainer — a simple, oft-used command that doesn’t require any other parameters, and purges everything from the cache.
wp cache add, wp cache delete, wp cache get, wp cache set, wp cache replace and other commands make it possible to list, inspect, add, change or delete specific values from the object cache. WordPress transients are another element of WP caching strategy, which is persistent by default, and can play a part in WordPress’s overall performance. It’s not unheard of that many plugins liberally use WordPress transients, which can get cluttered and slow down the website. The wp transient namespace contains commands to delete, get or set transients. Another element in the WordPress caching system that sometimes requires flushing, and has probably caused hours and hours of confusion for the beginners, are WordPress permalinks. wp rewrite — wp rewrite flush in particular — makes it possible to flush rewrite rules (permalinks). We can also list rewrite rules. wp db contains commands for managing a WordPress database. Insights, repair, optimization, search, various queries. We can also export or import the database. wp eval and wp eval-file can be used to execute some code in the context of our WordPress installation. wp export and wp import export and import content in WXR format. wp option contains commands for managing, getting and setting WordPress options. wp scaffold contains commands that create boilerplate or starting code for plugins, themes, child themes, Gutenberg blocks, post types, taxonomies — thus shortening the path to get them running. wp search-replace does search-replace on a database with strings we provide it as arguments. This comes very handy when we migrate the database from one website to another, and need to change URLs. For example, when we create a staging website, or move a database from staging to production website. WordPress serializes content strings in the database, so doing a raw search–replace on a database in some editor wouldn’t work; it would, in fact, break the website. 
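To see why a raw text search-replace breaks serialized data, note that PHP serializes strings with an explicit byte length, e.g. s:19:"http://old-site.com";. A small Python sketch (purely illustrative, not WP-CLI code) of what goes wrong and how a length-aware replacement fixes it:

```python
# PHP serializes a string as s:<byte length>:"<value>";
# A naive search-replace changes the value but not the recorded length,
# so PHP can no longer unserialize the record.
old = 's:19:"http://old-site.com";'

naive = old.replace("old-site.com", "my-new-staging-site.com")
# naive == 's:19:"http://my-new-staging-site.com";' -- length field still says 19

def fix_serialized(s, search, replace):
    # Re-serialize with a corrected byte length, which is conceptually
    # what wp search-replace does for serialized values.
    value = s.split('"')[1].replace(search, replace)
    return 's:%d:"%s";' % (len(value.encode("utf-8")), value)

print(fix_serialized(old, "old-site.com", "my-new-staging-site.com"))
# → s:30:"http://my-new-staging-site.com";
```

A real unserializer would reject the naive result because the declared length (19) no longer matches the actual value.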
wp shell is particularly interesting, as it allows us to enter a WordPress REPL — a live shell environment of our WordPress installation. There we have full access to everything that any active plugin may have available. We can write code, load code from files, execute functions, and observe or inspect output from functions. It makes it very easy to test out new code without browser refresh cycles.

wp user is for managing, updating, deleting and changing roles of users.

These are some of the default, built-in commands. Detailed documentation of all the commands is available in the WordPress developer docs.

wp plugin makes it possible to list, install, activate, deactivate and delete plugins, and to write scripts that automate installation of multiple plugins in bulk. wp plugin list might give us output looking something like this:

wp theme does the same, only for themes.

wp package is a command namespace for managing WP-CLI packages. With wp package install somepackagename we can install third-party packages that are added to the WP-CLI Package Index.

Some noteworthy packages/commands are: wp usergen cli, which creates random users for testing; db checkpoint, which creates db snapshots; WP-CLI buddypress, which contains a number of BuddyPress-related commands; WP-CLI size, which shows database and table sizes; wp hook, which shows callback functions registered for a particular filter or action hook; query debug, which debugs the performance of queries; and faker, which helps us create filler posts for development purposes. There are many other packages/commands maintained by the community. The full list can be found here.

Conclusion

In this guide, we introduced WP-CLI and covered its main commands. We also introduced some of its third-party packages, but this is in no way a complete reference. With WP-CLI, the usage possibilities and scenarios are virtually endless.
https://www.sitepoint.com/wp-cli-introduction/
Redux is a prime example of a software library that trades one problem for another. While redux enables you to manage application state globally using the flux pattern, it also leads to filling your application with tedious, boilerplate code. Even the most straightforward changes require declaring types, actions, and adding another case statement to an already colossal switch statement. As state and changes continue to increase in complexity, your reducers become more complicated and convoluted. What if you could remove most of that boilerplate?

Enter: Redux-Leaves

Redux-Leaves is a JavaScript library that provides a new framework for how you handle state changes in your redux application. In a standard redux setup, you have one or maybe a few reducers managing different parts of the application. Instead, Redux-Leaves treats each node of data, or “leaf” in their nomenclature, as a first-class citizen. Each leaf comes with built-in reducers, so you don’t have to write them. This enables you to remove a lot of boilerplate from your application. Let’s compare the two approaches and then look at how to tackle moving from a traditional redux setup to one using Redux-Leaves.

How to get started with Redux-Leaves

Let’s begin by building a simple greenfield application that uses only redux and Redux-Leaves. This way, you can try out the tool before trying to add it to an existing project. Then, we’ll look at how you could approach adding Redux-Leaves to an existing project.

We’ll use create-react-app to quickly set up an environment with a build chain and other tooling.

Starting your project

npx create-react-app my-redux-leaves-demo && cd my-redux-leaves-demo
yarn init
yarn add redux redux-leaves

For this example, we’ll use Twitter as our model. We’ll store a list of tweets and add to it. Within a store.js file, let’s take a look at a redux case and compare that to how Redux-Leaves works.
Adding a record: Redux version

Typically, whenever you need to add a new mutation to state, you create:

- A type constant
- An action creator function
- A case in the reducer’s switch statement

Here’s our redux example that adds a tweet:

import { createStore } from 'redux'

const initialState = {
  tweets: [],
}

const types = {
  ADD_TWEET: 'ADD_TWEET',
}

const actions = {
  pushTweet: (tweet) => ({
    type: types.ADD_TWEET,
    payload: tweet,
  })
}

const reducer = (state = initialState, action) => {
  switch (action.type) {
    case 'ADD_TWEET':
      return {
        ...state,
        tweets: [
          ...state.tweets,
          action.payload,
        ]
      }
    default:
      return state
  }
}

const store = createStore(reducer)
store.dispatch(actions.pushTweet({ text: 'hello', likes: 0 }))

Adding a record: Redux-Leaves version

With Redux-Leaves, there is no need to define a reducer function. The Redux-Leaves initialization function provides a reducer we can pass to createStore. Also, it provides an actions object that provides action creator functions, so we don’t have to worry about coding those from scratch either. With all of that taken care of, there is no need to declare type constants. Bye-bye, boilerplate!

Here’s a piece of functionally equivalent code to the above, written with Redux-Leaves:

import { createStore } from 'redux'
import { reduxLeaves } from 'redux-leaves'

const initialState = {
  tweets: [],
}

const [reducer, actions] = reduxLeaves(initialState)
const store = createStore(reducer)

store.dispatch(actions.tweets.create.push({ text: 'hello', likes: 0 }))

It’s much more concise than the previous example. As your requirements grow, the results are more drastic. In a standard redux application, you have to write new types and expand your reducer for every mutation. Redux-Leaves handles many cases out-of-the-box, so that isn’t the case.

How do you dispatch those mutations? With Redux-Leaves built-in action creators. Each piece of data in the state is a leaf. In our example, the tweets array is a leaf.
With objects, leaves can be nested. The tweet itself is considered a leaf, and each subfield of it is also a leaf, and so on. Each has action creators of their own.

An overview of action creators for various data types

Redux-Leaves provides three action creators for every leaf, regardless of type:

- Update: set the value of a leaf to anything you want
- Reset: set the value of a leaf back to whatever it was in the initial state
- Clear: depends on the data type. Numbers become 0. Booleans become false. Strings, arrays, and objects become empty ('', [], and {} respectively)

In addition to these, Redux-Leaves provides some additional creators that are type-specific. For example, leaves of the boolean type have on, off, and toggle action creators. For a complete list, refer to the Redux-Leaves documentation.

Two ways to create actions

You can use the create function directly and dispatch actions that way, or you can declare actions that you can call elsewhere. The second way maps more closely to how redux currently operates, but also for that reason creates more boilerplate. I’ll leave it up to you to decide which approach works best for your needs.

// method #1
store.dispatch(actions.tweets.create.push({ text: 'hello', likes: 0 }))

// method #2
const addTweet = actions.tweets.create.push
store.dispatch(addTweet({ text: 'hello', likes: 0 }))

Creating complex actions with bundle

The built-in action creators save time, but they aren’t able to handle every real-world use case. What if you want to update more than one leaf at a time? Redux-Leaves provides a bundle function that combines many actions into one.
If you wanted to keep track of the most recent timestamp whenever you add a tweet, it would look like this:

const updateTweet = (tweet) => bundle([
  actions.most_recent.create.update(Date.now()),
  actions.tweets.create.push(tweet),
], 'UPDATE_WITH_RECENCY_UPDATE')

store.dispatch(updateTweet({ text: 'hello', likes: 0 }))

The first argument is an array of actions to dispatch, and the second is an optional custom type.

Even then, there are probably some cases that bundle won't handle either. What if you need more logic in your reducer? What if you need to reference one part of the state while updating another? For these cases, it's also possible to write custom leaf reducers. This extensibility is what makes Redux-Leaves shine: it provides enough built-in functionality to handle simple use cases, plus the ability to expand on that functionality when needed.

Creating custom reducer actions with leaf reducers

When tweeting, all a user has to do is type into a text box and hit submit. They aren't responsible for providing all of the metadata that goes with it. A better API would be one that requires only a string to create a tweet and abstracts away the actual structure. This situation is a good use case for a custom leaf reducer.

The core shape of a leaf reducer is the same as any other reducer: it takes in a state and an action and returns an updated version of the state. Where it differs is that a leaf reducer does not relate directly to a single piece of data. Leaf reducers are callable on any leaf in your application, which is yet another way Redux-Leaves helps you avoid repetition.

Also note that the state in a leaf reducer does not reference the entire global state, only the leaf it was called on. In our example, leafState is the tweets array. If you need to reference the global state, you can accept it as an optional third argument.
const pushTweet = (leafState, action) => [
  ...leafState,
  {
    text: action.payload,
    likes: 0,
    last_liked: null,
    pinned: false,
  }
]

Add custom leaf reducers to the reduxLeaves function. The key in the object becomes its function signature in the application.

const customReducers = {
  pushTweet: pushTweet,
}

const [reducer, actions] = reduxLeaves(initialState, customReducers)
const store = createStore(reducer)

Then, dispatching actions for custom reducers looks just like the built-in ones:

store.dispatch(actions.tweets.create.pushTweet('Hello, world!'))
console.log('leaves version', store.getState())

This outputs the following:

{
  tweets: [
    {
      text: 'Hello, world!',
      likes: 0,
      last_liked: null,
      pinned: false,
    }
  ]
}

Migrating to Redux-Leaves

If you are working on an existing project and considering moving to Redux-Leaves, you probably don't want to swap the whole thing out at once. A much safer strategy is to replace existing redux code one action at a time. If you have tests in place for your application (which you should before attempting to refactor to a library like this), then this process should be a smooth and easy one. Replace one action and run the tests. When they pass, repeat.

To do this, I recommend using the reduce-reducers Redux utility, which enables combining existing reducers with new ones.

yarn add reduce-reducers

With this tool, it is possible to add Redux-Leaves to your application without rewriting any code (yet).
import { createStore } from 'redux'
import { reduxLeaves } from 'redux-leaves'
import reduceReducers from 'reduce-reducers'

const initialState = {
  // initial state
}

const myOldReducer = (state = initialState, action) => {
  // big case statement goes here
}

const leafReducers = {} // we'll put custom reducers here if/when we need them
const [reducer, actions] = reduxLeaves(initialState, leafReducers)

const comboReducer = reduceReducers(myOldReducer, reducer)

const store = createStore(comboReducer)

This update should not change the behavior of your application. The store is updatable by both the old reducer and the new one. Therefore, you can remove and replace actions one by one instead of rewriting everything at once. Eventually, you'll be able to make one of those tasty pull requests that shorten your codebase by a few thousand lines without changing functionality. If you prefer, this change also lets you use Redux-Leaves for new code without modifying existing cases.

Conclusion

Removing the complexity of one library by adding another library is a counterintuitive proposition in my book. On the one hand, you can leverage Redux-Leaves to reduce boilerplate code and increase the speed with which developers can add functionality. On the other hand, adding another library means there is another API developers on the team need to be familiar with. If you are working alone or on a small team, then the learning curve may not be an issue. Only you and your team can know if Redux-Leaves is the right decision for your project. Is the reduced codebase and the faster pace of development worth the added dependency and learning required? That's up to you.

Or you can move to Hookstate, a faster and simpler-to-use alternative to Redux. (Disclaimer: I am a maintainer of the project.)
https://blog.logrocket.com/reducing-redux-boilerplate-with-redux-leaves/
The X-Bitmap (XBM) is an old yet versatile graphical file format that's widely compatible with many modern Web browsers. Its specifications are a component of xlib, the C code library for the X-Windows graphical interface (a popular GUI front end for UNIX and Linux). I'll explain how the XBM format works and then show you one of the more interesting ways to use it: creating on-the-fly images on the client.

XBM basics

The XBM format was originally designed to store monochrome system bitmaps, such as icons and cursors. XBM graphics are essentially C source code files that represent binary images using hex arrays. You might be asking yourself at this point: what does this file format have to do with Web browsers?

In the early 1990s, the National Center for Supercomputing Applications (NCSA) at the University of Illinois was developing one of the first widely used Web browsers, called Mosaic. The graphical support for that browser was derived from many available open source code libraries, including xlib. As a result, many browsers today can handle XBM graphics. The Mosaic project later became the basis for the development of the Netscape browser, and Microsoft borrowed portions of the Mosaic code to create Internet Explorer. Microsoft continues to natively support XBM as a MIME-type registration in Internet Information Server (IIS) and as a supported image type in all current versions of Internet Explorer.

From a programmer's point of view, JPEGs and GIFs differ greatly from XBM graphics. Both of those file formats are manipulated on the bit level using compression schemes, and they can support a wide range of color depths. The only way you can create those types of Web graphics on the fly is by using server-side scripts, such as a combination of GD.pm and CGI/Perl scripts, or by accessing the Graphics Device Interface (GDI+) class library in ASP.NET through the System.Drawing namespace.
XBM graphics are created programmatically. Each bit is precisely specified, and the resulting graphics are limited to a 1-bit color depth (black and white). X-Bitmaps don't necessarily require server-side scripts; they can be generated in real time using client-side JavaScript. Practical applications of X-Bitmaps include on-the-fly generation of charts and page counters, retro-style graphical icons, and dynamic bar charts. One of the most impressive uses of XBM graphics I've ever seen is a game called Wolfenstein 5k, which is a texture-mapped, first-person shooter written in only 5 KB of JavaScript.

Anatomy of the XBM format

You can easily embed XBM image files on a Web page by using the IMG element. Here is an example of the syntax:

<img src="xbmsmile.xbm">

Note: This format will not render on Macs or certain browsers, such as the early versions of Mozilla.

Typical XBM source code looks something like that in Listing A. The #define command sets the image width and height in pixels. As an aside, you can use the x_hot and y_hot commands if you want to define a hot spot on the image.

I've created an X-Bitmap to illustrate the process. To design it, I started by mapping the image using binary values. If you stare closely at the ones and zeroes, you'll notice the smiling face. The binary image I created is 16 digits across and seven digits high, the same width/height pixel values defined in the XBM header in our source code. The image itself is stored in a static array containing a series of binary-coded hexadecimal (BCH) values; in other words, the image is broken down into groups of four bits. The easiest way to figure out the hex values of our smiley face is to examine the image one row at a time, splitting the binary values into four-bit segments and matching up each segment with a binary/hex conversion table.
Here is the first row of the bitmap:

0001100001100000

The two code rows below represent the four-bit segments and the corresponding hex values matched up from the binary/hexadecimal conversion table for XBM, as shown in Table A:

Bin: 0001 1000 0110 0000
Hex:    8    1    6    0

Keep in mind that these are not standard bin/hex conversions. The values were calculated backward (left to right) as opposed to right to left. We are inverting the values because the browser will natively read and render graphics left to right, and our code has to account for that.

The last step involves getting these hex values into the right format to be XBM compliant. You must prepend "0x" before each hex grouping. That's the standard C method of flagging a value as hexadecimal (the base-16 numbering system). The values are then written out, each hex pair representing a binary octet. For an example, see Listing B. Therefore, we can say that:

0001100001100000 = 0x18, 0x06

These XBM-compliant values can then easily be inserted into the image array:

static unsigned char xbmsmile_bits[] = { 0x18, 0x06, etc... }

Now that we've looked at the format itself, it's time to learn how to dynamically generate X-Bitmaps in our client browser.

Programming XBM graphics using JavaScript

I'll demonstrate the usefulness of the XBM format through a program I've designed that generates UPC-compliant bar codes on the fly. The idea originated from an interesting article that describes how UPC bar codes work. To generate a new bar code, simply change the value of the upcCode variable and refresh the browser page. Listing C provides the detailed code for the application. The program works with the help of two important JavaScript functions:

- buildBinStr takes a 12-digit decimal number and converts it into a 96-digit binary number that graphically represents the bar code image. The binary construction of the bar code roughly follows the specifications outlined by the Uniform Code Council.
- buildHexStr takes the long binary number generated by buildBinStr and converts it into XBM-compliant hex code. This function programmatically follows the same conversion steps outlined in the section explaining the anatomy of the XBM format.

Listing D converts the decimal input to binary bar code. Listing E takes the XBM hex values generated by the buildHexStr function and builds the XBM code necessary to display the graphic. The hex values are reiterated 40 times in the array to create a bar code that is 40 pixels in height.

One of the many methods of instantiating XBM graphics using JavaScript is by using javascript:imagename. All you need to do is create a variable that contains the XBM code and reference the variable within the image src tag. Check out Listing F for an example.

Conclusion

X-Bitmaps are an interesting option if you want to generate simple Web graphics on the fly using nothing but a browser. With a little imagination, you can create clever and useful applications harnessing the format's potential.
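The reversed-bit hex conversion walked through in the anatomy section is easy to get wrong by hand, so here is a small Python sketch of the same procedure. The function name is my own; it is not one of the article's listings.

```python
def row_to_xbm_bytes(bits):
    """Convert one bitmap row, given as a string of '0'/'1' characters,
    into XBM-style hex octets. XBM stores pixels least-significant bit
    first, so each 8-bit group is reversed before conversion."""
    octets = []
    for i in range(0, len(bits), 8):
        group = bits[i:i + 8]
        reversed_bits = group[::-1]  # invert to LSB-first order
        octets.append("0x%02x" % int(reversed_bits, 2))
    return octets

# First row of the smiley bitmap from the article:
print(row_to_xbm_bytes("0001100001100000"))  # ['0x18', '0x06']
```

Running it on the sample row reproduces the 0x18, 0x06 pair derived manually above.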
http://www.techrepublic.com/article/create-dynamic-client-side-images-using-xbm/
Jo. How about dev_t to 64 bits? For "worldwide" distributed filesystems (such as OpenAFS), this would be a very nice thing. At 5:42 PM +0100 8/5/05, Hiten Pandya wrote:What about time related fields, are they currently 64-bits wide? We are stuck with various time standards so there isn't much we can do with the time fields. The standards idiots didn't fix the seconds field when they changed the microseconds field to nanoseconds. One thing I considered is coming up with a "struct_time_t" macro. This could be used to at least *reserve* 64-bit areas in a struct for any struct where a time_t value is used. That way, if you later want to have a 64-bit time_t, you'll have the room reserved for it. I'm hoping to get something like this together for FreeBSD 7.x, when I'm in a particularly optimistic mood... #include <sys/cdefs.h> #include <sys/types.h> /* This uses specific sizes, and then some other include file * would set the real time_t to either time32_t or time64_t. */ typedef int32_t time32_t; typedef int64_t time64_t; #if _BYTE_ORDER == _LITTLE_ENDIAN #define STRUCT_TIME_T(vname) \ union __aligned(8) { \ time64_t __CONCAT(vname,_64); \ struct { \ time32_t vname; \ int32_t __CONCAT(vname,_h32); \ }; \ } #elif _BYTE_ORDER == _BIG_ENDIAN #define STRUCT_TIME_T(vname) \ union __aligned(8) { \ time64_t __CONCAT(vname,_64); \ struct { \ int32_t __CONCAT(vname,_h32); \ time32_t vname; \ }; \ } #endif (which I have done some limited testing with, and it seems to do what I want it to do). You would use it like: struct test_stat { ... 
dev_t st_rdev; /* device type */ STRUCT_TIME_T(st_atime); /* time of last access */ long st_atimensec; /* nsec of last access */ STRUCT_TIME_T(st_mtime); /* time of last data modification */ long st_mtimensec; /* nsec of last data modification */ STRUCT_TIME_T(st_ctime); /* time of last file status change */ long st_ctimensec; /* nsec of last file status change */ char abyte; }; -- Garance Alistair Drosehn = gad@xxxxxxxxxxxxxxxxxxxx Senior Systems Programmer or gad@xxxxxxxxxxx Rensselaer Polytechnic Institute or drosih@xxxxxxx
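The layout trick above (a 32-bit time_t aliased into a reserved 64-bit slot) can be illustrated with Python's struct module. This is a sketch of the little-endian case only, not the C macro itself, and the field value is made up.

```python
import struct

# Little-endian case: the low 32 bits hold the current time_t and the
# high 32 bits are the reserved word (the macro's field names differ).
t32 = 1123264954                    # an arbitrary 32-bit epoch timestamp
blob = struct.pack("<iI", t32, 0)   # time32_t followed by the high word

# Reinterpreting the same 8 bytes as a single 64-bit integer yields the
# same value as long as the high word is zero:
(t64,) = struct.unpack("<q", blob)
print(len(blob), t64 == t32)  # 8 True
```

This is exactly why the union reserves space now: a later switch to a 64-bit time_t reads the same bytes without changing the struct layout.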
http://leaf.dragonflybsd.org/mailarchive/users/2005-08/msg00037.html
Steven Bethard <steven.bethard at gmail.com> wrote: > For this reason, I usually suggest declaring properties like[1]: > > py> class E(object): > ... def x(): > ... def get(self): > ... return float(self._x) > ... def set(self, x): > ... self._x = x**2 > ... return dict(fget=get, fset=set) > ... x = property(**x()) > ... def __init__(self, x): > ... self._x = x > ... > py> e = E(42) > py> e.x > 42.0 > py> e.x = 3 > py> e.x > 9.0 > > Note that by using the x = property(**x()) idiom, I don't pollute my > class namespace with get/set/del methods that aren't really useful to > instances of the class. It also makes it clear to subclasses that if > they want different behavior from the x property that they'll need to > redefine the entire property, not just a get/set/del method. > > Steve > > [1] Thanks to whoever originally suggested this! Sorry, I've forgotten > who... In the Cookbook 2nd ed, I credited Sean Ross, the author of the CB recipe proposing this (with credit also to David Niegard and Holger Krekel for important comments whose contents I merged into the recipe). Of course there are several possible variations, such as return locals() instead of return dict(&c)... Alex
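For reference, the return locals() variant Alex mentions looks like this; a sketch of the same idiom, not a quote from the Cookbook. It works because the inner functions are deliberately named fget and fset, the keyword arguments property() expects.

```python
class E(object):
    def x():
        def fget(self):
            return float(self._x)
        def fset(self, x):
            self._x = x ** 2
        return locals()       # {'fget': ..., 'fset': ...}
    x = property(**x())

    def __init__(self, x):
        self._x = x

e = E(42)
print(e.x)   # 42.0
e.x = 3
print(e.x)   # 9.0
```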
https://mail.python.org/pipermail/python-list/2004-December/260635.html
Extensible Stylesheet Language Transformations . This result tree may then be serialized into a file or written onto a stream. Documents can be transformed using a standalone program or as part of a larger program that communicates with the XSLT processor through its API. All standard XSLT elements are in the namespace. In this chapter, we assume that this URI is mapped to the xsl prefix using an appropriate xmlns:xsl declaration somewhere in the stylesheet. This mapping is normally declared on the root element like this: <xsl:stylesheet <!-- XSLT top-level elements go here --> </xsl:stylesheet> XSLT defines 37 elements, which break down into 3 overlapping categories: Two root elements: xsl:stylesheet xsl:transform 12 top-level elements. These elements 23 instruction elements. These elements:param contain reference. In practice, these are normally URLs. Relative URIs are relative to the location of the stylesheet itself. Some attributes that contain strings whether those strings are literals, expressions, names, or something else can be given as. The xsl:apply-templates instruction tells the processor to search for and apply the highest-priority template attribute, probably but not necessarily the one used in the name attribute. The contents of this element are a template whose instantiation only produces text nodes. The value of the attribute added to the result tree is determined by instantiating the template.. The xsl:choose element selects zero or one of a sequence of alternatives. This element contains one or more xsl:when elements, each of which has a test condition. The contents are output for the first xsl:when child whose test condition is true. The xsl:choose element may have an optional xsl:otherwise element whose contents are output only if none of the test conditions in any of the xsl:when elements is true. 
If no xsl:otherwise element exists and none of the test conditions in any of the xsl:when child elements is true, then this element will not produce output. The xsl:comment instruction inserts a comment into the result tree. The content of xsl:comment is a template that will be instantiated to form the text of the comment inserted into the result tree. The result of instantiating this template must only be text nodes that do not contain the double hyphen (--) (since comments cannot contain the double hyphen).. The use-attribute-sets attribute can be used only when the copied node is an element node. or a result-tree fragment, such as a number, then the expression is converted to its string value and the string is output. An XPath expression identifying the object to copy into the result tree.. The character that separates groups of digits (e.g., the comma that separates every three digits in English).; 0. The xsl:include top-level element copies the contents of the xsl:stylesheet or xsl:transform element found at the URI given by the href attribute. Unlike xsl:import, whether a template or other rule. The xsl:message instruction sends a message to the XSLT processor. Which messages the processor understands and what it does with messages it does understand top-level xsl:namespace-alias element declares that one namespace URI in the stylesheet should be replaced by a different namespace URI in the result tree. Aliasing is particularly useful when you're transforming XSLT into XSLT using XSLT; consequently, which names belong to the input, which belong to the output, and which belong to the stylesheet is not obvious. The prefix used inside the stylesheet itself. May be set to #default to indicate that the nonprefixed default namespace should be used. The prefix used in the result tree. May be set to #default to indicate that the nonprefixed default namespace should be used., . . . 
You can also change the starting point; for instance, setting the format token to 5 would create the sequence 5, 6, 7, 8, 9, . . . This is the RFC 1766 language code describing the language in which the number should be formatted (e.g., en or. This is a name token that identifies the output method's version. In practice, this has no effect on the output. This is the name of the encoding the outputter should use, such as ISO-8859-1 or UTF-16. If this attribute has the value yes, then no XML declaration is included. If it has the value no or is not present, then an XML declaration is included. This attribute sets the standalone attribute's value in the XML declaration. Like that attribute, it must have the value yes or no. This attribute sets the public identifier used in the document type declaration. This attribute sets the system identifier used in the document type declaration. This is a whitespace-separated list of qualified element names in the result tree whose contents should be emitted using CDATA sections rather than character references. If this attribute has the value yes, then the processor is allowed (but not required) to insert extra whitespace to attempt to "pretty-print" the output tree. The default is no. This is the output's MIME media type, such as text/html or text/xml. provides a default value for multiple templates. If an xsl:apply-templates or xsl:call-template passes in a parameter value using xsl:with-param when the template is invoked, then taken from the element's contents.feed). or contain a namespace prefix followed by a colon and an asterisk to indicate that whitespace should be preserved in all elements in the given namespace.. This is the key to sort by. If select is omitted, then the sort key is set to the value of the current node. By default, sorting is purely alphabetic. However, alphabetic sorting leads to strange results with numbers. For instance, 10, 100, and 1000 all sort before 2, 3, and 4. 
You can specify numeric sorting by setting the data-type attribute to number. Sorting is language dependent. Setting the lang attribute to an RFC 1766 language code changes the language. The default language is system dependent. This is the order by which strings are sorted. This order can be either descending or ascending. The default is ascending order. The case-order attribute can be set to upper-first or lower-first to specify whether uppercase letters sort before lowercase letters, or vice versa. The default depends on the language. The top-level xsl:strip-space element specifies which elements in the source document have whitespace stripped from them before they are transformed. Whitespace stripping removes all text nodes that contain only whitespace (the space character, the tab character, the carriage return, and the linefeed).. The xsl:stylesheet element is the root element for XSLT documents. A standard namespace declaration that maps the prefix xsl to the namespace URI. The prefix can be changed if necessary. Currently, always the value 1.0. However, XSLT 2.0 may be released in 2003 with a concurrent updating of this number. Any XML name that's unique within this document's ID type attributes. A whitespace-separated list of namespace prefixes used by this document's extension elements. A whitespace-separated list of namespace prefixes whose declarations should not be copied into the output document. Any xsl:import elements, followed by any other top-level elements in any order. The xsl:template top-level element is the key to all of XSLT. A little. A name by which this template rule can be invoked from an xsl:call-template element, rather than by node matching.. If the xsl:template element has a mode, then this template rule is matched only when the calling instruction's mode attribute matches this mode attribute's value. The template that should be instantiated when this element is matched or called by name. 
character or entity references such as < or <, should instead be output as the literal characters themselves. Note that the xsl:text element's content in the stylesheet must still be well-formed, and any < or & characters must be written as < or & or the equivalent character references. However, when the output document is serialized, these references are replaced by the actual represented characters rather than references that represent them.. The xsl:value-of element computes the string value of an XPath expression and inserts it into the result tree. The also take. The xsl:variable element binds a name to a value of any type (string, number, node-set, etc.). This variable can then be dereferenced elsewhere using the form $name in an expression.. XS. The current( ) function returns a node-set containing a single node, the current node. Outside of an XPath predicate, the current node and the context node (represented by a period in the abbreviated XPath syntax) are identical. However, in a predicate, the current node may change based on other contents in the predicate, while the context node stays the same. The document( ) function loads the XML document at the URI specified by the first argument and returns a node-set containing that document's root node. The URI is normally given as a string, first node (in document order) in this set is used as the base URI with which to resolve relative URIs given in the first argument. If the second argument is omitted, then base URIs are resolved relative to the stylesheet's location. element-available( ) returns true if and only if the argument identifies an XSLT element the processor recognizes. If the qualified name maps to a non-null namespace URI, then it refers to an extension element. Otherwise, it refers to a standard XSLT element. Assuming use of a fully conformant processor, you don't need to use this function to test for standard elements; just use it for extension elements.). 
See for full documentation of the pattern passed as the second argument. 23-1 defines the symbols used in the grammar. For instance, #,##0.### is a common decimal-format pattern. The # mark indicates any digit character except a leading or trailing zero. The comma is the grouping separator. The period is the decimal separator. The 0 is a digit that is printed even if it's a nonsignificant prefixed with a minus sign. may become 2.0 during this book's life span. A string identifying the XSLT processor's vendor; for instance, Apache Software Foundation for Xalan or SAXON 6.4.4 from Michael Kay for SAXON. A string containing a URL for the XSLT processor's vendor; for instance, for Xalan or for SAXON. Implementations may also recognize and return values for other processor-dependent properties. The unparsed-entity-uri( ) function returns the URI of the unparsed entity with the specified name declared in the source document's DTD or the empty string, if no unparsed entity with that name exists. Unfortunately, there is no standard API for XSLT that works across languages and engines: each vendor provides its own unique API. The closest thing to a standard XSLT API is TrAX (the Transformations API for XML),);
https://flylib.com/books/en/1.133.1.169/1/
We are making a mapmaking robot that drives round on the Lego road elements and builds a map. We have a (Danish) blog here regarding the project if anyone is interested. However, we think we have found a bug in the firmware / VM implementation on the NXT.

We have a class called Core, which implements some data that the robot needs to access from a lot of places, so Core is implemented with a Singleton pattern:

public class Core {
    private static Core singleton;
    private Data data;

    private Core() {
        data = new Data();
    }

    public static Core getSingleton() {
        if (singleton == null) {
            singleton = new Core();
        }
        return singleton;
    }

    public Data getData() {
        return data;
    }
}

Now if we, from our main method, call

Core.getSingleton().getData()

and do something to this data (like adding another point or whatever), and then from another thread (later on, as this is triggered by us, so we know the main thread is done accessing the data), then

Core.getSingleton().getData()

returns a new Data object, instead of, as expected, the already created Data object in Core. To us it looks like each thread gets its own "singleton", which kinda ruins the idea behind the Singleton pattern. We are 99% sure this isn't a thread synchronization problem; as I said earlier, the second thread doesn't run at all in the beginning when the main thread is accessing the Core at first.

/Gof
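The poster's expectation is the normal one: a lazily created singleton should be shared by every thread. Here is a minimal Python sketch (not leJOS code) of the same pattern, with a lock added so the lazy initialization is also safe under true concurrency:

```python
import threading

class Core:
    _singleton = None
    _lock = threading.Lock()

    @classmethod
    def get_singleton(cls):
        # Guard the lazy initialization so two threads racing here
        # cannot each construct their own instance.
        with cls._lock:
            if cls._singleton is None:
                cls._singleton = cls()
        return cls._singleton

ids = []

def worker():
    ids.append(id(Core.get_singleton()))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(set(ids)))  # 1, i.e. every thread saw the same instance
```

On a conforming VM, all four threads observe the identical object, which is the behavior the poster expected from the NXT.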
http://www.lejos.org/forum/viewtopic.php?f=7&t=616
Migrating telemetry and security agents from dockershim

With Kubernetes 1.20, dockershim was deprecated. From the Dockershim Deprecation FAQ you might already know that most apps do not have a direct dependency on the runtime hosting their containers. However, there are still a lot of telemetry and security agents that have a dependency on Docker to collect container metadata, logs, and metrics. This document aggregates information on how to detect these dependencies and links to instructions on how to migrate these agents to use generic tools or alternative runtimes.

Telemetry and security agents

There are a few ways agents may run on a Kubernetes cluster. Agents may run on nodes directly or as DaemonSets.

Why do telemetry agents rely on Docker?

Historically, Kubernetes was built on top of Docker: Kubernetes managed networking and scheduling, while Docker placed and operated containers on a node. So you can get scheduling-related metadata, like a pod name, from Kubernetes and container state information from Docker. Over time, more runtimes were created to manage containers, and there are now projects and Kubernetes features that generalize container status information extraction across many runtimes.

Some agents are tied specifically to the Docker tool. Such agents may run commands like docker ps or docker top to list containers and processes, or docker logs to subscribe to container logs. With the deprecation of Docker as a container runtime, these commands will not work any longer.

Identify DaemonSets that depend on Docker

If a pod wants to make calls to the dockerd running on the node, the pod must either:

- mount the filesystem containing the Docker daemon's privileged socket, as a volume; or
- mount the specific path of the Docker daemon's privileged socket directly, also as a volume.

For example, on COS images, Docker exposes its Unix domain socket at /var/run/docker.sock. This means that the pod spec will include a hostPath volume mount of /var/run/docker.sock.
Here's a sample shell script to find pods that have a mount directly mapping the Docker socket. The script outputs the namespace and name of each such pod. You can remove the grep '/var/run/docker.sock' to review other mounts.

kubectl get pods --all-namespaces \
  -o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{":\t"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.hostPath.path}{", "}{end}{end}' \
  | sort \
  | grep '/var/run/docker.sock'

Note: There are alternative ways for a pod to access Docker on the host. For instance, the parent directory /var/run may be mounted instead of the full path (like in this example). The script above only detects the most common uses.

Detecting Docker dependency from node agents

If your cluster nodes are customized to install additional security and telemetry agents on the node, make sure to check with the vendor of the agent whether it has a dependency on Docker.

Telemetry and security agent vendors

We keep a work-in-progress version of migration instructions for various telemetry and security agent vendors in a Google doc. Please contact the vendor to get up-to-date instructions for migrating from dockershim.
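The same check can be done programmatically on the output of kubectl get pods -o json. Below is a hedged Python sketch that scans an already-fetched pod list for hostPath volumes at (or above) the Docker socket path; the sample pod data is invented for illustration.

```python
DOCKER_SOCK = "/var/run/docker.sock"

def pods_mounting_docker_sock(pod_list):
    """Return 'namespace/name' for every pod whose spec declares a
    hostPath volume at the Docker socket path or at a parent directory
    of it (e.g. /var/run)."""
    hits = []
    for pod in pod_list.get("items", []):
        meta = pod.get("metadata", {})
        for vol in pod.get("spec", {}).get("volumes", []) or []:
            path = vol.get("hostPath", {}).get("path", "").rstrip("/")
            if path and (path == DOCKER_SOCK
                         or DOCKER_SOCK.startswith(path + "/")):
                hits.append(meta.get("namespace", "") + "/" + meta.get("name", ""))
                break
    return hits

# Invented sample mimicking `kubectl get pods -o json` output:
sample = {"items": [
    {"metadata": {"namespace": "kube-system", "name": "old-agent"},
     "spec": {"volumes": [{"hostPath": {"path": "/var/run/docker.sock"}}]}},
    {"metadata": {"namespace": "default", "name": "web"},
     "spec": {"volumes": [{"emptyDir": {}}]}},
]}
print(pods_mounting_docker_sock(sample))  # ['kube-system/old-agent']
```

Unlike the one-liner, the parent-directory case mentioned in the Note (a mount of /var/run rather than the socket itself) is also caught.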
https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/
I am looking to take an existing numpy array and create a new array from it, starting and ending at values from the existing array. For example:

arr = np.array([1,2,3,4,5,6,7,8,9,10])

def split(array):
    # I am only interested in 4 thru 8 in the original array
    return new_array

>>> new_array
array([4,5,6,7,8])

Just do this:

arr1 = arr[x:y]

where x is the start index and y is the end index.

Example:

>>> import numpy as np
>>> arr = np.array([1,2,3,4,5,6,7,8,9,10])
>>> arr
array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10])
>>> arr1 = arr[3:8]
>>> arr1
array([4, 5, 6, 7, 8])

In the above case we are using assignment. Assignment statements in Python do not copy objects; they create bindings between a target and an object. You may use .copy() to make a shallow copy. A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in the original:

>>> arr1 = arr[3:8].copy()
>>> arr1
array([4, 5, 6, 7, 8])

You may use deepcopy() to make a deep copy. A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original:

>>> from copy import deepcopy
>>> arr2 = deepcopy(arr[3:8])
>>> arr2
array([4, 5, 6, 7, 8])

Further reference: copy — Shallow and deep copy operations
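As an illustrative aside (not part of the original answer), the shallow-versus-deep distinction quoted above comes from Python's copy module and is easiest to see with nested plain-Python lists, where no NumPy is needed:

```python
from copy import copy, deepcopy

nested = [[1, 2], [3, 4]]

shallow = copy(nested)    # new outer list, but shared inner lists
deep = deepcopy(nested)   # new outer list AND new inner lists

nested[0].append(99)      # mutate an inner list in place

print(shallow[0])  # [1, 2, 99] (the shallow copy sees the change)
print(deep[0])     # [1, 2]     (the deep copy is unaffected)
```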
https://codedump.io/share/gwkx0XNuW6LR/1/how-to-create-a-numpy-array-from-an-existing-numpy-array
Hi, I use a lot of weather sensors with the RFXCOM binding. Sometimes reception is bad / the battery is gone. I'd like to show the last update date/timestamp. What would be the best way to do so?

You could use persistence and try to change the following sample to your needs. Regards

Thanks. I tried this:

var SimpleDateFormat df = new SimpleDateFormat( "YYYY-MM-dd HH:mm:ss" )
var String Timestamp = df.format( new Date() )
logInfo("RFX_Wind_Update", Timestamp )
RFX_Wind_Update.postUpdate(Timestamp)

The logfile outputs correctly:

[o.model.script.RFX_Wind_Update] - 2015-08-26 21:09:34

but the item state itself isn't set. The item definition looks like:

String RFX_Wind_Update "Wind letztes Update:" (FF_Office,Klima)

Make a DateTime item:

DateTime RFX_Wind_Update "[%1$tm/%1$td %1$tH:%1$tM]"

Have a rule update it when the "paired" item is updated:

import org.openhab.core.library.types.* // not needed in OH2

rule Foo
when
  Item RFX_Wind received update
then
  RFX_Wind_Update.postUpdate( new DateTimeType() )
end

I can't believe I didn't think of this. [smacks head] I was using a Text item and formatting the string itself in my rule. This is so much better. Thanks! Rich

Works perfectly, thank you. Just to understand the system better: why did the rule with the String not work?

A String works just fine; it's just too much work! And it's hard to use for comparison later if you need that.

In my sitemaps I use:

Text item=Fibaro_Motion_1_LastUpdate valuecolor=[>6000="red",>600="orange",<=600="green"]

which perfectly shows me the status of the devices in a color code. BR

How is this item defined in your items file, so we can understand why this works? It would be very nice to have stale updates highlighted by color!

Hey, I'm using a DateTime item. OH can do maths on dates. Very nice feature.
I think I got this from the sample rules, but not too sure. Here's my full config stack:

.item:

DateTime Fibaro_Motion_1_LastUpdate "Last seen [%1$ta %1$tR]" <clock>

.rule:

rule "Records when device was last seen"
when
  Item Fibaro_Motion_1 received update or
  Item Fibaro_Motion_1_Temp received update or
  Item Fibaro_Motion_1_Lux received update
then
  postUpdate(Fibaro_Motion_1_LastUpdate, new DateTimeType())
end

.sitemap:

Text item=Fibaro_Motion_1_LastUpdate valuecolor=[>6000="red",>600="orange",<=600="green"]

Hope this helps. BR

I think you could also use something like Switch.lastUpdate(WindowsStart,"rrd4j") if you have set up persistence correctly. Saves you the trouble of creating a rule.

@Kai Would be nice to have a timestamp (lastUpdated) exposed via REST + the org.openhab.core.items.GenericItem interface, similar to state. Or put differently: state only makes sense with a timestamp nearby, to avoid fishy data.

I wouldn't add it to the GenericItem, see here a previous discussion. But we can think of other ways to make it more easily available to UIs through REST (although we have to be careful not to send too many events, as some bindings might create several state updates per second).

Hi all, I have been doing this in openHAB for a while, but recently updated to the latest build and now everything shows as green? Has something changed that would cause this?

I think it's the same problem described here:

I have the same problem, just waiting for a new stable version which will (hopefully) fix this.

I use this lambda for timestamps:

// function (lambda) to get a timestamp. Returns a formatted string and optionally updates an item
val Functions$Function2<GenericItem, String, String> getTimestamp = [
  item, date_format |
  var date_time_format = date_format
  if(date_format == "" || date_format === null) date_time_format = "%1$ta %1$tT" // default format Day Hour:Minute:Seconds
  var String Timestamp = String::format( date_time_format, new Date() )
  if(item != NULL && item !== null) {
    var t = new DateTimeType()
    if(item instanceof DateTimeItem) {
      postUpdate(item, t)
      logInfo("Last Update", item.name + " DateTimeItem updated at: " + Timestamp )
    }
    else if(item instanceof StringItem) {
      postUpdate(item, Timestamp)
      logInfo("Last Update", item.name + " StringItem updated at: " + Timestamp )
    }
    else
      logWarn("Last Update", item.name + " is not DateTime or String - not updating")
  }
  Timestamp
]

You call it in a rule like this:

rule "HEM Last Updated"
when
  Item HEM_C1 received update or
  Item HEM_C2 received update
then
  getTimestamp.apply(HEMLastUpdated, "")
end

or:

rule "Aeon Labs Multisensor LA Last Updated"
when
  Item AeonMS61LA changed or
  Item AeonMS62LA changed or
  Item AeonMS63LA changed or
  Item AeonMS64LA changed
then
  getTimestamp.apply(AeonMS6LastUpdatedLA, "%1$ta %1$tR")
end

or:

rule "Autolock Front Door"
when
  Item virtualfrontDoorDoorContact changed to CLOSED
then
  // Front Door Closed
  if(DoorTimer !== null) DoorTimer.cancel
  DoorTimer = null
  logInfo("FRONT_DOOR", "Front Door CLOSED - Master Sensor: " + FrontDoorSensorSelected.state )
  postUpdate(hallway_HSM200_setcolour, 3) // RED (closed, unlocked)
  var String Timestamp = getTimestamp.apply(virtualfrontDoorLastUpdate, "%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS")
  AutoRelock.apply(" by: Door Close Auto Lock", Timestamp, LockTimers, "FrontDoor", DoorRelockTime)
end

This way, it doesn't matter if the item is a DateTime type or a String; you can optionally specify the format of the string, and you can get the timestamp value returned as a String to use in notifications, logs, etc.
I have implemented this LastUpdate method but it stopped changing the colors after updating openHAB to 2.2.0-1. The timer is still working. It seems valuecolor cannot get the time in seconds from a DateTime item anymore. Any ideas?

Thank you for this! I was looking for a way to date/time stamp the zWave nodes and this will do it. Best, Jay

I'm just trying your examples. I have these PIRs all around, and I want to know when they last fired. I thought I could just write:

rule "Spisestue PIR Last Updated"
when
  Item spise_pir changed
then
  getTimestamp.apply(spise_pirUpdated, "%1$ta %1$tR")
end

Text item=spise_pirUpdated label="Sidste åbnet [%1$tH:%1$tM %1$td.%1$tm.%1$tY]"

But it doesn't really work… Isn't it possible to catch a timestamp from an action and assign this time to an item, to show it in a sitemap? This is the error I get:

2018-07-02 21:32:34.592 [ERROR] [ntime.internal.engine.RuleEngineImpl] - Rule 'Spisestue PIR Last Updated': The name 'getTimestamp' cannot be resolved to an item or type; line 5, column 5, length 12

You are missing the code under "I use this lambda for timestamps:"
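The valuecolor thresholds used throughout this thread (>6000 red, >600 orange, otherwise green, in seconds since the last update) are just a staircase function over the age of the item. A small Python sketch of the same logic, handy for sanity-checking threshold choices outside openHAB (the function name is mine, not an openHAB API):

```python
from datetime import datetime, timedelta

def staleness_color(last_update: datetime, now: datetime) -> str:
    """Map the age of a sensor's last update to the sitemap color
    scheme: more than 6000 s red, more than 600 s orange, else green."""
    age = (now - last_update).total_seconds()
    if age > 6000:
        return "red"
    if age > 600:
        return "orange"
    return "green"

now = datetime(2018, 7, 2, 21, 0, 0)
print(staleness_color(now - timedelta(minutes=5), now))  # fresh sensor
print(staleness_color(now - timedelta(hours=2), now))    # stale sensor
```

A sensor last seen 5 minutes ago comes out green; one silent for 2 hours (7200 s) comes out red.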
https://community.openhab.org/t/show-date-time-of-last-sensor-update/2114
NAME

Prima::CurvedText - fit text to path

DESCRIPTION

The module registers a single function, curved_text_out, in the Prima::Drawable namespace. The function plots a line of text along a path, which is given as a set of points. Various options regulate the behavior of the function when glyphs collide with the path boundaries or with each other.

SYNOPSIS

use Prima qw(Application Drawable::CurvedText);
$spline = [qw(100 100 150 150 200 100)];
$::application-> begin_paint;
$::application-> spline($spline);
$::application-> curved_text_out( 'Hello, world!',
    $::application-> render_spline( $spline ));

curved_text_out $TEXT, $POLYLINE, %OPTIONS

$TEXT is a line of text; no special treatment is given to tab and newline characters. The text is plotted over the $POLYLINE path, which should be an array of numeric coordinate pairs, in the same format as Prima::Drawable::polyline expects:

- bevel BOOLEAN=true

If set, glyphs between two adjoining segments will be plotted with a bevelled angle. Otherwise glyphs will strictly follow the angles of the segments in the path.

- callback CODE($SELF, $POLYLINE, $CHUNKS)

If set, the callback is called with $CHUNKS after the calculations are made but before the text is plotted. $CHUNKS is an array of tuples, each consisting of the text, angle, and x and y coordinates of a chunk. The callback is free to modify the array.

- collisions INTEGER=0

If 0, collision detection is disabled and glyphs are plotted along the path. If 1, no two neighboring glyphs may overlap, and no two neighboring glyphs will be situated further away from each other than necessary. If 2, same functionality as with 1, and additionally no two glyphs (in the whole text) will overlap.

- nodraw BOOLEAN=false

If set, calculate glyph positions but do not draw them.

- offset INTEGER=0

Sets the offset from the beginning of the path where the first glyph is plotted. If the offset is negative, it is calculated from the end of the path.

- skiptail BOOLEAN=false

If set, the remainder of the text that is left after the path is completely traversed is not shown. Otherwise (the default), the tail text is shown with the angle used to plot the last glyph (if bevelling was requested) or the angle perpendicular to the last path segment (otherwise).

AUTHOR

Dmitry Karasik, <dmitry@karasik.eu.org>.
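To see what the offset option implies, it helps to think of the path as parameterized by arc length: a glyph placed at offset d sits at the point d units along the polyline. A small Python sketch of that interpolation (not Prima code, just the underlying geometry):

```python
import math

def point_at_offset(polyline, d):
    """Return the (x, y) point at arc-length d along a polyline given
    as [(x0, y0), (x1, y1), ...], or None if d runs past the end."""
    if d < 0:
        return None
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if d <= seg:
            # Linear interpolation within this segment.
            t = d / seg if seg else 0.0
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        d -= seg  # consume this segment and move on
    return None

path = [(100, 100), (150, 150), (200, 100)]
print(point_at_offset(path, 0))                    # start of the path
print(point_at_offset(path, math.hypot(50, 50)))   # the middle corner
```

A negative offset in curved_text_out is the same idea measured from the other end of the path; what this sketch returns as None is the "tail" situation that skiptail controls.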
https://metacpan.org/pod/Prima::Drawable::CurvedText
Hi, I'd like to use a nano instead of an uno for a project but need to use a enc28j60-based ethernet shield. Any support for this chipset yet? I'm sure you are like me and have several Nanos not being used and it makes sense to use what you have. But, if I wanted to install a smaller network capable board I would go with an ESP8266 . I have a couple of the Adafruit Huzzah ESP8266 break out boards running now. Sorry I didn't answer the real question, maybe @bestes or @rsiegel will chime in. -Ian Ian,Have to confess I'm not familiar with that board, but I might look at it if ENC28J60 is not supported (or a generic equivalent as Adafruit is not readily available in my part of the world). Which of the internet option sketches would one select to work with this board? Or is there more to using this board than I imagine (in my innocence). Thnx ...Bren Brendan, The ESP8266 is a microcontroller with built in wifi and a bunch of other stuff that I don't even pretend to understand (Espressif product page). It can be procured as a bare module, or as a full dev board. It can be programmed via the Arduino IDE and acts a lot like an Uno in practice. Arduino Uno sketches with minor tweaking Craig did a quick tutorial here on how to get the code correct. If I can do it, anyone can. Craig (@kreggly) just posted that he believes you should be able to get the ENC28J60 to work here. So maybe all this discussion is for naught. Hope this helps, Not for naught. The ESP is a great little board. Still doesn't have the flash space for all his libraries though. I bet there is a lot more baggage that can be heaved though, but that's another story. I've ordered this and will try it out myself when it comes in. In the meantime, let me know if you can get it to work! There was some discussion here about it you might want to check it out: -B Hi @bestes, so did you try ENC28J60?Or anyone else? 
Hi @akuljana, I have had it next to me for a week and haven't had the chance to test it out. Hopefully someone else is able to test it out.

@akuljana, @bestes, as a start, I'd try this:

1. Install the Arduino UIP library.
2. Create a Cayenne sketch, say with a Uno and a W5100 shield.
3. Replace #include <CayenneEthernet.h> with the following:

#include <UIPEthernet.h>
#include <BlynkSimpleUIPEthernet.h>
#include <CayenneDefines.h>
#include <CayenneEthernetClient.h>

Give that a try and let me know. Cheers, Craig

Man!!! THAT WORKS FOR ME!!! I'm using an Arduino Nano, used your code, and now I'm able to connect with my ENC28J60 module! My code:

#define CAYENNE_PRINT Serial // Comment this out to disable prints and save space
#include <UIPEthernet.h>
#include <BlynkSimpleUIPEthernet.h>
#include <CayenneDefines.h>
#include <CayenneEthernetClient.h>

// Cayenne authentication token. This should be obtained from the Cayenne Dashboard.
char token[] = "YourToken";

void setup()
{
  Serial.begin(9600);
  Cayenne.begin(token);
}

void loop()
{
  Cayenne.run();
}

Thank you!

@eteroxee, that's awesome! @bestes, another module verified.

That's excellent, thanks for the feedback. I've moved this topic to the FAQs category to make it a little easier for someone to find @kreggly's solution until we have official in-platform support for this module.

I'm so happy to help the community, thank you! Here is a working system with a DS18B20 temperature sensor and an LED. It also works with the incredible IF-THEN trigger, so now I can make a thermostat that is connected to the internet!!! WOW!
//#include <UIPEthernet.h> // I've switched it off since the code works without this
#include <BlynkSimpleUIPEthernet.h> // Interesting why a Blynk library is needed :-), but the code doesn't work without this
#include <CayenneDefines.h>
#include <CayenneEthernetClient.h>

// Include DS18B20 temperature sensor libraries, just for example
#include <OneWire.h>
#include <DallasTemperature.h>

// The key for the Cayenne dashboard
char token[] = "YourToken";

// Defining PINs for LEDs
#define LED_VIRTUAL_PIN 1
#define LED_DIGITAL_PIN 6
#define VIRTUAL_PIN 2 // Define virtual pin for DS18B20 temperature sensor

const int tmpPin = 7;
OneWire oneWire(tmpPin);
DallasTemperature sensors(&oneWire);

void setup()
{
  Serial.begin(9600);
  Cayenne.begin(token);
  sensors.begin();
}

CAYENNE_IN(LED_VIRTUAL_PIN)
{
  // get value sent from dashboard
  int currentValue = getValue.asInt(); // 0 to 1023
  analogWrite(LED_DIGITAL_PIN, currentValue / 4); // must be from 0 to 255
}

void loop()
{
  Cayenne.run();
}

Hi guys, I am trying with an Uno and ENC28J60... this way I didn't succeed... please can you help me? Tks.

Just a little addition: if it is about being able to use a Nano or even a Pro Mini as the reason to use an ENC28J60 module... there also is a W5100 module. The ENC28J60 is a bit memory hungry, and I reckon the great UIPEthernet lib is a bit memory hungry too, so there will not be much memory left. Having said that, I believe both the ENC28J60 and the W5100 module cost a few bucks (say 5 USD), which is more than the ESP8266, so it is a bit of a judgement call if one wants to invest in one of those modules.
http://community.mydevices.com/t/is-enc28j60-supported/1044
I couldn't find any pages that populate combo box values faster... in C#.

This code snippet in C# pulls data from a database (Modal), table Tbl_Area, using a stored procedure, SelectDistinctStates_sp.

SelectDistinctStates_sp contains:

CREATE PROCEDURE SelectDistinctStates_sp
AS
SELECT DISTINCT AreaName as AreaName FROM Tbl_Area
GO

using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using System.Data.SqlClient;
using System.Data.OleDb;

namespace Shyam_Test
{
    /// <summary>
    /// Summary description for Form1.
    /// </summary>
    public class Form1 : System.Windows.Forms.Form
    {
        private System.Windows.Forms.Button button1;
        private System.Windows.Forms.TextBox textBox1;
        private System.Windows.Forms.ComboBox cboEmployerState;
        /// Required designer variable.
        private System.ComponentModel.Container components = null;

        public Form1()

I have two basic lists (Alist and Blist). One field from Alist is a text field of selection type (ComboBox). I want to fill this combo with the content of Blist. When I create a field I can choose to populate the ComboBox from a static list, but I don't know how to bind it to a SharePoint list. Is it possible to do in an easy way? Thanks for your time!

Hi all, I have a button "Show Report" to show some data from a table. What should I write under this button to populate the reports?

How can we bind a value to a combobox or dropdown? What is the property for binding a value?
http://www.dotnetspark.com/links/25497-populate-combobox-from-sql-db.aspx
The world of JavaScript frameworks is continuously evolving. As a result, we receive requests for integration with a specific new library every day. For a while now, Vue.js has been mentioned to us more than others. The framework is growing into a serious contender in the UI space, having accumulated almost as many GitHub stars as React. Since we already have the other main players covered — with our Angular integration and the new React Wrappers — we decided to tackle Vue.js as the next most important platform.

In our upcoming release v18.1, we are introducing more than 65 Vue.js components, based on DevExtreme widgets. Here is the GitHub repository, where you can also find example code and getting-started instructions more detailed than the information in this post.

On a low level, our DevExtreme widgets run JavaScript and render HTML, so they can be used in any HTML/JS application. However, when you look at different UI frameworks, you find that an important distinguishing aspect is the way libraries implement components, instantiation mechanisms, life-cycle management and runtime features like data binding. Integrating with a certain framework therefore means adopting the specific approaches that make our widgets "feel" native to the environment.

Several different Vue.js techniques can be used to instantiate a DevExtreme Vue component. First, you can include it in a single-file component:

<template>
  <div>
    <dx-button ... />
    <dx-button text="I'm colored" ... />
    <dx-button ... />
  </div>
</template>

<script>
import { DxButton } from "devextreme-vue";
export default {
  components: { DxButton }
}
</script>

Alternatively, components can be used as part of a template when creating a Vue instance:

new Vue({
  el: '#app',
  components: { DxButton },
  template: '<dx-button ... />',
  data() {
    return { text: 'Hello!' };
  }
});

A third option is to include our Vue components in your own HTML, like this live sample (alternative live sample link) does:

<html>
...
<body>
  <div id="app">
    <dx-data-grid key-expr="orderId" ...>
      ...
    </dx-data-grid>
  </div>
</body>
</html>

Finally, our components are also compatible with JSX render functions.

All Vue.js data binding mechanisms are supported:

<dx-button ... />
<dx-text-box ... />
<dx-text-box ... />

You can install listeners for all DevExtreme widget events using the standard Vue.js v-on or (shorthand) @ syntax:

<dx-button v-on:... />

Where our DevExtreme widgets support templating of components or elements, you can write these templates using Named Slots. Template data is scoped, with the slot-scope attribute pointing to a variable that can be used to access data within the template.

<div id="app">
  <dx-list ...>
    <div slot="item" slot-scope="data">
      <i>This is my template for {{data}}</i>
    </div>
  </dx-list>
</div>

All our DevExtreme Vue components publish Prop Validation requirement details. This means that you will receive console error messages when you are misusing component properties.

The public CTP of our DevExtreme Vue Wrappers is available now. For npm, you can use the pre-release package (and check out this post if you're not using npm).

npm install devextreme@18.1-unstable devextreme-vue@18.1-unstable

Of course we are very interested in any of your thoughts or comments. However, we also need some help prioritizing a few features that have not been implemented yet. Please get back to us if you have any thoughts about these features, or if you think we should prioritize something else. Feel free to comment on this post, or to participate in discussions on GitHub!

If you want to see all new features introduced in our v18.1 release, sign up for our upcoming webinar "New in v18.1 - DevExtreme HTML / JS Controls", where you'll have a chance to ask questions about all the new features as well. Join the webinar:

This is great news, and we're excited about being able to use DevExtreme with Vue!
I was testing the example plunker from the GitHub page, and I noticed that when I bind to a simple property like the boolean row-alternation-enabled, the grid will automatically apply the change when the value is changed, but a complex property like filterRow.visible does not cause an update in the grid. Here is an example of what I was trying to do: embed.plnkr.co/Jqnd4HB7CpFp1Xd3Ngj3. Did I miss something, or is this functionality not currently supported? Thanks, Clint

Here's the link to the plunker with my modifications: embed.plnkr.co/J8TBf9Hn2nUBv1nNdDq6 (the link in the post above is the original provided by DevExpress on GitHub).

Clint, thank you! I've created the issue (github.com/.../60) in the DevExtreme Vue repo.

:D VUE! I can't wait to try this out. I've been too enthralled with Blazor lately, but I'll definitely take some time to play around with it.

I'm really happy to see you guys support Vue. For some reason it's the only one of the major JS frameworks that I like.
https://community.devexpress.com/blogs/javascript/archive/2018/05/09/devextreme-vue-js-wrappers-v18-1.aspx
Bug #13671

Regexp with lookbehind and case-insensitivity raises RegexpError only on strings with certain characters

Description

Here is a test program:

def test(description)
  begin
    yield
    puts "#{description} is OK"
  rescue RegexpError
    puts "#{description} raises RegexpError"
  end
end

test("ass, case-insensitive, special") { /(?<!ass)/i =~ '✨' }
test("bss, case-insensitive, special") { /(?<!bss)/i =~ '✨' }
test("as, case-insensitive, special") { /(?<!as)/i =~ '✨' }
test("ss, case-insensitive, special") { /(?<!ss)/i =~ '✨' }
test("ass, case-sensitive, special") { /(?<!ass)/ =~ '✨' }
test("ass, case-insensitive, regular") { /(?<!ass)/i =~ 'x' }

Running the test program with Ruby 2.4.1 (macOS) gives:

ass, case-insensitive, special raises RegexpError
bss, case-insensitive, special raises RegexpError
as, case-insensitive, special is OK
ss, case-insensitive, special is OK
ass, case-sensitive, special is OK
ass, case-insensitive, regular is OK

The RegexpError is "invalid pattern in look-behind: /(?<!ass)/i (RegexpError)".

Side note: in the real code in which I found this error, I was able to work around it by using (?i) after the lookbehind instead of //i.

Running the test program with Ruby 2.3.4 does not report any RegexpErrors. I think this is a regression, although I might be wrong and it might be saving me from an incorrect result with certain strings.

Updated by Hanmac (Hans Mackowiak) about 3 years ago

Did some checks on my Windows system to check how deep the problem is; I used "ä" as the variable. The same problem happens when you try to use the match function too:

/(?<!ass)/i.match('ä')

It also happens for:

Regexp.union(/(?<!ass)/i, /ä/)

But I still don't understand why it crashes with "ass" while "ss" works.
Might have something to do with how regexps are stored internally.

Updated by naruse (Yui NARUSE) about 3 years ago

I created a ticket in upstream:

Updated by gotoken (Kentaro Goto) about 2 years ago

Updated by znz (Kazuhiro NISHIYAMA) about 2 years ago

You can use (?:s) instead of s as a workaround.

$ ruby -ve '/(?<=ast)/iu'
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-darwin17]
-e:1: invalid pattern in look-behind: /(?<=ast)/i
-e:1: warning: possibly useless use of a literal in void context

$ ruby -ve '/(?<=a(?:s)t)/iu'
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-darwin17]
-e:1: warning: possibly useless use of a literal in void context

Updated by znz (Kazuhiro NISHIYAMA) about 2 years ago

- Related to Bug #14838: RegexpError with double "s" in look-behind assertion in case-insensitive unicode regexp added

Updated by gotoken (Kentaro Goto) about 2 years ago

Thanks znz. The workaround is helpful. And I understood what happened. … shows how some combinations of letters are variable length. For example, "ss" and "st" are mapped to "ß" ("\u00DF") and "st" ("\uFB06"). Those combinations are listed in …

By the way, this expansion by the //i option looks like overkill to me. I wish case sensitivity and SpecialCasing mapping were separated...

Updated by shyouhei (Shyouhei Urabe) about 2 years ago

gotoken (Kentaro Goto) wrote:

> By the way, this expansion by the //i option looks like overkill to me. I wish case sensitivity and SpecialCasing mapping were separated...

I know how you feel. Too bad we are just doing what Unicode specifies to do. See also …

Updated by gotoken (Kentaro Goto) about 2 years ago

Thanks shyouhei for pointing that out. I imagine another Regexp option, say //I, which is almost the same as //i except that it never applies SpecialCasing mapping. This change extends Unicode matching indeed but does not introduce incompatibilities, IMHO. A difficulty is that the implementation is in the upstream library and CRuby is just a user.
Updated by duerst (Martin Dürst) about 2 years ago

gotoken (Kentaro Goto) wrote:

> For example, "ss" and "st" are mapped to "ß" ("\u00DF") and "st" ("\uFB06"). Those combinations are listed in …
> By the way, this expansion by the //i option looks like overkill to me. I wish case sensitivity and SpecialCasing mapping were separated...

I still have to verify this, but currently I strongly suspect that the problem is NOT in SpecialCasing, but in how Onigmo (/Oniguruma?) implements it.

Updated by mauromorales (Mauro Morales) 5 months ago

FYI: the issue has been addressed in Onigmo and has already been released in version 6.2.0. I tried it by applying the changes using Ruby 2.6.6 and it works as expected.
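The variable-length case mappings behind this bug are easy to observe from any Unicode-aware language. In Python, for example, case folding expands the characters gotoken mentions into two-letter sequences, which is exactly why a case-insensitive look-behind containing "ss" or "st" ceases to be fixed-width:

```python
# U+00DF LATIN SMALL LETTER SHARP S case-folds to "ss",
# U+FB06 LATIN SMALL LIGATURE ST case-folds to "st".
for ch in ("\u00df", "\ufb06"):
    print(repr(ch), "->", repr(ch.casefold()))

# So a case-insensitive Ruby pattern like /(?<!ass)/i has to consider
# both the three-character sequence "ass" and the two-character
# sequence "a\u00df", giving the look-behind a variable length.
```

This is purely an illustration of the Unicode folding data; Python's re module does not exhibit the Onigmo error discussed in this ticket.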
https://bugs.ruby-lang.org/issues/13671?tab=history
!226 51 2011/02/26 16:54:31226 5.8 release for upload to 49 50 20110226 51 + update release notes, for 5.8. 52 + regenerated html manpages. 53 + change open() in _nc_read_file_entry() to fopen() for consistency 54 with write_file(). 55 + modify misc/run_tic.in to create parent directory, in case this is 56 a new install of hashed database. 57 + fix typo in Ada95/mk-1st.awk which causes error with original awk. 58 59 20110220 60 + configure script rpath fixes from xterm #269. 61 + workaround for cygwin's non-functional features.h, to force ncurses' 62 configure script to define _XOPEN_SOURCE_EXTENDED when building 63 wide-character configuration. 64 + build-fix in run_tic.sh for OS/2 EMX install 65 + add cons25-debian entry (patch by Brian M Carlson, Debian #607662). 66 67 20110212 68 + regenerated html manpages. 69 + use _tracef() in show_where() function of tic, to work correctly with 70 special case of trace configuration. 71 72 20110205 73 + add xterm-utf8 entry as a demo of the U8 feature -TD 74 + add U8 feature to denote entries for terminal emulators which do not 75 support VT100 SI/SO when processing UTF-8 encoding -TD 76 + improve the NCURSES_NO_UTF8_ACS feature by adding a check for an 77 extended terminfo capability U8 (prompted by mailing list 78 discussion). 79 80 20110122 81 + start documenting interface changes for upcoming 5.8 release. 82 + correct limit-checks in derwin(). 83 + correct limit-checks in newwin(), to ensure that windows have nonzero 84 size (report by Garrett Cooper). 85 + fix a missing "weak" declaration for pthread_kill (patch by Nicholas 86 Alcock). 87 + improve documentation of KEY_ENTER in curs_getch.3x manpage (prompted 88 by discussion with Kevin Martin). 89 90 20110115 91 + modify Ada95/configure script to make the --with-curses-dir option 92 work without requiring the --with-ncurses option. 93 + modify test programs to allow them to be built with NetBSD curses. 
94 + document thick- and double-line symbols in curs_add_wch.3x manpage. 95 + document WACS_xxx constants in curs_add_wch.3x manpage. 96 + fix some warnings for clang 2.6 "--analyze" 97 + modify Ada95 makefiles to make html-documentation with the project 98 file configuration if that is used. 99 + update config.guess, config.sub 100 101 20110108 102 + regenerated html manpages. 103 + minor fixes to enable lint when trace is not enabled, e.g., with 104 clang --analyze. 105 + fix typo in man/default_colors.3x (patch by Tim van der Molen). 106 + update ncurses/llib-lncurses* 107 108 20110101 109 + fix remaining strict compiler warnings in ncurses library ABI=5, 110 except those dealing with function pointers, etc. 111 112 20101225 113 + modify nc_tparm.h, adding guards against repeated inclusion, and 114 allowing TPARM_ARG to be overridden. 115 + fix some strict compiler warnings in ncurses library. 116 117 20101211 118 + suppress ncv in screen entry, allowing underline (patch by Alejandro 119 R Sedeno). 120 + also suppress ncv in konsole-base -TD 121 + fixes in wins_nwstr() and related functions to ensure that special 122 characters, i.e., control characters are handled properly with the 123 wide-character configuration. 124 + correct a comparison in wins_nwstr() (Redhat #661506). 125 + correct help-messages in some of the test-programs, which still 126 referred to quitting with 'q'. 127 128 20101204 129 + add special case to _nc_infotocap() to recognize the setaf/setab 130 strings from xterm+256color and xterm+88color, and provide a reduced 131 version which works with termcap. 132 + remove obsolete emacs "Local Variables" section from documentation 133 (request by Sven Joachim). 134 + update doc/html/index.html to include NCURSES-Programming-HOWTO.html 135 (report by Sven Joachim). 
136 137 20101128 138 + modify test/configure and test/Makefile.in to handle this special 139 case of building within a build-tree (Debian #34182): 140 mkdir -p build && cd build && ../test/configure && make 141 142 20101127 143 + miscellaneous build-fixes for Ada95 and test-directories when built 144 out-of-tree. 145 + use VPATH in makefiles to simplify out-of-tree builds (Debian #34182). 146 + fix typo in rmso for tek4106 entry -Goran Weinholt 147 148 20101120 149 + improve checks in test/configure for X libraries, from xterm #267 150 changes. 151 + modify test/configure to allow it to use the build-tree's libraries 152 e.g., when using that to configure the test-programs without the 153 rpath feature (request by Sven Joachim). 154 + repurpose "gnome" terminfo entries as "vte", retaining "gnome" items 155 for compatibility, but generally deprecating those since the VTE 156 library is what actually defines the behavior of "gnome", etc., 157 since 2003 -TD 158 159 20101113 160 + compiler warning fixes for test programs. 161 + various build-fixes for test-programs with pdcurses. 162 + updated configure checks for X packages in test/configure from xterm 163 #267 changes. 164 + add configure check to gnatmake, to accommodate cygwin. 165 166 20101106 167 + correct list of sub-directories needed in Ada95 tree for building as 168 a separate package. 169 + modify scripts in test-directory to improve builds as a separate 170 package. 171 172 20101023 173 + correct parsing of relative tab-stops in tabs program (report by 174 Philip Ganchev). 175 + adjust configure script so that "t" is not added to library suffix 176 when weak-symbols are used, allowing the pthread configuration to 177 more closely match the non-thread naming (report by Werner Fink). 178 + modify configure check for tic program, used for fallbacks, to a 179 warning if not found. 
This makes it simpler to use additonal 180 scripts to bootstrap the fallbacks code using tic from the build 181 tree (report by Werner Fink). 182 + fix several places in configure script using ${variable-value} form. 183 + modify configure macro CF_LDFLAGS_STATIC to accommodate some loaders 184 which do not support selectively linking against static libraries 185 (report by John P. Hartmann) 186 + fix an unescaped dash in man/tset.1 (report by Sven Joachim). 187 188 20101009 189 + correct comparison used for setting 16-colors in linux-16color 190 entry (Novell #644831) -TD 191 + improve linux-16color entry, using "dim" for color-8 which makes it 192 gray rather than black like color-0 -TD 193 + drop misc/ncu-indent and misc/jpf-indent; they are provided by an 194 external package "cindent". 195 196 20101002 197 + improve linkages in html manpages, adding references to the newer 198 pages, e.g., *_variables, curs_sp_funcs, curs_threads. 199 + add checks in tic for inconsistent cursor-movement controls, and for 200 inconsistent printer-controls. 201 + fill in no-parameter forms of cursor-movement where a parameterized 202 form is available -TD 203 + fill in missing cursor controls where the form of the controls is 204 ANSI -TD 205 + fix inconsistent punctuation in form_variables manpage (patch by 206 Sven Joachim). 207 + add parameterized cursor-controls to linux-basic (report by Dae) -TD 208 > patch by Juergen Pfeifer: 209 + document how to build 32-bit libraries in README.MinGW 210 + fixes to filename computation in mk-dlls.sh.in 211 + use POSIX locale in mk-dlls.sh.in rather than en_US (report by Sven 212 Joachim). 213 + add a check in mk-dlls.sh.in to obtain the size of a pointer to 214 distinguish between 32-bit and 64-bit hosts. 
	  The result is stored
	  in mingw_arch

20100925
	+ add "XT" capability to entries for terminals that support both
	  xterm-style mouse- and title-controls, for "screen" which
	  special-cases TERM beginning with "xterm" or "rxvt" -TD
	> patch by Juergen Pfeifer:
	+ use 64-Bit MinGW toolchain (recommended package from TDM, see
	  README.MinGW).
	+ support pthreads when using the TDM MinGW toolchain

20100918
	+ regenerated html manpages.
	+ minor fixes for symlinks to curs_legacy.3x and curs_slk.3x manpages.
	+ add manpage for sp-funcs.
	+ add sp-funcs to test/listused.sh, for documentation aids.

20100911
	+ add manpages for summarizing public variables of curses-, terminfo-
	  and form-libraries.
	+ minor fixes to manpages for consistency (patch by Jason McIntyre).
	+ modify tic's -I/-C dump to reformat acsc strings into canonical form
	  (sorted, unique mapping) (cf: 971004).
	+ add configure check for pthread_kill(), needed for some old
	  platforms.

20100904
	+ add configure option --without-tests, to suppress building test
	  programs (request by Frederic L W Meunier).

20100828
	+ modify nsterm, xnuppc and tek4115 to make sgr/sgr0 consistent -TD
	+ add check in terminfo source-reader to provide more informative
	  message when someone attempts to run tic on a compiled terminal
	  description (prompted by Debian #593920).
	+ note in infotocap and captoinfo manpages that they read terminal
	  descriptions from text-files (Debian #593920).
	+ improve acsc string for vt52, show arrow keys (patch by Benjamin
	  Sittler).

20100814
	+ document in manpages that "mv" functions first use wmove() to check
	  the window pointer and whether the position lies within the window
	  (suggested by Poul-Henning Kamp).
	+ fixes to curs_color.3x, curs_kernel.3x and wresize.3x manpages (patch
	  by Tim van der Molen).
	+ modify configure script to transform library names for tic- and
	  tinfo-libraries so that those build properly with Mac OS X shared
	  library configuration.
	+ modify configure script to ensure that it removes conftest.dSYM
	  directory leftover on checks with Mac OS X.
	+ modify configure script to cleanup after check for symbolic links.

20100807
	+ correct a typo in mk-1st.awk (patch by Gabriele Balducci)
	  (cf: 20100724)
	+ improve configure checks for location of tic and infocmp programs
	  used for installing database and for generating fallback data,
	  e.g., for cross-compiling.
	+ add Markus Kuhn's wcwidth function for compiling MinGW
	+ add special case to CF_REGEX for cross-compiling to MinGW target.

20100731
	+ modify initialization check for win32con driver to eliminate need for
	  special case for TERM "unknown", using terminal database if available
	  (prompted by discussion with Roumen Petrov).
	+ for MinGW port, ensure that terminal driver is setup if tgetent()
	  is called (patch by Roumen Petrov).
	+ document tabs "-0" and "-8" options in manpage.
	+ fix Debian "lintian" issues with manpages reported in

20100724
	+ add a check in tic for missing set_tab if clear_all_tabs given.
	+ improve use of symbolic links in makefiles by using "-f" option if
	  it is supported, to eliminate temporary removal of the target
	  (prompted by)
	+ minor improvement to test/ncurses.c, reset color pairs in 'd' test
	  after exit from 'm' main-menu command.
	+ improved ncu-indent, from mawk changes, allows more than one of
	  GCC_NORETURN, GCC_PRINTFLIKE and GCC_SCANFLIKE on a single line.

20100717
	+ add hard-reset for rs2 to wsvt25 to help ensure that reset ends
	  the alternate character set (patch by Nicholas Marriott)
	+ remove tar-copy.sh and related configure/Makefile chunks, since the
	  Ada95 binding is now installed using rules in Ada95/src.
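The `${variable-value}` fixes noted under 20101023 turn on a detail of POSIX parameter expansion that is easy to get wrong; a minimal sketch (illustrative, not the configure-script code itself):

```shell
# POSIX parameter expansion: the hyphen form only substitutes when the
# variable is unset; the colon-hyphen form also substitutes when it is
# set but empty.  Mixing the two up silently changes script behavior.
unset cfg
echo "${cfg-fallback}"      # unset         -> fallback
cfg=""
echo "${cfg-fallback}"      # set-but-empty -> (empty)
echo "${cfg:-fallback}"     # set-but-empty -> fallback
```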
20100703
	+ continue integrating changes to use gnatmake project files in Ada95
	+ add/use configure check to turn on project rules for Ada95/src.
	+ revert the vfork change from 20100130, since it does not work.

20100626
	+ continue integrating changes to use gnatmake project files in Ada95
	+ old gnatmake (3.15) does not produce libraries using project-file;
	  work around by adding script to generate alternate makefile.

20100619
	+ continue integrating changes to use gnatmake project files in Ada95
	+ add configure --with-ada-sharedlib option, for the test_make rule.
	+ move Ada95-related logic into aclocal.m4, since additional checks
	  will be needed to distinguish old/new implementations of gnat.

20100612
	+ start integrating changes to use gnatmake project files in Ada95 tree
	+ add test_make / test_clean / test_install rules in Ada95/src
	+ change install-path for adainclude directory to /usr/share/ada (was
	  /usr/lib/ada).
	+ update Ada95/configure.
	+ add mlterm+256color entry, for mlterm 3.0.0 -TD
	+ modify test/configure to use macros to ensure consistent order
	  of updating LIBS variable.

20100605
	+ change search order of options for Solaris in CF_SHARED_OPTS, to
	  work with 64-bit compiles.
	+ correct quoting of assignment in CF_SHARED_OPTS case for aix
	  (cf: 20081227)

20100529
	+ regenerated html documentation.
	+ modify test/configure to support pkg-config for checking X libraries
	  used by PDCurses.
	+ add/use configure macro CF_ADD_LIB to force consistency of
	  assignments to $LIBS, etc.
	+ fix configure script for combining --with-pthread
	  and --enable-weak-symbols options.

20100522
	+ correct cross-compiling configure check for CF_MKSTEMP macro, by
	  adding a check of the cache variable set by AC_CHECK_FUNC (report by
	  Pierre Labastie).
	+ simplify include-dependencies of make_hash and make_keys, to reduce
	  the need for setting BUILD_CPPFLAGS in cross-compiling when the
	  build- and target-machines differ.
	+ repair broken-linker configuration by restoring a definition of SP
	  variable to curses.priv.h, and adjusting for cases where sp-funcs
	  are used.
	+ improve configure macro CF_AR_FLAGS, allowing ARFLAGS environment
	  variable to override (prompted by report by Pablo Cazallas).

20100515
	+ add configure option --enable-pthreads-eintr to control whether the
	  new EINTR feature is enabled.
	+ modify logic in pthread configuration to allow EINTR to interrupt
	  a read operation in wgetch() (Novell #540571, patch by Werner Fink).
	+ drop mkdirs.sh, use "mkdir -p".
	+ add configure option --disable-libtool-version, to use the
	  "-version-number" feature which was added in libtool 1.5 (report by
	  Peter Haering).  The default value for the option uses the newer
	  feature, which makes libraries generated using libtool compatible
	  with the standard builds of ncurses.
	+ updated test/configure to match configure script macros.
	+ fixes for configure script from lynx changes:
	  + improve CF_FIND_LINKAGE logic for the case where a function is
	    found in predefined libraries.
	  + revert part of change to CF_HEADER (cf: 20100424)

20100501
	+ correct limit-check in wredrawln, accounting for begy/begx values
	  (patch by David Benjamin).
	+ fix most compiler warnings from clang.
	+ amend build-fix for OpenSolaris, to ensure that a system header is
	  included in curses.h before testing feature symbols, since they
	  may be defined by that route.

20100424
	+ fix some strict compiler warnings in ncurses library.
	+ modify configure macro CF_HEADER_PATH to not look for variations in
	  the predefined include directories.
	+ improve configure macros CF_GCC_VERSION and CF_GCC_WARNINGS to work
	  with gcc 4.x's c89 alias, which gives warning messages for cases
	  where older versions would produce an error.

20100417
	+ modify _nc_capcmp() to work with cancelled strings.
	+ correct translation of "^" in _nc_infotocap(), used to transform
	  terminfo to termcap strings
	+ add configure --disable-rpath-hack, to allow disabling the feature
	  which adds rpath options for libraries in unusual places.
	+ improve CF_RPATH_HACK_2 by checking if the rpath option for a given
	  directory was already added.
	+ improve CF_RPATH_HACK_2 by using ldd to provide a standard list of
	  directories (which will be ignored).

20100410
	+ improve win_driver.c handling of mouse:
	  + discard motion events
	  + avoid calling _nc_timed_wait when there is a mouse event
	  + handle 4th and "rightmost" buttons.
	+ quote substitutions in CF_RPATH_HACK_2 configure macro, needed for
	  cases where there are embedded blanks in the rpath option.

20100403
	+ add configure check for exctags vs ctags, to work around pkgsrc.
	+ simplify logic in _nc_get_screensize() to make it easier to see how
	  environment variables may override system- and terminfo-values
	  (prompted by discussion with Igor Bujna).
	+ make debug-traces for COLOR_PAIR and PAIR_NUMBER less verbose.
	+ improve handling of color-pairs embedded in attributes for the
	  extended-colors configuration.
	+ modify MKlib_gen.sh to build link_test with sp-funcs.
	+ build-fixes for OpenSolaris aka Solaris 11, for wide-character
	  configuration as well as for rpath feature in *-config scripts.

20100327
	+ refactor CF_SHARED_OPTS configure macro, making CF_RPATH_HACK more
	  reusable.
	+ improve configure CF_REGEX, similar fixes.
	+ improve configure CF_FIND_LINKAGE, adding a check between system
	  (default) and explicit paths, where we can find the entrypoint in the
	  given library.
	+ add check if Gpm_Open() returns a -2, e.g., for "xterm".  This is
	  normally suppressed but can be overridden using $NCURSES_GPM_TERMS.
	  Ensure that Gpm_Close() is called in this case.

20100320
	+ rename atari and st52 terminfo entries to atari-old, st52-old, use
	  newer entries from FreeMiNT by Guido Flohr (from patch/report by Alan
	  Hourihane).

20100313
	+ modify install-rule for manpages so that *-config manpages will
	  install when building with --srcdir (report by Sven Joachim).
	+ modify CF_DISABLE_LEAKS configure macro so that the --enable-leaks
	  option is not the same as --disable-leaks (GenToo #305889).
	+ modify #define's for build-compiler to suppress cchar_t symbol from
	  compile of make_hash and make_keys, improving cross-compilation of
	  ncursesw (report by Bernhard Rosenkraenzer).
	+ modify CF_MAN_PAGES configure macro to replace all occurrences of
	  TPUT in tput.1's manpage (Debian #573597, report/analysis by Anders
	  Kaseorg).

20100306
	+ generate manpages for the *-config scripts, adapted from help2man
	  (suggested by Sven Joachim).
	+ use va_copy() in _nc_printf_string() to avoid conflicting use of
	  va_list value in _nc_printf_length() (report by Wim Lewis).

20100227
	+ add Ada95/configure script, to use in tar-file created by
	  Ada95/make-tar.sh
	+ fix typo in wresize.3x (patch by Tim van der Molen).
	+ modify screen-bce.XXX entries to exclude ech, since screen's color
	  model does not clear with color for that feature -TD

20100220
	+ add make-tar.sh scripts to Ada95 and test subdirectories to help with
	  making those separately distributable.
	+ build-fix for static libraries without dlsym (Debian #556378).
	+ fix a syntax error in man/form_field_opts.3x (patch by Ingo
	  Schwarze).

20100213
	+ add several screen-bce.XXX entries -TD

20100206
	+ update mrxvt terminfo entry -TD
	+ modify win_driver.c to support mouse single-clicks.
	+ correct name for termlib in ncurses*-config, e.g., if it is renamed
	  to provide a single file for ncurses/ncursesw libraries (patch by
	  Miroslav Lichvar).

20100130
	+ use vfork in test/ditto.c if available (request by Mike Frysinger).
	+ miscellaneous cleanup of manpages.
	+ fix typo in curs_bkgd.3x (patch by Tim van der Molen).
	+ build-fix for --srcdir (patch by Miroslav Lichvar).

20100123
	+ for term-driver configuration, ensure that the driver pointer is
	  initialized in setupterm so that terminfo/termcap programs work.
	+ amend fix for Debian #542031 to ensure that wattrset() returns only
	  OK or ERR, rather than the attribute value (report by Miroslav
	  Lichvar).
	+ reorder WINDOWLIST to put WINDOW data after SCREEN pointer, making
	  _nc_screen_of() compatible between normal/wide libraries again (patch
	  by Miroslav Lichvar)
	+ review/fix include-dependencies in modules files (report by Miroslav
	  Lichvar).

20100116
	+ modify win_driver.c to initialize acs_map for win32 console, so
	  that line-drawing works.
	+ modify win_driver.c to initialize TERMINAL struct so that programs
	  such as test/lrtest.c and test/ncurses.c which test string
	  capabilities can run.
	+ modify term-driver modules to eliminate forward-reference
	  declarations.

20100109
	+ modify configure macro CF_XOPEN_SOURCE, etc., to use CF_ADD_CFLAGS
	  consistently to add new -D's while removing duplicates.
	+ modify a few configure macros to consistently put new options
	  before older in the list.
	+ add tiparm(), based on review of X/Open Curses Issue 7.
	+ minor documentation cleanup.
	+ update config.guess, config.sub from
	  (caveat - its maintainer put 2010 copyright date on files dated 2009)

20100102
	+ minor improvement to tic's checking of similar SGR's to allow for the
	  most common case of SGR 0.
	+ modify getmouse() to act as its documentation implied, returning on
	  each call the preceding event until none are left.  When no more
	  events remain, it will return ERR.

20091227
	+ change order of lookup in progs/tput.c, looking for terminfo data
	  first.  This fixes a confusion between termcap "sg" and terminfo
	  "sgr" or "sgr0", originally from 990123 changes, but exposed by
	  20091114 fixes for hashing.  With this change, only "dl" and "ed"
	  are ambiguous (Mandriva #56272).

20091226
	+ add bterm terminfo entry, based on bogl 0.1.18 -TD
	+ minor fix to rxvt+pcfkeys terminfo entry -TD
	+ build-fixes for Ada95 tree for gnat 4.4 "style".

20091219
	+ remove old check in mvderwin() which prevented moving a derived
	  window whose origin happened to coincide with its parent's origin
	  (report by Katarina Machalkova).
	+ improve test/ncurses.c to put mouse droppings in the proper window.
	+ update minix terminfo entry -TD
	+ add bw (auto-left-margin) to nsterm* entries (Benjamin Sittler)

20091212
	+ correct transfer of multicolumn characters in multirow
	  field_buffer(), which stopped at the end of the first row due to
	  filling of unused entries in a cchar_t array with nulls.
	+ updated nsterm* entries (Benjamin Sittler, Emanuele Giaquinta)
	+ modify _nc_viscbuf2() and _tracecchar_t2() to show wide-character
	  nulls.
	+ use strdup() in set_menu_mark(), restore .marklen struct member on
	  failure.
	+ eliminate clause 3 from the UCB copyrights in read_termcap.c and
	  tset.c per
	  (patch by Nicholas Marriott).
	+ replace a malloc in tic.c with strdup, checking for failure (patch by
	  Nicholas Marriott).
	+ update config.guess, config.sub from

20091205
	+ correct layout of working window used to extract data in
	  wide-character configured by set_field_buffer (patch by Rafael
	  Garrido Fernandez)
	+ improve some limit-checks related to filename length in reading and
	  writing terminfo entries.
	+ ensure that filename is always filled in when attempting to read
	  a terminfo entry, so that infocmp can report the filename (patch
	  by Nicholas Marriott).

20091128
	+ modify mk-1st.awk to allow tinfo library to be built when term-driver
	  is enabled.
	+ add error-check to configure script to ensure that sp-funcs is
	  enabled if term-driver is, since some internal interfaces rely upon
	  this.

20091121
	+ fix case where progs/tput is used while sp-funcs is configured; this
	  requires save/restore of out-character function from _nc_prescreen
	  rather than the SCREEN structure (report by Charles Wilson).
	+ fix typo in man/curs_trace.3x which caused incorrect symbolic links
	+ improved configure macros CF_GCC_ATTRIBUTES, CF_PROG_LINT.

20091114
	+ updated man/curs_trace.3x
	+ limit hashing for termcap-names to 2-characters (Ubuntu #481740).
	+ change a variable name in lib_newwin.c to make it clearer which
	  value is being freed on error (patch by Nicholas Marriott).

20091107
	+ improve test/ncurses.c color-cycling test by reusing attribute-
	  and color-cycling logic from the video-attributes screen.
	+ add ifdef'd with NCURSES_INTEROP_FUNCS experimental bindings in form
	  library which help make it compatible with interop applications
	  (patch by Juergen Pfeifer).
	+ add configure option --enable-interop, for integrating changes
	  for generic/interop support to form-library by Juergen Pfeifer

20091031
	+ modify use of $CC environment variable which is defined by X/Open
	  as a curses feature, to ignore it if it is not a single character
	  (prompted by discussion with Benjamin C W Sittler).
	+ add START_TRACE in slk_init
	+ fix a regression in _nc_ripoffline which made test/ncurses.c not show
	  soft-keys, broken in 20090927 merging.
	+ change initialization of "hidden" flag for soft-keys from true to
	  false, broken in 20090704 merging (Ubuntu #464274).
	+ update nsterm entries (patch by Benjamin C W Sittler, prompted by
	  discussion with Fabian Groffen in GenToo #206201).
	+ add test/xterm-256color.dat

20091024
	+ quiet some pedantic gcc warnings.
	+ modify _nc_wgetch() to check for a -1 in the fifo, e.g., after a
	  SIGWINCH, and discard that value, to avoid confusing application
	  (patch by Eygene Ryabinkin, FreeBSD bin/136223).

20091017
	+ modify handling of $PKG_CONFIG_LIBDIR to use only the first item in
	  a possibly colon-separated list (Debian #550716).

20091010
	+ supply a null-terminator to buffer in _nc_viswibuf().
	+ fix a sign-extension bug in unget_wch() (report by Mike Gran).
	+ minor fixes to error-returns in default function for tputs, as well
	  as in lib_screen.c

20091003
	+ add WACS_xxx definitions to wide-character configuration for thick-
	  and double-lines (discussion with Slava Zanko).
	+ remove unnecessary kcan assignment to ^C from putty (Sven Joachim)
	+ add ccc and initc capabilities to xterm-16color -TD
	> patch by Benjamin C W Sittler:
	+ add linux-16color
	+ correct initc capability of linux-c-nc end-of-range
	+ similar change for dg+ccc and dgunix+ccc

20090927
	+ move leak-checking for comp_captab.c into _nc_leaks_tinfo() since
	  that module since 20090711 is in libtinfo.
	+ add configure option --enable-term-driver, to allow compiling with
	  terminal-driver.  That is used in MinGW port, and (being somewhat
	  more complicated) is an experimental alternative to the conventional
	  termlib internals.  Currently, it requires the sp-funcs feature to
	  be enabled.
	+ completed integrating "sp-funcs" by Juergen Pfeifer in ncurses
	  library (some work remains for forms library).

20090919
	+ document return code from define_key (report by Mike Gran).
	+ make some symbolic links in the terminfo directory-tree shorter
	  (patch by Daniel Jacobowitz, forwarded by Sven Joachim).
	+ fix some groff warnings in terminfo.5, etc., from recent Debian
	  changes.
	+ change ncv and op capabilities in sun-color terminfo entry to match
	  Sun's entry for this (report by Laszlo Peter).
	+ improve interix smso terminfo capability by using reverse rather than
	  bold (report by Kristof Zelechovski).

20090912
	+ add some test programs (and make these use the same special keys
	  by sharing linedata.h functions):
	  test/test_addstr.c
	  test/test_addwstr.c
	  test/test_addchstr.c
	  test/test_add_wchstr.c
	+ correct internal _nc_insert_ch() to use _nc_insert_wch() when
	  inserting wide characters, since the wins_wch() function that it used
	  did not update the cursor position (report by Ciprian Craciun).

20090906
	+ fix typo s/is_timeout/is_notimeout/ which made "man is_notimeout" not
	  work.
	+ add null-pointer checks to other opaque-functions.
	+ add is_pad() and is_subwin() functions for opaque access to WINDOW
	  (discussion with Mark Dickinson).
	+ correct merge to lib_newterm.c, which broke when sp-funcs was
	  enabled.

20090905
	+ build-fix for building outside source-tree (report by Sven Joachim).
	+ fix Debian lintian warning for man/tabs.1 by making section number
	  agree with file-suffix (report by Sven Joachim).
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090829
	+ workaround for bug in g++ 4.1-4.4 warnings for wattrset() macro on
	  amd64 (Debian #542031).
	+ fix typo in curs_mouse.3x (Debian #429198).

20090822
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090815
	+ correct use of terminfo capabilities for initializing soft-keys,
	  broken in 20090509 merging.
	+ modify wgetch() to ensure it checks SIGWINCH when it gets an error
	  in non-blocking mode (patch by Clemens Ladisch).
	+ use PATH_SEPARATOR symbol when substituting into run_tic.sh, to
	  help with builds on non-Unix platforms such as OS/2 EMX.
	+ modify scripting for misc/run_tic.sh to test configure script's
	  $cross_compiling variable directly rather than comparing host/build
	  compiler names (prompted by comment in GenToo #249363).
	+ fix configure script option --with-database, which was coded as an
	  enable-type switch.
	+ build-fixes for --srcdir (report by Frederic L W Meunier).

20090808
	+ separate _nc_find_entry() and _nc_find_type_entry() from
	  implementation details of hash function.

20090803
	+ add tabs.1 to man/man_db.renames
	+ modify lib_addch.c to compensate for removal of wide-character test
	  from unctrl() in 20090704 (Debian #539735).

20090801
	+ improve discussion in INSTALL for use of system's tic/infocmp for
	  cross-compiling and building fallbacks.
	+ modify test/demo_termcap.c to correspond better to options in
	  test/demo_terminfo.c
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
	+ fix logic for 'V' in test/ncurses.c tests f/F.

20090728
	+ correct logic in tigetnum(), which caused tput program to treat all
	  string capabilities as numeric (report by Rajeev V Pillai,
	  cf: 20090711).

20090725
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090718
	+ fix a null-pointer check in _nc_format_slks() in lib_slk.c, from
	  20070704 changes.
	+ modify _nc_find_type_entry() to use hashing.
	+ make CCHARW_MAX value configurable, noting that changing this would
	  change the size of cchar_t, and would be ABI-incompatible.
	+ modify test-programs, e.g., test/view.c, to address subtle
	  differences between Tru64/Solaris and HPUX/AIX getcchar() return
	  values.
	+ modify length returned by getcchar() to count the trailing null
	  which is documented in X/Open (cf: 20020427).
	+ fixes for test programs to build/work on HPUX and AIX, etc.

20090711
	+ improve performance of tigetstr, etc., by using hashing code from tic.
	+ minor fixes for memory-leak checking.
	+ add test/demo_terminfo, for comparison with demo_termcap

20090704
	+ remove wide-character checks from unctrl() (patch by Clemens Ladisch).
	+ revise wadd_wch() and wecho_wchar() to eliminate dependency on
	  unctrl().
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090627
	+ update llib-lncurses[wt] to use sp-funcs.
	+ various code-fixes to build/work with --disable-macros configure
	  option.
	+ add several new files from Juergen Pfeifer which will be used when
	  integration of "sp-funcs" is complete.  This includes a port to
	  MinGW.
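The 20091017 entry above trims $PKG_CONFIG_LIBDIR to the first item of a colon-separated list; the usual portable idiom for that is a longest-suffix strip, sketched here (illustrative, not the actual configure-script code):

```shell
# Keep only the first item of a colon-separated search list.
# ${var%%:*} removes the longest suffix that starts at the first colon;
# a value with no colon is returned unchanged.
PKG_CONFIG_LIBDIR="/usr/lib/pkgconfig:/usr/share/pkgconfig"
first=${PKG_CONFIG_LIBDIR%%:*}
echo "$first"               # -> /usr/lib/pkgconfig
```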
20090613
	+ move definition for NCURSES_WRAPPED_VAR back to ncurses_dll.h, to
	  make includes of term.h without curses.h work (report by "Nix").
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090607
	+ fix a regression in lib_tputs.c, from ongoing merges.

20090606
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090530
	+ fix an infinite recursion when adding a legacy-coding 8-bit value
	  using insch() (report by Clemens Ladisch).
	+ free home-terminfo string in del_curterm() (patch by Dan Weber).
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090523
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090516
	+ work around antique BSD game's manipulation of stdscr, etc., versus
	  SCREEN's copy of the pointer (Debian #528411).
	+ add a cast to wattrset macro to avoid compiler warning when comparing
	  its result against ERR (adapted from patch by Matt Kraii, Debian
	  #528374).

20090510
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090502
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
	+ add vwmterm terminfo entry (patch by Bryan Christ).

20090425
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090419
	+ build fix for _nc_free_and_exit() change in 20090418 (report by
	  Christian Ebert).

20090418
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090411
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
	  This change finishes merging for menu and panel libraries, does
	  part of the form library.

20090404
	+ suppress configure check for static/dynamic linker flags for gcc on
	  Darwin (report by Nelson Beebe).
20090328
	+ extend ansi.sys pfkey capability from kf1-kf10 to kf1-kf48, moving
	  function key definitions from emx-base for consistency -TD
	+ correct missing final 'p' in pfkey capability of ansi.sys-old (report
	  by Kalle Olavi Niemitalo).
	+ improve test/ncurses.c 'F' test, show combining characters in color.
	+ quiet a false report by cppcheck in c++/cursesw.cc by eliminating
	  a temporary variable.
	+ use _nc_doalloc() rather than realloc() in a few places in ncurses
	  library to avoid leak in out-of-memory condition (reports by William
	  Egert and Martin Ettl based on cppcheck tool).
	+ add --with-ncurses-wrap-prefix option to test/configure (discussion
	  with Charles Wilson).
	+ use ncurses*-config scripts if available for test/configure.
	+ update test/aclocal.m4 and test/configure
	> patches by Charles Wilson:
	+ modify CF_WITH_LIBTOOL configure check to allow unreleased libtool
	  version numbers (e.g. which include alphabetic chars, as well as
	  digits, after the final '.').
	+ improve use of -no-undefined option for libtool by setting an
	  intermediate variable LT_UNDEF in the configure script, and then
	  using that in the libtool link-commands.
	+ fix a missing use of NCURSES_PUBLIC_VAR() in tinfo/MKcodes.awk
	  from 2009031 changes.
	+ improve mk-1st.awk script by writing separate cases for the
	  LIBTOOL_LINK command, depending on which library (ncurses, ticlib,
	  termlib) is to be linked.
	+ modify configure.in to allow broken-linker configurations, not just
	  enable-reentrant, to set public wrap prefix.

20090321
	+ add TICS_LIST and SHLIB_LIST to allow libtool 2.2.6 on Cygwin to
	  build with tic and term libraries (patch by Charles Wilson).
	+ add -no-undefined option to libtool for Cygwin, MinGW, U/Win and AIX
	  (report by Charles Wilson).
	+ fix definition for c++/Makefile.in's SHLIB_LIST, which did not list
	  the form, menu or panel libraries (patch by Charles Wilson).
	+ add configure option --with-wrap-prefix to allow setting the prefix
	  for functions used to wrap global variables to something other than
	  "_nc_" (discussion with Charles Wilson).

20090314
	+ modify scripts to generate ncurses*-config and pc-files to add
	  dependency for tinfo library (patch by Charles Wilson).
	+ improve comparison of program-names when checking for linked flavors
	  such as "reset" by ignoring the executable suffix (reports by Charles
	  Wilson, Samuel Thibault and Cedric Bretaudeau on Cygwin mailing
	  list).
	+ suppress configure check for static/dynamic linker flags for gcc on
	  Solaris 10, since gcc is confused by absence of static libc, and
	  does not switch back to dynamic mode before finishing the libraries
	  (reports by Joel Bertrand, Alan Pae).
	+ minor fixes to Intel compiler warning checks in configure script.
	+ modify _nc_leaks_tinfo() so leak-checking in test/railroad.c works.
	+ modify set_curterm() to make broken-linker configuration work with
	  changes from 20090228 (report by Charles Wilson).

20090228
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).
	+ modify declaration of cur_term when broken-linker is used, but
	  enable-reentrant is not, to match pre-5.7 (report by Charles Wilson).

20090221
	+ continue integrating "sp-funcs" by Juergen Pfeifer (incomplete).

20090214
	+ add configure script --enable-sp-funcs to enable the new set of
	  extended functions.
	+ start integrating patches by Juergen Pfeifer:
	  + add extended functions which specify the SCREEN pointer for several
	    curses functions which use the global SP (these are incomplete;
	    some internals work is needed to complete these).
	  + add special cases to configure script for MinGW port.
20090207
	+ update several configure macros from lynx changes
	  + append (not prepend) to CFLAGS/CPPFLAGS
	  + change variable from PATHSEP to PATH_SEPARATOR
	+ improve install-rules for pc-files (patch by Miroslav Lichvar).
	  + make it work with $DESTDIR
	  + create the pkg-config library directory if needed.

20090124
	+ modify init_pair() to allow caller to create extra color pairs beyond
	  the color_pairs limit, which use default colors (request by Emanuele
	  Giaquinta).
	+ add misc/terminfo.tmp and misc/*.pc to "sources" rule.
	+ fix typo "==" where "=" is needed in ncurses-config.in and
	  gen-pkgconfig.in files (Debian #512161).

20090117
	+ add -shared option to MK_SHARED_LIB when -Bsharable is used, for
	  *BSD's, without which "main" might be one of the shared library's
	  dependencies (report/analysis by Ken Dickey).
	+ modify waddch_literal(), updating line-pointer after a multicolumn
	  character is found to not fit on the current row, and wrapping is
	  done.  Since the line-pointer was not updated, the wrapped
	  multicolumn character was written to the beginning of the current row
	  (cf: 20041023, reported by "Nick" regarding problem with ncmpc).

20090110
	+ add screen.Eterm terminfo entry (GenToo #124887) -TD
	+ modify adacurses-config to look for ".ali" files in the adalib
	  directory.
	+ correct install for Ada95, which omitted libAdaCurses.a used in
	  adacurses-config
	+ change install for adacurses-config to provide additional flavors
	  such as adacursesw-config, for ncursesw (GenToo #167849).

20090105
	+ remove undeveloped feature in ncurses-config.in for setting
	  prefix variable.
	+ recent change to ncurses-config.in did not take into account the
	  --disable-overwrite option, which sets $includedir to the
	  subdirectory and using just that for a -I option does not work - fix
	  (report by Frederic L W Meunier).
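The `"=="` typo fixed under 20090124 is worth spelling out: in shell, an extra `=` in an assignment is not a syntax error, it just stores a leading literal `=`, so the mistake is easy to miss. A minimal sketch:

```shell
# An accidental "==" in a shell assignment silently stores a literal "=":
prefix=="/usr/local"
echo "$prefix"              # -> =/usr/local
prefix="/usr/local"
echo "$prefix"              # -> /usr/local
```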

20090104
+ modify gen-pkgconfig.in to eliminate a dependency on rpath when
  deciding whether to add $LIBS to --libs output; that should be shown
  for the ncurses and tinfo libraries without taking rpath into
  account.
+ fix an overlooked change from $AR_OPTS to $ARFLAGS in mk-1st.awk,
  used in static libraries (report by Marty Jack).

20090103
+ add a configure-time check to pick a suitable value for
  CC_SHARED_OPTS for Solaris (report by Dagobert Michelsen).
+ add configure --with-pkg-config and --enable-pc-files options, along
  with misc/gen-pkgconfig.in which can be used to generate ".pc" files
  for pkg-config (request by Jan Engelhardt).
+ use $includedir symbol in misc/ncurses-config.in, add --includedir
  option.
+ change makefiles to use $ARFLAGS rather than $AR_OPTS, provide a
  configure check to detect whether a "-" is needed before "ar"
  options.
+ update config.guess, config.sub from

20081227
+ modify mk-1st.awk to work with extra categories for tinfo library.
+ modify configure script to allow building shared libraries with gcc
  on AIX 5 or 6 (adapted from patch by Lital Natan).

20081220
+ modify to omit the opaque-functions from lib_gen.o when
  --disable-ext-funcs is used.
+ add test/clip_printw.c to illustrate how to use printw without
  wrapping.
+ modify ncurses 'F' test to demo wborder_set() with colored lines.
+ modify ncurses 'f' test to demo wborder() with colored lines.

20081213
+ add check for failure to open hashed-database needed for db4.6
  (GenToo #245370).
+ corrected --without-manpages option; previous change only suppressed
  the auxiliary rules install.man and uninstall.man
+ add case for FreeMINT to configure macro CF_XOPEN_SOURCE (patch from
  GenToo #250454).
+ fixes from NetBSD port at

  patch-ac (build-fix for DragonFly)
  patch-ae (use INSTALL_SCRIPT for installing misc/ncurses*-config).
+ improve configure script macros CF_HEADER_PATH and CF_LIBRARY_PATH
  by adding CFLAGS, CPPFLAGS and LDFLAGS, LIBS values to the
  search-lists.
+ correct title string for keybound manpage (patch by Frederic Culot,
  OpenBSD documentation/6019),

20081206
+ move del_curterm() call from _nc_freeall() to _nc_leaks_tinfo() to
  work for progs/clear, progs/tabs, etc.
+ correct buffer-size after internal resizing of wide-character
  set_field_buffer(), broken in 20081018 changes (report by Mike Gran).
+ add "-i" option to test/filter.c to tell it to use initscr() rather
  than newterm(), to investigate report on comp.unix.programmer that
  ncurses would clear the screen in that case (it does not - the issue
  was xterm's alternate screen feature).
+ add check in mouse-driver to disable connection if GPM returns a
  zero, indicating that the connection is closed (Debian #506717,
  adapted from patch by Samuel Thibault).

20081129
+ improve a workaround in adding wide-characters, when a control
  character is found.  The library (cf: 20040207) uses unctrl() to
  obtain a printable version of the control character, but was not
  passing color or video attributes.
+ improve test/ncurses.c 'a' test, using unctrl() more consistently to
  display meta-characters.
+ turn on _XOPEN_CURSES definition in curses.h
+ add eterm-color entry (report by Vincent Lefevre) -TD
+ correct use of key_name() in test/ncurses.c 'A' test, which only
  displays wide-characters, not key-codes since 20070612 (report by
  Ricardo Cantu).

20081122
+ change _nc_has_mouse() to has_mouse(), reflect its use in C++ and
  Ada95 (patch by Juergen Pfeifer).
+ document in TO-DO an issue with Cygwin's package for GNAT (report
  by Mike Dennison).
+ improve error-checking of command-line options in "tabs" program.

20081115
+ change several terminfo entries to make consistent use of ANSI
  clear-all-tabs -TD
+ add "tabs" program (prompted by Debian #502260).
+ add configure --without-manpages option (request by Mike Frysinger).

20081102	5.7 release for upload to

20081025
+ add a manpage to discuss memory leaks.
+ add support for shared libraries for QNX (other than libtool, which
  does not work well on that platform).
+ build-fix for QNX C++ binding.

20081018
+ build-fixes for OS/2 EMX.
+ modify form library to accept control characters such as newline
  in set_field_buffer(), which is compatible with Solaris (report by
  Nit Khair).
+ modify configure script to assume --without-hashed-db when
  --disable-database is used.
+ add "-e" option in ncurses/Makefile.in when generating source-files
  to force earlier exit if the build environment fails unexpectedly
  (prompted by patch by Adrian Bunk).
+ change configure script to use CF_UTF8_LIB, improved variant of
  CF_LIBUTF8.

20081012
+ add teraterm4.59 terminfo entry, use that as primary teraterm entry,
  rename original to teraterm2.3 -TD
+ update "gnome" terminfo to 2.22.3 -TD
+ update "konsole" terminfo to 1.6.6, needs today's fix for tic -TD
+ add "aterm" terminfo -TD
+ add "linux2.6.26" terminfo -TD
+ add logic to tic for cancelling strings in user-defined capabilities,
  overlooked til now.

20081011
+ regenerated html documentation.
+ add -m and -s options to test/keynames.c and test/key_names.c to test
  the meta() function with keyname() or key_name(), respectively.
+ correct return value of key_name() on error; it is null.
+ document some unresolved issues for rpath and pthreads in TO-DO.
+ fix a missing prototype for ioctl() on OpenBSD in tset.c
+ add configure option --disable-tic-depends to make explicit whether
  tic library depends on ncurses/ncursesw library, amends change from
  20080823 (prompted by Debian #501421).

20081004
+ some build-fixes for configure --disable-ext-funcs (incomplete, but
  works for C/C++ parts).
+ improve configure-check for awks unable to handle large strings, e.g.
  AIX 5.1 whose awk silently gives up on large printf's.

20080927
+ fix build for --with-dmalloc by workaround for redefinition of
  strndup between string.h and dmalloc.h
+ fix build for --disable-sigwinch
+ add environment variable NCURSES_GPM_TERMS to allow override to use
  GPM on terminals other than "linux", etc.
+ disable GPM mouse support when $TERM does not happen to contain
  "linux", since Gpm_Open() no longer limits its assertion to terminals
  that it might handle, e.g., within "screen" in xterm.
+ reset mouse file-descriptor when unloading GPM library (report by
  Miroslav Lichvar).
+ fix build for --disable-leaks --enable-widec --with-termlib
> patch by Juergen Pfeifer:
+ use improved initialization for soft-label keys in Ada95 sample code.
+ discard internal symbol _nc_slk_format (unused since 20080112).
+ move call of slk_paint_info() from _nc_slk_initialize() to
  slk_intern_refresh(), improving initialization.

20080925
+ fix bug in mouse code for GPM from 20080920 changes (reported in
  Debian #500103, also Miroslav Lichvar).

20080920
+ fix shared-library rules for cygwin with tic- and tinfo-libraries.
+ fix a memory leak when failure to connect to GPM.
+ correct check for notimeout() in wgetch() (report on linux.redhat
  newsgroup by FurtiveBertie).
+ add an example warning-suppression file for valgrind,
  misc/ncurses.supp (based on example from Reuben Thomas)

20080913
+ change shared-library configuration for OpenBSD, make rpath work.
+ build-fixes for using libutf8, e.g., on OpenBSD 3.7

20080907
+ corrected fix for --enable-weak-symbols (report by Frederic L W
  Meunier).

20080906
+ corrected gcc options for building shared libraries on IRIX64.
+ add configure check for awk programs unable to handle big-strings,
  use that to improve the default for --enable-big-strings option.
+ makefile-fixes for --enable-weak-symbols (report by Frederic L W
  Meunier).
+ update test/configure script.
+ adapt ifdef's from library to make test/view.c build when mbrtowc()
  is unavailable, e.g., with HPUX 10.20.
+ add configure check for wcsrtombs, mbsrtowcs, which are used in
  test/ncurses.c, and use wcstombs, mbstowcs instead if available,
  fixing build of ncursew for HPUX 11.00

20080830
+ fixes to make Ada95 demo_panels() example work.
+ modify Ada95 'rain' test program to accept keyboard commands like the
  C-version.
+ modify BeOS-specific ifdef's to build on Haiku (patch by Scott
  Mccreary).
+ add configure-check to see if the std namespace is legal for cerr
  and endl, to fix a build issue with Tru64.
+ consistently use NCURSES_BOOL in lib_gen.c
+ filter #line's from lib_gen.c
+ change delimiter in MKlib_gen.sh from '%' to '@', to avoid
  substitution by IBM xlc to '#' as part of its extensions to digraphs.
+ update config.guess, config.sub from

  (caveat - its maintainer removed support for older Linux systems).

20080823
+ modify configure check for pthread library to work with OSF/1 5.1,
  which uses #define's to associate its header and library.
+ use pthread_mutexattr_init() for initializing pthread_mutexattr_t,
  makes threaded code work on HPUX 11.23
+ fix a bug in demo_menus in freeing menus (cf: 20080804).
+ modify configure script for the case where tic library is used (and
  possibly renamed) to remove its dependency upon ncurses/ncursew
  library (patch by Dr Werner Fink).
+ correct manpage for menu_fore() which gave wrong default for
  the attribute used to display a selected entry (report by Mike Gran).
+ add Eterm-256color, Eterm-88color and rxvt-88color (prompted by
  Debian #495815) -TD

20080816
+ add configure option --enable-weak-symbols to turn on new feature.
+ add configure-check for availability of weak symbols.
+ modify linkage with pthread library to use weak symbols so that
  applications not linked to that library will not use the mutexes,
  etc.  This relies on gcc, and may be platform-specific (patch by Dr
  Werner Fink).
+ add note to INSTALL to document limitation of renaming of tic library
  using the --with-ticlib configure option (report by Dr Werner Fink).
+ document (in manpage) why tputs does not detect I/O errors (prompted
  by comments by Samuel Thibault).
+ fix remaining warnings from Klocwork report.

20080804
+ modify _nc_panelhook() data to account for a permanent memory leak.
+ fix memory leaks in test/demo_menus
+ fix most warnings from Klocwork tool (report by Larry Zhou).
+ modify configure script CF_XOPEN_SOURCE macro to add case for
  "dragonfly" from xterm #236 changes.
+ modify configure script --with-hashed-db to let $LIBS override the
  search for the db library (prompted by report by Samson Pierre).

20080726
+ build-fixes for gcc 4.3.1 (changes to gnat "warnings", and C inlining
  thresholds).

20080713
+ build-fix (reports by Christian Ebert, Funda Wang).

20080712
+ compiler-warning fixes for Solaris.

20080705
+ use NCURSES_MOUSE_MASK() in definition of BUTTON_RELEASE(), etc., to
  make those work properly with the "--enable-ext-mouse" configuration
  (cf: 20050205).
+ improve documentation of build-cc options in INSTALL.
+ work-around a bug in gcc 4.2.4 on AIX, which does not pass the
  -static/-dynamic flags properly to linker, causing test/bs to
  not link.

20080628
+ correct some ifdef's needed for the broken-linker configuration.
+ make debugging library's $BAUDRATE feature work for termcap
  interface.
+ make $NCURSES_NO_PADDING feature work for termcap interface (prompted
  by comment on FreeBSD mailing list).
+ add screen.mlterm terminfo entry -TD
+ improve mlterm and mlterm+pcfkeys terminfo entries -TD

20080621
+ regenerated html documentation.
+ expand manpage description of parameters for form_driver() and
  menu_driver() (prompted by discussion with Adam Spragg).
+ add null-pointer checks for cur_term in baudrate() and
  def_shell_mode(), def_prog_mode()
+ fix some memory leaks in delscreen() and wide acs.

20080614
+ modify test/ditto.c to illustrate multi-threaded use_screen().
+ change CC_SHARED_OPTS from -KPIC to -xcode=pic32 for Solaris.
+ add "-shared" option to MK_SHARED_LIB for gcc on Solaris (report
  by Poor Yorick).

20080607
+ finish changes to wgetch(), making it switch as needed to the
  window's actual screen when calling wrefresh() and wgetnstr().  That
  allows wgetch() to get used concurrently in different threads with
  some minor restrictions, e.g., the application should not delete a
  window which is being used in a wgetch().
+ simplify mutex's, combining the window- and screen-mutex's.

20080531
+ modify wgetch() to use the screen which corresponds to its window
  parameter rather than relying on SP; some dependent functions still
  use SP internally.
+ factor out most use of SP in lib_mouse.c, using parameter.
+ add internal _nc_keyname(), replacing keyname() to associate with a
  particular SCREEN rather than the global SP.
+ add internal _nc_unctrl(), replacing unctrl() to associate with a
  particular SCREEN rather than the global SP.
+ add internal _nc_tracemouse(), replacing _tracemouse() to eliminate
  its associated global buffer _nc_globals.tracemse_buf now in SCREEN.
+ add internal _nc_tracechar(), replacing _tracechar() to use SCREEN in
  preference to the global _nc_globals.tracechr_buf buffer.

20080524
+ modify _nc_keypad() to make it switch temporarily as needed to the
  screen which must be updated.
+ wrap cur_term variable to help make _nc_keymap() thread-safe, and
  always set the screen's copy of this variable in set_curterm().
+ restore curs_set() state after endwin()/refresh() (report/patch
  Miroslav Lichvar)

20080517
+ modify configure script to note that --enable-ext-colors and
  --enable-ext-mouse are not experimental, but extensions from
  the ncurses ABI 5.
+ corrected manpage description of setcchar() (discussion with
  Emanuele Giaquinta).
+ fix for adding a non-spacing character at the beginning of a line
  (report/patch by Miroslav Lichvar).

20080503
+ modify screen.* terminfo entries using new screen+fkeys to fix
  overridden keys in screen.rxvt (Debian #478094) -TD
+ modify internal interfaces to reduce wgetch()'s dependency on the
  global SP.
+ simplify some loops with macros each_screen(), each_window() and
  each_ripoff().

20080426
+ continue modifying test/ditto.c toward making it demonstrate
  multithreaded use_screen(), using fifos to pass data between screens.
+ fix typo in form.3x (report by Mike Gran).

20080419
+ add screen.rxvt terminfo entry -TD
+ modify tic -f option to format spaces as \s to prevent them from
  being lost when that is read back in unformatted strings.
+ improve test/ditto.c, using a "talk"-style layout.

20080412
+ change test/ditto.c to use openpty() and xterm.
+ add locks for copywin(), dupwin(), overlap(), overlay() on their
  window parameters.
+ add locks for initscr() and newterm() on updates to the SCREEN
  pointer.
+ finish table in curs_thread.3x manpage.

20080405
+ begin table in curs_thread.3x manpage describing the scope of data
  used by each function (or symbol) for threading analysis.
+ add null-pointer checks to setsyx() and getsyx() (prompted by
  discussion by Martin v. Lowis and Jeroen Ruigrok van der Werven on
  python-dev mailing list).

20080329
+ add null-pointer checks in set_term() and delscreen().
+ move _nc_windows into _nc_globals, since windows can be pads, which
  are not associated with a particular screen.
+ change use_screen() to pass the SCREEN* parameter rather than
  stdscr to the callback function.
+ force libtool to use tag for 'CC' in case it does not detect this,
  e.g., on aix when using CC=powerpc-ibm-aix5.3.0.0-gcc
  (report/patch by Michael Haubenwallner).
+ override OBJEXT to "lo" when building with libtool, to work on
  platforms such as AIX where libtool may use a different suffix for
  the object files than ".o" (report/patch by Michael Haubenwallner).
+ add configure --with-pthread option, for building with the POSIX
  thread library.

20080322
+ fill in extended-color pair two more places in wbkgrndset() and
  waddch_nosync() (prompted by Sedeno's patch).
+ fill in extended-color pair in _nc_build_wch() to make colors work
  for wide-characters using extended-colors (patch by Alejandro R
  Sedeno).
+ add x/X toggles to ncurses.c C color test to test/demo
  wide-characters with extended-colors.
+ add a/A toggles to ncurses.c c/C color tests.
+ modify test/ditto.c to use use_screen().
+ finish modifying test/rain.c to demonstrate threads.

20080308
+ start modifying test/rain.c for threading demo.
+ modify test/ncurses.c to make 'f' test accept the f/F/b/F/</> toggles
  that the 'F' accepts.
+ modify test/worm.c to show trail in reverse-video when other threads
  are working concurrently.
+ fix a deadlock from improper nesting of mutexes for windowlist and
  window.

20080301
+ fixes from 20080223 resolved issue with mutexes; change to use
  recursive mutexes to fix memory leak in delwin() as called from
  _nc_free_and_exit().

20080223
+ fix a size-difference in _nc_globals which caused hanging of mutex
  lock/unlock when termlib was built separately.

20080216
+ avoid using nanosleep() in threaded configuration since that often
  is implemented to suspend the entire process.

20080209
+ update test programs to build/work with various UNIX curses for
  comparisons.  This was to reinvestigate statement in X/Open curses
  that insnstr and winsnstr perform wrapping.  None of the Unix-branded
  implementations do this, as noted in manpage (cf: 20040228).

20080203
+ modify _nc_setupscreen() to set the legacy-coding value the same
  for both narrow/wide models.  It had been set only for wide model,
  but is needed to make unctrl() work with locale in the narrow model.
+ improve waddch() and winsch() handling of EILSEQ from mbrtowc() by
  using unctrl() to display illegal bytes rather than trying to append
  further bytes to make up a valid sequence (reported by Andrey A
  Chernov).
+ modify unctrl() to check codes in 128-255 range versus isprint().
  If they are not printable, and locale was set, use a "M-" or "~"
  sequence.

20080126
+ improve threading in test/worm.c (wrap refresh calls, and KEY_RESIZE
  handling).  Now it hangs in napms(), no matter whether nanosleep()
  or poll() or select() are used on Linux.

20080119
+ fixes to build with --disable-ext-funcs
+ add manpage for use_window and use_screen.
+ add set_tabsize() and set_escdelay() functions.

20080112
+ remove recursive-mutex definitions, finish threading demo for worm.c
+ remove a redundant adjustment of lines in resizeterm.c's
  adjust_window() which caused occasional misadjustment of stdscr when
  softkeys were used.

20080105
+ several improvements to terminfo entries based on xterm #230 -TD
+ modify MKlib_gen.sh to handle keyname/key_name prototypes, so the
  "link_test" builds properly.
+ fix for toe command-line options -u/-U to ensure filename is given.
+ fix allocation-size for command-line parsing in infocmp from 20070728
  (report by Miroslav Lichvar)
+ improve resizeterm() by moving ripped-off lines, and repainting the
  soft-keys (report by Katarina Machalkova)
+ add clarification in wclear's manpage noting that the screen will be
  cleared even if a subwindow is cleared (prompted by Christer Enfors
  question).
+ change test/ncurses.c soft-key tests to work with KEY_RESIZE.

20071222
+ continue implementing support for threading demo by adding mutex
  for delwin().

20071215
+ add several functions to C++ binding which wrap C functions that
  pass a WINDOW* parameter (request by Chris Lee).

20071201
+ add note about configure options needed for Berkeley database to the
  INSTALL file.
+ improve checks for version of Berkeley database libraries.
+ amend fix for rpath to not modify LDFLAGS if the platform has no
  applicable transformation (report by Christian Ebert, cf: 20071124).

20071124
+ modify configure option --with-hashed-db to accept a parameter which
  is the install-prefix of a given Berkeley Database (prompted by
  pierre4d2 comments).
+ rewrite wrapper for wcrtomb(), making it work on Solaris.  This is
  used in the form library to determine the length of the buffer needed
  by field_buffer (report by Alfred Fung).
+ remove unneeded window-parameter from C++ binding for wresize (report
  by Chris Lee).

20071117
+ modify the support for filesystems which do not support mixed-case to
  generate 2-character (hexadecimal) codes for the lower-level of the
  filesystem terminfo database (request by Michail Vidiassov).
+ add configure option --enable-mixed-case, to allow overriding the
  configure script's check if the filesystem supports mixed-case
  filenames.
+ add wresize() to C++ binding (request by Chris Lee).
+ define NCURSES_EXT_FUNCS and NCURSES_EXT_COLORS in curses.h to make
  it simpler to tell if the extended functions and/or colors are
  declared.

20071103
+ update memory-leak checks for changes to names.c and codes.c
+ correct acsc strings in h19, z100 (patch by Benjamin C W Sittler).

20071020
+ continue implementing support for threading demo by adding mutex
  for use_window().
+ add mrxvt terminfo entry, add/fix xterm building blocks for modified
  cursor keys -TD
+ compile with FreeBSD "contemporary" TTY interface (patch by
  Rong-En Fan).

20071013
+ modify makefile rules to allow clear, tput and tset to be built
  without libtic.  The other programs (infocmp, tic and toe) rely on
  that library.
+ add/modify null-pointer checks in several functions for SP and/or
  the WINDOW* parameter (report by Thorben Krueger).
+ fixes for field_buffer() in formw library (see Redhat Bugzilla
  #310071, patches by Miroslav Lichvar).
+ improve performance of NCURSES_CHAR_EQ code (patch by Miroslav
  Lichvar).
+ update/improve mlterm and rxvt terminfo entries, e.g., for
  the modified cursor- and keypad-keys -TD

20071006
+ add code to curses.priv.h ifdef'd with NCURSES_CHAR_EQ, which
  changes the CharEq() macro to an inline function to allow comparing
  cchar_t struct's without comparing gaps in a possibly unpacked
  memory layout (report by Miroslav Lichvar).

20070929
+ add new functions to lib_trace.c to setup mutex's for the _tracef()
  calls within the ncurses library.
+ for the reentrant model, move _nc_tputs_trace and _nc_outchars into
  the SCREEN.
+ start modifying test/worm.c to provide threading demo (incomplete).
+ separated ifdef's for some BSD-related symbols in tset.c, to make
  it compile on LynxOS (report by Greg Gemmer).

20070915
+ modify Ada95/gen/Makefile to use shlib script, to simplify building
  shared-library configuration on platforms lacking rpath support.
+ build-fix for Ada95/src/Makefile to reflect changed dependency for
  the terminal-interface-curses-aux.adb file which is now generated.
+ restructuring test/worm.c, for use_window() example.

20070908
+ add use_window() and use_screen() functions, to develop into support
  for threaded library (incomplete).
+ fix typos in man/curs_opaque.3x which kept the install script from
  creating symbolic links to two aliases created in 20070818 (report by
  Rong-En Fan).

20070901
+ remove a spurious newline from output of html.m4, which caused links
  for Ada95 html to be incorrect for the files generated using m4.
+ start investigating mutex's for SCREEN manipulation (incomplete).
+ minor cleanup of codes.c/names.c for --enable-const
+ expand/revise "Routine and Argument Names" section of ncurses manpage
  to address report by David Givens in newsgroup discussion.
+ fix interaction between --without-progs/--with-termcap configure
  options (report by Michail Vidiassov).
+ fix typo in "--disable-relink" option (report by Michail Vidiassov).

20070825
+ fix a sign-extension bug in infocmp's repair_acsc() function
  (cf: 971004).
+ fix old configure script bug which prevented "--disable-warnings"
  option from working (patch by Mike Frysinger).

20070818
+ add 9term terminal description (request by Juhapekka Tolvanen) -TD
+ modify comp_hash.c's string output to avoid misinterpreting a null
  "\0" followed by a digit.
+ modify MKnames.awk and MKcodes.awk to support big-strings.
  This only applies to the cases (broken linker, reentrant) where
  the corresponding arrays are accessed via wrapper functions.
+ split MKnames.awk into two scripts, eliminating the shell redirection
  which complicated the make process and also the bogus timestamp file
  which was introduced to fix "make -j".
+ add test/test_opaque.c, test/test_arrays.c
+ add wgetscrreg() and wgetparent() for applications that may need it
  when NCURSES_OPAQUE is defined (prompted by Bryan Christ).

20070812
+ amend treatment of infocmp "-r" option to retain the 1023-byte limit
  unless "-T" is given (cf: 981017).
+ modify comp_captab.c generation to use big-strings.
+ make _nc_capalias_table and _nc_infoalias_table private accessed via
  _nc_get_alias_table() since the tables are used only within the tic
  library.
+ modify configure script to skip Intel compiler in CF_C_INLINE.
+ make _nc_info_hash_table and _nc_cap_hash_table private accessed via
  _nc_get_hash_table() since the tables are used only within the tic
  library.

20070728
+ make _nc_capalias_table and _nc_infoalias_table private, accessed via
  _nc_get_alias_table() since they are used only by parse_entry.c
+ make _nc_key_names private since it is used only by lib_keyname.c
+ add --disable-big-strings configure option to control whether
  unctrl.c is generated using the big-string optimization - which may
  use strings longer than supported by a given compiler.
+ reduce relocation tables for tic, infocmp by changing type of
  internal hash tables to short, and make those private symbols.
+ eliminate large fixed arrays from progs/infocmp.c

20070721
+ change winnstr() to stop at the end of the line (cf: 970315).
+ add test/test_get_wstr.c
+ add test/test_getstr.c
+ add test/test_inwstr.c
+ add test/test_instr.c

20070716
+ restore a call to obtain screen-size in _nc_setupterm(), which
  is used in tput and other non-screen applications via setupterm()
  (Debian #433357, reported by Florent Bayle, Christian Ohm,
  cf: 20070310).

20070714
+ add test/savescreen.c test-program
+ add check to trace-file open, if the given name is a directory, add
  ".log" to the name and try again.
+ add konsole-256color entry -TD
+ add extra gcc warning options from xterm.
+ minor fixes for ncurses/hashmap test-program.
+ modify configure script to quiet c++ build with libtool when the
  --disable-echo option is used.
+ modify configure script to disable ada95 if libtool is selected,
  writing a warning message (addresses FreeBSD ports/114493).
+ update config.guess, config.sub

20070707
+ add continuous-move "M" to demo_panels to help test refresh changes.
+ improve fix for refresh of window on top of multi-column characters,
  taking into account some split characters on left/right window
  boundaries.

20070630
+ add "widec" row to _tracedump() output to help diagnose remaining
  problems with multi-column characters.
+ partial fix for refresh of window on top of multi-column characters
  which are partly overwritten (report by Sadrul H Chowdhury).
+ ignore A_CHARTEXT bits in vidattr() and vid_attr(), in case
  multi-column extension bits are passed there.
+ add setlocale() call to demo_panels.c, needed for wide-characters.
+ add some output flags to _nc_trace_ttymode to help diagnose a bug
  report by Larry Virden, i.e., ONLCR, OCRNL, ONOCR and ONLRET.

20070623
+ add test/demo_panels.c
+ implement opaque version of setsyx() and getsyx().

20070612
+ corrected xterm+pcf2 terminfo modifiers for F1-F4, to match xterm
  #226 -TD
+ split-out key_name() from MKkeyname.awk since it now depends upon
  wunctrl() which is not in libtinfo (report by Rong-En Fan).

20070609
+ add test/key_name.c
+ add stdscr cases to test/inchs.c and test/inch_wide.c
+ update test/configure
+ correct formatting of DEL (0x7f) in _nc_vischar().
+ null-terminate result of wunctrl().
+ add null-pointer check in key_name() (report by Andreas Krennmair,
  cf: 20020901).

20070602
+ adapt mouse-handling code from menu library in form-library
  (discussion with Clive Nicolson).
+ add a modification of test/dots.c, i.e., test/dots_mvcur.c to
  illustrate how to use mvcur().
+ modify wide-character flavor of SetAttr() to preserve the
  WidecExt() value stored in the .attr field, e.g., in case it
  is overwritten by chgat (report by Aleksi Torhamo).
+ correct buffer-size for _nc_viswbuf2n() (report by Aleksi Torhamo).
+ build-fixes for Solaris 2.6 and 2.7 (patch by Peter O'Gorman).

20070526
+ modify keyname() to use "^X" form only if meta() has been called, or
  if keyname() is called without initializing curses, e.g., via
  initscr() or newterm() (prompted by LinuxBase #1604).
+ document some portability issues in man/curs_util.3x
+ add a shadow copy of TTY buffer to _nc_prescreen to fix applications
  broken by moving that data into SCREEN (cf: 20061230).

20070512
+ add 'O' (wide-character panel test) in ncurses.c to demonstrate a
  problem reported by Sadrul H Chowdhury with repainting parts of
  a fullwidth cell.
+ modify slk_init() so that if there are preceding calls to
  ripoffline(), those affect the available lines for soft-keys (adapted
  from patch by Clive Nicolson).
+ document some portability issues in man/curs_getyx.3x

20070505
+ fix a bug in Ada95/samples/ncurses which caused a variable to
  become uninitialized in the "b" test.
+ fix Ada95/gen/Makefile.in adahtml rule to account for recent
  movement of files, fix a few incorrect manpage references in the
  generated html.
+ add Ada95 binding to _nc_freeall() as Curses_Free_All to help with
  memory-checking.
+ correct some functions in Ada95 binding which were using return value
  from C where none was returned: idcok(), immedok() and wtimeout().
1637 + amend recent changes for Ada95 binding to make it build with 1638 Cygwin's linker, e.g., with configure options 1639 --enable-broken-linker --with-ticlib 1640 1641 20070428 1642 + add a configure check for gcc's options for inlining, use that to 1643 quiet a warning message where gcc's default behavior changed from 1644 3.x to 4.x. 1645 + improve warning message when checking if GPM is linked to curses 1646 library by not warning if its use of "wgetch" is via a weak symbol. 1647 + add loader options when building with static libraries to ensure that 1648 an installed shared library for ncurses does not conflict. This is 1649 reported as problem with Tru64, but could affect other platforms 1650 (report Martin Mokrejs, analysis by Tim Mooney). 1651 + fix build on cygwin after recent ticlib/termlib changes, i.e., 1652 + adjust TINFO_SUFFIX value to work with cygwin's dll naming 1653 + revert a change from 20070303 which commented out dependency of 1654 SHLIB_LIST in form/menu/panel/c++ libraries. 1655 + fix initialization of ripoff stack pointer (cf: 20070421). 1656 1657 20070421 1658 + move most static variables into structures _nc_globals and 1659 _nc_prescreen, to simplify storage. 1660 + add/use configure script macro CF_SIG_ATOMIC_T, use the corresponding 1661 type for data manipulated by signal handlers (prompted by comments 1662 in mailing.openbsd.bugs newsgroup). 1663 + modify CF_WITH_LIBTOOL to allow one to pass options such as -static 1664 to the libtool create- and link-operations. 1665 1666 20070414 1667 + fix whitespace in curs_opaque.3x which caused a spurious ';' in 1668 the installed aliases (report by Peter Santoro). 1669 + fix configure script to not try to generate adacurses-config when 1670 Ada95 tree is not built. 1671 1672 20070407 1673 + add man/curs_legacy.3x, man/curs_opaque.3x 1674 + fix acs_map binding for Ada95 when --enable-reentrant is used. 
1675 + add adacurses-config to the Ada95 install, based on version from 1676 FreeBSD port, in turn by Juergen Pfeifer in 2000 (prompted by 1677 comment on comp.lang.ada newsgroup). 1678 + fix includes in c++ binding to build with Intel compiler 1679 (cf: 20061209). 1680 + update install rule in Ada95 to use mkdirs.sh 1681 > other fixes prompted by inspection for Coverity report: 1682 + modify ifdef's for c++ binding to use try/catch/throw statements 1683 + add a null-pointer check in tack/ansi.c request_cfss() 1684 + fix a memory leak in ncurses/base/wresize.c 1685 + corrected check for valid memu/meml capabilities in 1686 progs/dump_entry.c when handling V_HPUX case. 1687 > fixes based on Coverity report: 1688 + remove dead code in test/bs.c 1689 + remove dead code in test/demo_defkey.c 1690 + remove an unused assignment in progs/infocmp.c 1691 + fix a limit check in tack/ansi.c tools_charset() 1692 + fix tack/ansi.c tools_status() to perform the VT320/VT420 1693 tests in request_cfss(). The function had exited too soon. 1694 + fix a memory leak in tic.c's make_namelist() 1695 + fix a couple of places in tack/output.c which did not check for EOF. 1696 + fix a loop-condition in test/bs.c 1697 + add index checks in lib_color.c for color palettes 1698 + add index checks in progs/dump_entry.c for version_filter() handling 1699 of V_BSD case. 1700 + fix a possible null-pointer dereference in copywin() 1701 + fix a possible null-pointer dereference in waddchnstr() 1702 + add a null-pointer check in _nc_expand_try() 1703 + add a null-pointer check in tic.c's make_namelist() 1704 + add a null-pointer check in _nc_expand_try() 1705 + add null-pointer checks in test/cardfile.c 1706 + fix a double-free in ncurses/tinfo/trim_sgr0.c 1707 + fix a double-free in ncurses/base/wresize.c 1708 + add try/catch block to c++/cursesmain.cc 1709 1710 20070331 1711 + modify Ada95 binding to build with --enable-reentrant by wrapping 1712 global variables (bug: acs_map does not yet work). 
1713 + modify Ada95 binding to use the new access-functions, allowing it 1714 to build/run when NCURSES_OPAQUE is set. 1715 + add access-functions and macros to return properties of the WINDOW 1716 structure, e.g., when NCURSES_OPAQUE is set. 1717 + improved install-sh's quoting. 1718 + use mkdirs.sh rather than mkinstalldirs, e.g., to use fixes from 1719 other programs. 1720 1721 20070324 1722 + eliminate part of the direct use of WINDOW data from Ada95 interface. 1723 + fix substitutions for termlib filename to make configure option 1724 --enable-reentrant work with --with-termlib. 1725 + change a constructor for NCursesWindow to allow compiling with 1726 NCURSES_OPAQUE set, since we cannot pass a reference to 1727 an opaque pointer. 1728 1729 20070317 1730 + ignore --with-chtype=unsigned since unsigned is always added to 1731 the type in curses.h; do the same for --with-mmask-t. 1732 + change warning regarding --enable-ext-colors and wide-character 1733 in the configure script to an error. 1734 + tweak error message in CF_WITH_LIBTOOL to distinguish other programs 1735 such as Darwin's libtool program (report by Michail Vidiassov) 1736 + modify edit_man.sh to allow for multiple substitutions per line. 1737 + set locale in misc/ncurses-config.in since it uses a range 1738 + change permissions libncurses++.a install (report by Michail 1739 Vidiassov). 1740 + corrected length of temporary buffer in wide-character version 1741 of set_field_buffer() (related to report by Bryan Christ). 1742 1743 20070311 1744 + fix mk-1st.awk script install_shlib() function, broken in 20070224 1745 changes for cygwin (report by Michail Vidiassov). 1746 1747 20070310 1748 + increase size of array in _nc_visbuf2n() to make "tic -v" work 1749 properly in its similar_sgr() function (report/analysis by Peter 1750 Santoro). 
1751 + add --enable-reentrant configure option for ongoing changes to 1752 implement a reentrant version of ncurses: 1753 + libraries are suffixed with "t" 1754 + wrap several global variables (curscr, newscr, stdscr, ttytype, 1755 COLORS, COLOR_PAIRS, COLS, ESCDELAY, LINES and TABSIZE) as 1756 functions returning values stored in SCREEN or cur_term. 1757 + move some initialization (LINES, COLS) from lib_setup.c, 1758 i.e., setupterm() to _nc_setupscreen(), i.e., newterm(). 1759 1760 20070303 1761 + regenerated html documentation. 1762 + add NCURSES_OPAQUE symbol to curses.h, will use to make structs 1763 opaque in selected configurations. 1764 + move the chunk in lib_acs.c which resets acs capabilities when 1765 running on a terminal whose locale interferes with those into 1766 _nc_setupscreen(), so the libtinfo/libtinfow files can be made 1767 identical (requested by Miroslav Lichvar). 1768 + do not use configure variable SHLIB_LIBS for building libraries 1769 outside the ncurses directory, since that symbol is customized 1770 only for that directory, and using it introduces an unneeded 1771 dependency on libdl (requested by Miroslav Lichvar). 1772 + modify mk-1st.awk so the generated makefile rules for linking or 1773 installing shared libraries do not first remove the library, in 1774 case it is in use, e.g., libncurses.so by /bin/sh (report by Jeff 1775 Chua). 1776 + revised section "Using NCURSES under XTERM" in ncurses-intro.html 1777 (prompted by newsgroup comment by Nick Guenther). 1778 1779 20070224 1780 + change internal return codes of _nc_wgetch() to check for cases 1781 where KEY_CODE_YES should be returned, e.g., if a KEY_RESIZE was 1782 ungetch'd, and read by wget_wch(). 1783 + fix static-library build broken in 20070217 changes to remove "-ldl" 1784 (report by Miroslav Lichvar). 1785 + change makefile/scripts for cygwin to allow building termlib. 
1786 + use Form_Hook in manpages to match form.h 1787 + use Menu_Hook in manpages, as well as a few places in menu.h 1788 + correct form- and menu-manpages to use specific Field_Options, 1789 Menu_Options and Item_Options types. 1790 + correct prototype for _tracechar() in manpage (cf: 20011229). 1791 + correct prototype for wunctrl() in manpage. 1792 1793 20070217 1794 + fixes for $(TICS_LIST) in ncurses/Makefile (report by Miroslav 1795 Lichvar). 1796 + modify relinking of shared libraries to apply only when rpath is 1797 enabled, and add --disable-relink option which can be used to 1798 disable the feature altogether (reports by Michail Vidiassov, 1799 Adam J Richter). 1800 + fix --with-termlib option for wide-character configuration, stripping 1801 the "w" suffix in one place (report by Miroslav Lichvar). 1802 + remove "-ldl" from some library lists to reduce dependencies in 1803 programs (report by Miroslav Lichvar). 1804 + correct description of --enable-signed-char in configure --help 1805 (report by Michail Vidiassov). 1806 + add pattern for GNU/kFreeBSD configuration to CF_XOPEN_SOURCE, 1807 which matches an earlier change to CF_SHARED_OPTS, from xterm #224 1808 fixes. 1809 + remove "${DESTDIR}" from -install_name option used for linking 1810 shared libraries on Darwin (report by Michail Vidiassov). 1811 1812 20070210 1813 + add test/inchs.c, test/inch_wide.c, to test win_wchnstr(). 1814 + remove libdl from library list for termlib (report by Miroslav 1815 Lichvar). 1816 + fix configure.in to allow --without-progs --with-termlib (patch by 1817 Miroslav Lichvar). 1818 + modify win_wchnstr() to ensure that only a base cell is returned 1819 for each multi-column character (prompted by report by Wei Kong 1820 regarding change in mvwin_wch() cf: 20041023). 1821 1822 20070203 1823 + modify fix_wchnstr() in form library to strip attributes (and color) 1824 from the cchar_t array (field cells) read from a field's window. 
1825 Otherwise, when copying the field cells back to the window, the 1826 associated color overrides the field's background color (report by 1827 Ricardo Cantu). 1828 + improve tracing for form library, showing created forms, fields, etc. 1829 + ignore --enable-rpath configure option if --with-shared was omitted. 1830 + add _nc_leaks_tinfo(), _nc_free_tic(), _nc_free_tinfo() entrypoints 1831 to allow leak-checking when both tic- and tinfo-libraries are built. 1832 + drop CF_CPP_VSCAN_FUNC macro from configure script, since C++ binding 1833 no longer relies on it. 1834 + disallow combining configure script options --with-ticlib and 1835 --enable-termcap (report by Rong-En Fan). 1836 + remove tack from ncurses tree. 1837 1838 20070128 1839 + fix typo in configure script that broke --with-termlib option 1840 (report by Rong-En Fan). 1841 1842 20070127 1843 + improve fix for FreeBSD gnu/98975, to allow for null pointer passed 1844 to tgetent() (report by Rong-en Fan). 1845 + update tack/HISTORY and tack/README to tell how to build it after 1846 it is removed from the ncurses tree. 1847 + fix configure check for libtool's version to trim blank lines 1848 (report by sci-fi@hush.ai). 1849 + review/eliminate other original-file artifacts in cursesw.cc, making 1850 its license consistent with ncurses. 1851 + use ncurses vw_scanw() rather than reading into a fixed buffer in 1852 the c++ binding for scanw() methods (prompted by report by Nuno Dias). 1853 + eliminate fixed-buffer vsprintf() calls in c++ binding. 1854 1855 20070120 1856 + add _nc_leaks_tic() to separate leak-checking of tic library from 1857 term/ncurses libraries, and thereby eliminate a library dependency. 1858 + fix test/mk-test.awk to ignore blank lines. 1859 + correct paths in include/headers, for --srcdir (patch by Miroslav 1860 Lichvar). 1861 1862 20070113 1863 + add a break-statement in misc/shlib to ensure that it exits on the 1864 _first_ matched directory (report by Paul Novak). 
1865 + add tack/configure, which can be used to build tack outside the 1866 ncurses build-tree. 1867 + add --with-ticlib option, to build/install the tic-support functions 1868 in a separate library (suggested by Miroslav Lichvar). 1869 1870 20070106 1871 + change MKunctrl.awk to reduce relocation table for unctrl.o 1872 + change MKkeyname.awk to reduce relocation table for keyname.o 1873 (patch by Miroslav Lichvar). 1874 1875 20061230 1876 + modify configure check for libtool's version to trim blank lines 1877 (report by sci-fi@hush.ai). 1878 + modify some modules to allow them to be reentrant if _REENTRANT is 1879 defined: lib_baudrate.c, resizeterm.c (local data only) 1880 + eliminate static data from some modules: add_tries.c, hardscroll.c, 1881 lib_ttyflags.c, lib_twait.c 1882 + improve manpage install to add aliases for the transformed program 1883 names, e.g., from --program-prefix. 1884 + used linklint to verify links in the HTML documentation, made fixes 1885 to manpages as needed. 1886 + fix a typo in curs_mouse.3x (report by William McBrine). 1887 + fix install-rule for ncurses5-config to make the bin-directory. 1888 1889 20061223 1890 + modify configure script to omit the tic (terminfo compiler) support 1891 from ncurses library if --without-progs option is given. 1892 + modify install rule for ncurses5-config to do this via "install.libs" 1893 + modify shared-library rules to allow FreeBSD 3.x to use rpath. 1894 + update config.guess, config.sub 1895 1896 20061217 5.6 release for upload to 1897 1898 20061217 1899 + add ifdef's for <wctype.h> for HPUX, which has the corresponding 1900 definitions in <wchar.h>. 1901 + revert the va_copy() change from 20061202, since it was neither 1902 correct nor portable. 1903 + add $(LOCAL_LIBS) definition to progs/Makefile.in, needed for 1904 rpath on Solaris. 1905 + ignore wide-acs line-drawing characters that wcwidth() claims are 1906 not one-column. This is a workaround for Solaris' broken locale 1907 support. 
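The 20070106 items above about MKunctrl.awk and MKkeyname.awk "reducing the relocation table" refer to a general shared-library technique: an array of string pointers costs one load-time relocation per entry, while a single string blob indexed by integer offsets needs none. A minimal sketch of the idea, using hypothetical names rather than the code those scripts actually generate:

```c
#include <assert.h>
#include <string.h>

/* Pointer-table form: each element is a pointer the dynamic linker
 * must relocate when the shared library is loaded. */
const char *const names_ptr[] = { "zero", "one", "two" };

/* Offset-table form: one relocation-free string blob plus integers. */
const char names_blob[] = "zero\0one\0two";
const short names_ofs[] = { 0, 5, 9 };

/* Return the n'th name from the blob, or NULL if out of range. */
const char *
lookup_name(int n)
{
    return (n >= 0 && n < 3) ? names_blob + names_ofs[n] : 0;
}
```

The lookup result is identical to indexing the pointer table; only the relocation count (and hence the dirty-page cost at load time) differs.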
1908 1909 20061216 1910 + modify configure --with-gpm option to allow it to accept a parameter, 1911 i.e., the name of the dynamic GPM library to load via dlopen() 1912 (requested by Bryan Henderson). 1913 + add configure option --with-valgrind, changes from vile. 1914 + modify configure script AC_TRY_RUN and AC_TRY_LINK checks to use 1915 'return' in preference to 'exit()'. 1916 1917 20061209 1918 + change default for --with-develop back to "no". 1919 + add XTABS to tracing of TTY bits. 1920 + updated autoconf patch to ifdef-out the misfeature which declares 1921 exit() for configure tests. This fixes a redefinition warning on 1922 Solaris. 1923 + use ${CC} rather than ${LD} in shared library rules for IRIX64, 1924 Solaris to help ensure that initialization sections are provided for 1925 extra linkage requirements, e.g., of C++ applications (prompted by 1926 comment by Casper Dik in newsgroup). 1927 + rename "$target" in CF_MAN_PAGES to make it easier to distinguish 1928 from the autoconf predefined symbol. There was no conflict, 1929 since "$target" was used only in the generated edit_man.sh file, 1930 but SuSE's rpm package contains a patch. 1931 1932 20061202 1933 + update man/term.5 to reflect extended terminfo support and hashed 1934 database configuration. 1935 + updates for test/configure script. 1936 + adapted from SuSE rpm package: 1937 + remove long-obsolete workaround for broken-linker which declared 1938 cur_term in tic.c 1939 + improve error recovery in PUTC() macro when wcrtomb() does not 1940 return usable results for an 8-bit character. 1941 + patches from rpm package (SuSE): 1942 + use va_copy() in extra varargs manipulation for tracing version 1943 of printw, etc. 1944 + use a va_list rather than a null in _nc_freeall()'s call to 1945 _nc_printf_string(). 1946 + add some see-also references in manpages to show related 1947 wide-character functions (suggested by Claus Fischer). 
1948 1949 20061125 1950 + add a check in lib_color.c to ensure caller does not increase COLORS 1951 above max_colors, which is used as an array index (discussion with 1952 Simon Sasburg). 1953 + add ifdef's allowing ncurses to be built with tparm() using either 1954 varargs (the existing status), or using a fixed-parameter list (to 1955 match X/Open). 1956 1957 20061104 1958 + fix redrawing of windows other than stdscr using wredrawln() by 1959 touching the corresponding rows in curscr (discussion with Dan 1960 Gookin). 1961 + add test/redraw.c 1962 + add test/echochar.c 1963 + review/cleanup manpage descriptions of error-returns for form- and 1964 menu-libraries (prompted by FreeBSD docs/46196). 1965 1966 20061028 1967 + add AUTHORS file -TD 1968 + omit the -D options from output of the new config script --cflags 1969 option (suggested by Ralf S Engelschall). 1970 + make NCURSES_INLINE unconditionally defined in curses.h 1971 1972 20061021 1973 + revert change to accommodate bash 3.2, since that breaks other 1974 platforms, e.g., Solaris. 1975 + minor fixes to NEWS file to simplify scripting to obtain list of 1976 contributors. 1977 + improve some shared-library configure scripting for Linux, FreeBSD 1978 and NetBSD to make "--with-shlib-version" work. 1979 + change configure-script rules for FreeBSD shared libraries to allow 1980 for rpath support in versions past 3. 1981 + use $(DESTDIR) in makefile rules for installing/uninstalling the 1982 package config script (reports/patches by Christian Wiese, 1983 Ralf S Engelschall). 1984 + fix a warning in the configure script for NetBSD 2.0, working around 1985 spurious blanks embedded in its ${MAKEFLAGS} symbol. 1986 + change test/Makefile to simplify installing test programs in a 1987 different directory when --enable-rpath is used. 1988 1989 20061014 1990 + work around bug in bash 3.2 by adding extra quotes (Jim Gifford). 
1991 + add/install a package config script, e.g., "ncurses5-config" or 1992 "ncursesw5-config", according to configuration options. 1993 1994 20061007 1995 + add several GNU Screen terminfo variations with 16- and 256-colors, 1996 and status line (Alain Bench). 1997 + change the way shared libraries (other than libtool) are installed. 1998 Rather than copying the build-tree's libraries, link the shared 1999 objects into the install directory. This makes the --with-rpath 2000 option work except with $(DESTDIR) (cf: 20000930). 2001 2002 20060930 2003 + fix ifdef in c++/internal.h for QNX 6.1 2004 + test-compiled with (old) egcs-1.1.2, modified configure script to 2005 not unset the $CXX and related variables which would prevent this. 2006 + fix a few terminfo.src typos exposed by improvements to "-f" option. 2007 + improve infocmp/tic "-f" option formatting. 2008 2009 20060923 2010 + make --disable-largefile option work (report by Thomas M Ott). 2011 + updated html documentation. 2012 + add ka2, kb1, kb3, kc2 to vt220-keypad as an extension -TD 2013 + minor improvements to rxvt+pcfkeys -TD 2014 2015 20060916 2016 + move static data from lib_mouse.c into SCREEN struct. 2017 + improve ifdef's for _POSIX_VDISABLE in tset to work with Mac OS X 2018 (report by Michail Vidiassov). 2019 + modify CF_PATH_SYNTAX to ensure it uses the result from --prefix 2020 option (from lynx changes) -TD 2021 + adapt AC_PROG_EGREP check, noting that this is likely to be another 2022 place aggravated by POSIXLY_CORRECT. 2023 + modify configure check for awk to ensure that it is found (prompted 2024 by report by Christopher Parker).
2025 + update config.sub 2026 2027 20060909 2028 + add kon, kon2 and jfbterm terminfo entry (request by Till Maas) -TD 2029 + remove invis capability from klone+sgr, mainly used by linux entry, 2030 since it does not really do this -TD 2031 2032 20060903 2033 + correct logic in wadd_wch() and wecho_wch(), which did not guard 2034 against passing the multi-column attribute into a call on waddch(), 2035 e.g., using data returned by win_wch() (cf: 20041023) 2036 (report by Sadrul H Chowdhury). 2037 2038 20060902 2039 + fix kterm's acsc string -TD 2040 + fix for change to tic/infocmp in 20060819 to ensure no blank is 2041 embedded into a termcap description. 2042 + workaround for 20050806 ifdef's change to allow visbuf.c to compile 2043 when using --with-termlib --with-trace options. 2044 + improve tgetstr() by making the return value point into the user's 2045 buffer, if provided (patch by Miroslav Lichvar (see Redhat Bugzilla 2046 #202480)). 2047 + correct libraries needed for foldkeys (report by Stanislav Ievlev) 2048 2049 20060826 2050 + add terminfo entries for xfce terminal (xfce) and multi gnome 2051 terminal (mgt) -TD 2052 + add test/foldkeys.c 2053 2054 20060819 2055 + modify tic and infocmp to avoid writing trailing blanks on terminfo 2056 source output (Debian #378783). 2057 + modify configure script to ensure that if the C compiler is used 2058 rather than the loader in making shared libraries, the $(CFLAGS) 2059 variable is also used (Redhat Bugzilla #199369). 2060 + port hashed-db code to db2 and db3. 2061 + fix a bug in tgetent() from 20060625 and 20060715 changes 2062 (patch/analysis by Miroslav Lichvar (see Redhat Bugzilla #202480)). 2063 2064 20060805 2065 + updated xterm function-keys terminfo to match xterm #216 -TD 2066 + add configure --with-hashed-db option (tested only with FreeBSD 6.0, 2067 e.g., the db 1.8.5 interface). 
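The 20060819 item above about tic and infocmp no longer writing trailing blanks on terminfo source output comes down to right-trimming each generated line before it is emitted. A generic sketch of the technique (this is not the actual tic code):

```c
#include <assert.h>
#include <string.h>

/* Strip trailing blanks and tabs in place; return the new length. */
size_t
rtrim(char *line)
{
    size_t len = strlen(line);

    while (len > 0 && (line[len - 1] == ' ' || line[len - 1] == '\t'))
        line[--len] = '\0';
    return len;
}
```

Applied to every line of the formatted entry, this keeps diff- and checksum-based comparisons of generated terminfo sources stable.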
2068 2069 20060729 2070 + modify toe to access termcap data, e.g., via cgetent() functions, 2071 or as a text file if those are not available. 2072 + use _nc_basename() in tset to improve $SHELL check for csh/sh. 2073 + modify _nc_read_entry() and _nc_read_termcap_entry() so infocmp, 2074 can access termcap data when the terminfo database is disabled. 2075 2076 20060722 2077 + widen the test for xterm kmous a little to allow for other strings 2078 than \E[M, e.g., for xterm-sco functionality in xterm. 2079 + update xterm-related terminfo entries to match xterm patch #216 -TD 2080 + update config.guess, config.sub 2081 2082 20060715 2083 + fix for install-rule in Ada95 to add terminal_interface.ads 2084 and terminal_interface.ali (anonymous posting in comp.lang.ada). 2085 + correction to manpage for getcchar() (report by William McBrine). 2086 + add test/chgat.c 2087 + modify wchgat() to mark updated cells as changed so a refresh will 2088 repaint those cells (comments by Sadrul H Chowdhury and William 2089 McBrine). 2090 + split up dependency of names.c and codes.c in ncurses/Makefile to 2091 work with parallel make (report/analysis by Joseph S Myers). 2092 + suppress a warning message (which is ignored) for systems without 2093 an ldconfig program (patch by Justin Hibbits). 2094 + modify configure script --disable-symlinks option to allow one to 2095 disable symlink() in tic even when link() does not work (report by 2096 Nigel Horne). 2097 + modify MKfallback.sh to use tic -x when constructing fallback tables 2098 to allow extended capabilities to be retrieved from a fallback entry. 2099 + improve leak-checking logic in tgetent() from 20060625 to ensure that 2100 it does not free the current screen (report by Miroslav Lichvar). 2101 2102 20060708 2103 + add a check for _POSIX_VDISABLE in tset (NetBSD #33916). 2104 + correct _nc_free_entries() and related functions used for memory leak 2105 checking of tic. 
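The 20060722 kmous item above concerns xterm mouse reports. In the classic X10 mode, a report begins with "\E[M" and is followed by three data bytes, each transmitted as its value plus 32 (per xterm's ctlseqs documentation). A decoder sketch with a hypothetical function name:

```c
/* Decode the three data bytes that follow "\033[M" in X10 mouse mode:
 * a button code and 1-based column/row, each sent as value + 32. */
int
decode_x10_mouse(const unsigned char *p, int *button, int *col, int *row)
{
    if (p[0] < 32 || p[1] < 33 || p[2] < 33)
        return -1;              /* not a plausible report */
    *button = p[0] - 32;        /* 0 = button 1, 1 = button 2, ... */
    *col = p[1] - 32;           /* 1-based column */
    *row = p[2] - 32;           /* 1-based row */
    return 0;
}
```

Widening the kmous test means matching other prefixes (e.g., SCO-style sequences) before handing the payload to a decoder like this.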
2106 2107 20060701 2108 + revert a minor change for magic-cookie support from 20060513, which 2109 caused unexpected reset of attributes, e.g., when resizing test/view 2110 in color mode. 2111 + note in clear manpage that the program ignores command-line 2112 parameters (prompted by Debian #371855). 2113 + fixes to make lib_gen.c build properly with changes to the configure 2114 --disable-macros option and NCURSES_NOMACROS (cf: 20060527) 2115 + update/correct several terminfo entries -TD 2116 + add some notes regarding copyright to terminfo.src -TD 2117 2118 20060625 2119 + fixes to build Ada95 binding with gnat-4.1.0 2120 + modify read_termtype() so the term_names data is always allocated as 2121 part of the str_table, a better fix for a memory leak (cf: 20030809). 2122 + reduce memory leaks in repeated calls to tgetent() by remembering the 2123 last TERMINAL* value allocated to hold the corresponding data and 2124 freeing that if the tgetent() result buffer is the same as the 2125 previous call (report by "Matt" for FreeBSD gnu/98975). 2126 + modify tack to test extended capability function-key strings. 2127 + improved gnome terminfo entry (GenToo #122566). 2128 + improved xterm-256color terminfo entry (patch by Alain Bench). 2129 2130 20060617 2131 + fix two small memory leaks related to repeated tgetent() calls 2132 with TERM=screen (report by "Matt" for FreeBSD gnu/98975). 2133 + add --enable-signed-char to simplify Debian package. 2134 + reduce name-pollution in term.h by removing #define's for HAVE_xxx 2135 symbols. 2136 + correct typo in curs_terminfo.3x (Debian #369168). 2137 2138 20060603 2139 + enable the mouse in test/movewindow.c 2140 + improve a limit-check in frm_def.c (John Heasley). 2141 + minor copyright fixes. 2142 + change configure script to produce test/Makefile from data file. 
2143 2144 20060527 2145 + add a configure option --enable-wgetch-events to enable 2146 NCURSES_WGETCH_EVENTS, and correct the associated loop-logic in 2147 lib_twait.c (report by Bernd Jendrissek). 2148 + remove include/nomacros.h from build, since the ifdef for 2149 NCURSES_NOMACROS makes that obsolete. 2150 + add entrypoints for some functions which were only provided as macros 2151 to make NCURSES_NOMACROS ifdef work properly: getcurx(), getcury(), 2152 getbegx(), getbegy(), getmaxx(), getmaxy(), getparx() and getpary(), 2153 wgetbkgrnd(). 2154 + provide ifdef for NCURSES_NOMACROS which suppresses most macro 2155 definitions from curses.h, i.e., where a macro is defined to override 2156 a function to improve performance. Allowing a developer to suppress 2157 these definitions can simplify some application (discussion with 2158 Stanislav Ievlev). 2159 + improve description of memu/meml in terminfo manpage. 2160 2161 20060520 2162 + if msgr is false, reset video attributes when doing an automargin 2163 wrap to the next line. This makes the ncurses 'k' test work properly 2164 for hpterm. 2165 + correct caching of keyname(), which was using only half of its table. 2166 + minor fixes to memory-leak checking. 2167 + make SCREEN._acs_map and SCREEN._screen_acs_map pointers rather than 2168 arrays, making ACS_LEN less visible to applications (suggested by 2169 Stanislav Ievlev). 2170 + move chunk in SCREEN ifdef'd for USE_WIDEC_SUPPORT to the end, so 2171 _screen_acs_map will have the same offset in both ncurses/ncursesw, 2172 making the corresponding tinfo/tinfow libraries binary-compatible 2173 (cf: 20041016, report by Stanislav Ievlev). 2174 2175 20060513 2176 + improve debug-tracing for EmitRange(). 2177 + change default for --with-develop to "yes". Add NCURSES_NO_HARD_TABS 2178 and NCURSES_NO_MAGIC_COOKIE environment variables to allow runtime 2179 suppression of the related hard-tabs and xmc-glitch features. 
2180 + add ncurses version number to top-level manpages, e.g., ncurses, tic, 2181 infocmp, terminfo as well as form, menu, panel. 2182 + update config.guess, config.sub 2183 + modify ncurses.c to work around a bug in NetBSD 3.0 curses 2184 (field_buffer returning null for a valid field). The 'r' test 2185 appears to not work with that configuration since the new_fieldtype() 2186 function is broken in that implementation. 2187 2188 20060506 2189 + add hpterm-color terminfo entry -TD 2190 + fixes to compile test-programs with HPUX 11.23 2191 2192 20060422 2193 + add copyright notices to files other than those that are generated, 2194 data or adapted from pdcurses (reports by William McBrine, David 2195 Taylor). 2196 + improve rendering on hpterm by not resetting attributes at the end 2197 of doupdate() if the terminal has the magic-cookie feature (report 2198 by Bernd Rieke). 2199 + add 256color variants of terminfo entries for programs which are 2200 reported to implement this feature -TD 2201 2202 20060416 2203 + fix typo in change to NewChar() macro from 20060311 changes, which 2204 broke tab-expansion (report by Frederic L W Meunier). 2205 2206 20060415 2207 + document -U option of tic and infocmp. 2208 + modify tic/infocmp to suppress smacs/rmacs when acsc is suppressed 2209 due to size limit, e.g., converting to termcap format. Also 2210 suppress them if the output format does not contain acsc and it 2211 was not VT100-like, i.e., a one-one mapping (Novell #163715). 2212 + add configure check to ensure that SIGWINCH is defined on platforms 2213 such as OS X which exclude that when _XOPEN_SOURCE, etc., are 2214 defined (report by Nicholas Cole) 2215 2216 20060408 2217 + modify write_object() to not write coincidental extensions of an 2218 entry made due to it being referenced in a use= clause (report by 2219 Alain Bench). 
2220 + another fix for infocmp -i option, which did not ensure that some 2221 escape sequences had comparable prefixes (report by Alain Bench). 2222 2223 20060401 2224 + improve discussion of init/reset in terminfo and tput manpages 2225 (report by Alain Bench). 2226 + use is3 string for a fallback of rs3 in the reset program; it was 2227 using is2 (report by Alain Bench). 2228 + correct logic for infocmp -i option, which did not account for 2229 multiple digits in a parameter (cf: 20040828) (report by Alain 2230 Bench). 2231 + move _nc_handle_sigwinch() to lib_setup.c to make --with-termlib 2232 option work after 20060114 changes (report by Arkadiusz Miskiewicz). 2233 + add copyright notices to test-programs as needed (report by William 2234 McBrine). 2235 2236 20060318 2237 + modify ncurses.c 'F' test to combine the wide-characters with color 2238 and/or video attributes. 2239 + modify test/ncurses to use CTL/Q or ESC consistently for exiting 2240 a test-screen (some commands used 'x' or 'q'). 2241 2242 20060312 2243 + fix an off-by-one in the scrolling-region change (cf: 20060311). 2244 2245 20060311 2246 + add checks in waddchnstr() and wadd_wchnstr() to stop copying when 2247 a null character is found (report by Igor Bogomazov). 2248 + modify progs/Makefile.in to make "tput init" work properly with 2249 cygwin, i.e., do not pass a ".exe" in the reference string used 2250 in check_aliases (report by Samuel Thibault). 2251 + add some checks to ensure current position is within scrolling 2252 region before scrolling on a new line (report by Dan Gookin). 2253 + change some NewChar() usage to static variables to work around 2254 stack garbage introduced when cchar_t is not packed (Redhat #182024). 2255 2256 20060225 2257 + workarounds to build test/movewindow with PDcurses 2.7. 2258 + fix for nsterm-16color entry (patch by Alain Bench). 2259 + correct a typo in infocmp manpage (Debian #354281).
2260 2261 20060218 2262 + add nsterm-16color entry -TD 2263 + updated mlterm terminfo entry -TD 2264 + remove 970913 feature for copying subwindows as they are moved in 2265 mvwin() (discussion with Bryan Christ). 2266 + modify test/demo_menus.c to demonstrate moving a menu (both the 2267 window and subwindow) using shifted cursor-keys. 2268 + start implementing recursive mvwin() in movewindow.c (incomplete). 2269 + add a fallback definition for GCC_PRINTFLIKE() in test.priv.h, 2270 for movewindow.c (report by William McBrine). 2271 + add help-message to test/movewindow.c 2272 2273 20060211 2274 + add test/movewindow.c, to test mvderwin(). 2275 + fix ncurses soft-key test so color changes are shown immediately 2276 rather than delayed. 2277 + modify ncurses soft-key test to hide the keys when exiting the test 2278 screen. 2279 + fixes to build test programs with PDCurses 2.7, e.g., its headers 2280 rely on autoconf symbols, and it declares stubs for nonfunctional 2281 terminfo and termcap entrypoints. 2282 2283 20060204 2284 + improved test/configure to build test/ncurses on HPUX 11 using the 2285 vendor curses. 2286 + documented ALTERNATE CONFIGURATIONS in the ncurses manpage, for the 2287 benefit of developers who do not read INSTALL. 2288 2289 20060128 2290 + correct form library Window_To_Buffer() change (cf: 20040516), which 2291 should ignore the video attributes (report by Ricardo Cantu). 2292 2293 20060121 2294 + minor fixes to xmc-glitch experimental code: 2295 + suppress line-drawing 2296 + implement max_attributes 2297 tested with xterm. 2298 + minor fixes for the database iterator. 2299 + fix some buffer limits in c++ demo (comment by Falk Hueffner in 2300 Debian #348117). 2301 2302 20060114 2303 + add toe -a option, to show all databases. This uses new private 2304 interfaces in the ncurses library for iterating through the list of 2305 databases. 
    + fix toe from 20000909 changes which made it not look at
      $HOME/.terminfo
    + make toe's -v option parameter optional as per manpage.
    + improve SIGWINCH handling by postponing its effect during newterm(),
      etc., when allocating screens.

20060111
    + modify wgetnstr() to return KEY_RESIZE if a sigwinch occurs. Use
      this in test/filter.c
    + fix an error in filter() modification which caused some applications
      to fail.

20060107
    + check if filter() was called when getting the screensize. Keep it
      at 1 if so (based on Redhat #174498).
    + add extension nofilter().
    + refined the workaround for ACS mapping.
    + make ifdef's consistent in curses.h for the extended colors so the
      header file can be used for the normal curses library. The header
      file installed for extended colors is a variation of the
      wide-character configuration (report by Frederic L W Meunier).

20051231
    + add a workaround to ACS mapping to allow applications such as
      test/blue.c to use the "PC ROM" characters by masking them with
      A_ALTCHARSET. This worked up til 5.5, but was lost in the revision
      of legacy coding (report by Michael Deutschmann).
    + add a null-pointer check in the wide-character version of
      calculate_actual_width() (report by Victor Julien).
    + improve test/ncurses 'd' (color-edit) test by allowing the RGB
      values to be set independently (patch by William McBrine).
    + modify test/configure script to allow building test programs with
      PDCurses/X11.
    + modified test programs to allow some to work with NetBSD curses.
      Several do not because NetBSD curses implements a subset of X/Open
      curses, and also lacks much of SVr4 additions. But it's enough for
      comparison.
    + update config.guess and config.sub

20051224
    + use BSD-specific fix for return-value from cgetent() from CVS where
      an unknown terminal type would be reported as "database not found".
    + make tgetent() return code more readable using new symbols
      TGETENT_YES, etc.
    + remove references to non-existent "tctest" program.
    + remove TESTPROGS from progs/Makefile.in (it was referring to code
      that was never built in that directory).
    + typos in curs_addchstr.3x, some doc files (noticed in OpenBSD CVS).

20051217
    + add use_legacy_coding() function to support lynx's font-switching
      feature.
    + fix formatting in curs_termcap.3x (report by Mike Frysinger).
    + modify MKlib_gen.sh to change preprocessor-expanded _Bool back to
      bool.

20051210
    + extend test/ncurses.c 's' (overlay window) test to exercise overlay(),
      overwrite() and copywin() with different combinations of colors and
      attributes (including background color) to make it easy to see the
      effect of the different functions.
    + corrections to menu/m_global.c for wide-characters (report by
      Victor Julien).

20051203
    + add configure option --without-dlsym, allowing developers to
      configure GPM support without using dlsym() (discussion with Michael
      Setzer).
    + fix wins_nwstr(), which did not handle single-column non-8bit codes
      (Debian #341661).

20051126
    + move prototypes for wide-character trace functions from curses.tail
      to curses.wide to avoid accidental reference to those if
      _XOPEN_SOURCE_EXTENDED is defined without ensuring that <wchar.h> is
      included.
    + add/use NCURSES_INLINE definition.
    + change some internal functions to use int/unsigned rather than the
      short equivalents.

20051119
    + remove a redundant check in lib_color.c (Debian #335655).
    + use ld's -search_paths_first option on Darwin to work around odd
      search rules on that platform (report by Christian Gennerat, analysis
      by Andrea Govoni).
    + remove special case for Darwin in CF_XOPEN_SOURCE configure macro.
    + ignore EINTR in tcgetattr/tcsetattr calls (Debian #339518).
    + fix several bugs in test/bs.c (patch by Stephen Lindholm).

20051112
    + other minor fixes to cygwin based on tack -TD
    + correct smacs in cygwin (Debian #338234, report by Baurzhan
      Ismagulov, who noted that it was fixed in Cygwin).

20051029
    + add shifted up/down arrow codes to xterm-new as kind/kri strings -TD
    + modify wbkgrnd() to avoid clearing the A_CHARTEXT attribute bits
      since those record the state of multicolumn characters (Debian
      #316663).
    + modify werase to clear multicolumn characters that extend into
      a derived window (Debian #316663).

20051022
    + move assignment from environment variable ESCDELAY from initscr()
      down to newterm() so the environment variable affects timeouts for
      terminals opened with newterm() as well.
    + fix a memory leak in keyname().
    + add test/demo_altkeys.c
    + modify test/demo_defkey.c to exit from loop via 'q' to allow
      leak-checking, as well as fix a buffer size in winnstr() call.

20051015
    + correct order of use-clauses in rxvt-basic entry which made codes for
      f1-f4 vt100-style rather than vt220-style (report by Gabor Z Papp).
    + suppress configure check for gnatmake if Ada95/Makefile.in is not
      found.
    + correct a typo in configure --with-bool option for the case where
      --without-cxx is used (report by Daniel Jacobowitz).
    + add a note to INSTALL's discussion of --with-normal, pointing out
      that one may wish to use --without-gpm to ensure a completely
      static link (prompted by report by Felix von Leitner).

20051010 5.5 release for upload to

20051008
    + document in demo_forms.c some portability issues.

20051001
    + document side-effect of werase() which sets the cursor position.
    + save/restore the current position in form field editing to make
      overlay mode work.

20050924
    + correct header dependencies in progs, allowing parallel make (report
      by Daniel Jacobowitz).
    + modify CF_BUILD_CC to ensure that pre-setting $BUILD_CC overrides
      the configure check for --with-build-cc (report by Daniel Jacobowitz).
    + modify CF_CFG_DEFAULTS to not use /usr as the default prefix for
      NetBSD.
    + update config.guess and config.sub from

20050917
    + modify sed expression which computes path for /usr/lib/terminfo
      symbolic link in install to ensure that it does not change unexpected
      levels of the path (Gentoo #42336).
    + modify default for --disable-lp64 configure option to reduce impact
      on existing 64-bit builds. Enabling the _LP64 option may change the
      size of chtype and mmask_t. However, for ABI 6, it is enabled by
      default (report by Mike Frysinger).
    + add configure script check for --enable-ext-mouse, bump ABI to 6 by
      default if it is used.
    + improve configure script logic for bumping ABI to omit this if the
      --with-abi-version option was used.
    + update address for Free Software Foundation in tack's source.
    + correct wins_wch(), which was not marking the filler-cells of
      multi-column characters (cf: 20041023).

20050910
    + modify mouse initialization to ensure that Gpm_Open() is called only
      once. Otherwise GPM gets confused in its initialization of signal
      handlers (Debian #326709).

20050903
    + modify logic for backspacing in a multiline form field to ensure that
      it works even when the preceding line is full (report by Frank van
      Vugt).
    + remove comment about BUGS section of ncurses manpage (Debian #325481)

20050827
    + document some workarounds for shared and libtool library
      configurations in INSTALL (see --with-shared and --with-libtool).
    + modify CF_GCC_VERSION and CF_GXX_VERSION macros to accommodate
      cross-compilers which emit the platform name in their version
      message, e.g.,
        arm-sa1100-linux-gnu-g++ (GCC) 4.0.1
      (report by Frank van Vugt).

20050820
    + start updating documentation for upcoming 5.5 release.
    + fix to make libtool and libtinfo work together again (cf: 20050122).
    + fixes to allow building traces into libtinfo
    + add debug trace to tic that shows if/how ncurses will write to the
      lower corner of a terminal's screen.
    + update llib-l* files.

20050813
    + modify initializers in c++ binding to build with old versions of g++.
    + improve special case for 20050115 repainting fix, ensuring that if
      the first changed cell is not a character that the range to be
      repainted is adjusted to start at a character's beginning (Debian
      #316663).

20050806
    + fixes to build on QNX 6.1
    + improve configure script checks for Intel 9.0 compiler.
    + remove #include's for libc.h (obsolete).
    + adjust ifdef's in curses.priv.h so that when cross-compiling to
      produce comp_hash and make_keys, no dependency on wchar.h is needed.
      That simplifies the build-cppflags (report by Frank van Vugt).
    + move modules related to key-binding into libtinfo to fix linkage
      problem caused by 20050430 changes to MKkeyname.sh (report by
      Konstantin Andreev).

20050723
    + updates/fixes for configure script macros from vile -TD
    + make prism9's sgr string agree with the rest of the terminfo -TD
    + make vt220's sgr0 string consistent with sgr string, do this for
      several related cases -TD
    + improve translation to termcap by filtering the 'me' (sgr0) strings
      as in the runtime call to tgetent() (prompted by a discussion with
      Thomas Klausner).
    + improve tic check for sgr0 versus sgr(0), to help ensure that sgr0
      resets line-drawing.

20050716
    + fix special cases for trimming sgr0 for hurd and vt220 (Debian
      #318621).
    + split-out _nc_trim_sgr0() from modifications made to tgetent(), to
      allow it to be used by tic to provide information about the runtime
      changes that would be made to sgr0 for termcap applications.
    + modify make_sed.sh to make the group-name in the NAME section of
      form/menu library manpage agree with the TITLE string when renaming
      is done for Debian (Debian #78866).

20050702
    + modify parameter type in c++ binding for insch() and mvwinsch() to
      be consistent with underlying ncurses library (was char, is chtype).
    + modify treatment of Intel compiler to allow _GNU_SOURCE to be defined
      on Linux.
    + improve configure check for nanosleep(), checking that it works since
      some older systems such as AIX 4.3 have a nonworking version.

20050625
    + update config.guess and config.sub from
    + modify misc/shlib to work in test-directory.
    + suppress $suffix in misc/run_tic.sh when cross-compiling. This
      allows cross-compiles to use the host's tic program to handle the
      "make install.data" step.
    + improve description of $LINES and $COLUMNS variables in manpages
      (prompted by report by Dave Ulrick).
    + improve description of cross-compiling in INSTALL
    + add NCURSES-Programming-HOWTO.html by Pradeep Padala
      (see).
    + modify configure script to obtain soname for GPM library (discussion
      with Daniel Jacobowitz).
    + modify configure script so that --with-chtype option will still
      compute the unsigned literals suffix for constants in curses.h
      (report by Daniel Jacobowitz:
    + patches from Daniel Jacobowitz:
      + the man_db.renames entry for tack.1 was backwards.
      + tack.1 had some 1m's that should have been 1M's.
      + the section for curs_inwstr.3 was wrong.

20050619
    + correction to --with-chtype option (report by Daniel Jacobowitz).

20050618
    + move build-time edit_man.sh and edit_man.sed scripts to top directory
      to simplify reusing them for renaming tack's manpage (prompted by a
      review of Debian package).
    + revert minor optimization from 20041030 (Debian #313609).
    + libtool-specific fixes, tested with libtool 1.4.3, 1.5.0, 1.5.6,
      1.5.10 and 1.5.18 (all work except as noted previously for the c++
      install using libtool 1.5.0):
      + modify the clean-rule in c++/Makefile.in to work with IRIX64 make
        program.
      + use $(LIBTOOL_UNINSTALL) symbol, overlooked in 20030830
    + add configure options --with-chtype and --with-mmask-t, to allow
      overriding of the non-LP64 model's use of the corresponding types.
    + revise test for size of chtype (and mmask_t), which always returned
      "long" due to an uninitialized variable (report by Daniel Jacobowitz).

20050611
    + change _tracef's that used "%p" format for va_list values to ignore
      that, since on some platforms those are not pointers.
    + fixes for long-formats in printf's due to largefile support.

20050604
    + fixes for termcap support:
      + reset pointer to _nc_curr_token.tk_name when the input stream is
        closed, which could point to free memory (cf: 20030215).
      + delink TERMTYPE data which is used by the termcap reader, so that
        extended names data will be freed consistently.
      + free pointer to TERMTYPE data in _nc_free_termtype() rather than
        its callers.
      + add some entrypoints for freeing permanently allocated data via
        _nc_freeall() when NO_LEAKS is defined.
    + amend 20041030 change to _nc_do_color to ensure that optimization is
      applied only when the terminal supports back_color_erase (bce).

20050528
    + add sun-color terminfo entry -TD
    + correct a missing assignment in c++ binding's method
      NCursesPanel::UserPointer() from 20050409 changes.
    + improve configure check for large-files, adding check for dirent64
      from vile -TD
    + minor change to configure script to improve linker options for the
      Ada95 tree.

20050515
    + document error conditions for ncurses library functions (report by
      Stanislav Ievlev).
    + regenerated html documentation for ada binding.
      see

20050507
    + regenerated html documentation for manpages.
    + add $(BUILD_EXEEXT) suffix to invocation of make_keys in
      ncurses/Makefile (Gentoo #89772).
    + modify c++/demo.cc to build with g++ -fno-implicit-templates option
      (patch by Mike Frysinger).
    + modify tic to filter out long extended names when translating to
      termcap format. Only two characters are permissible for termcap
      capability names.

20050430
    + modify terminfo entries xterm-new and rxvt to add strings for
      shift-, control-cursor keys.
    + workaround to allow c++ binding to compile with g++ 2.95.3, which
      has a broken implementation of static_cast<> (patch by Jeff Chua).
    + modify initialization of key lookup table so that if an extended
      capability (tic -x) string is defined, and its name begins with 'k',
      it will automatically be treated as a key.
    + modify test/keynames.c to allow for the possibility of extended
      key names, e.g., via define_key(), or via "tic -x".
    + add test/demo_termcap.c to show the contents of given entry via the
      termcap interface.

20050423
    + minor fixes for vt100/vt52 entries -TD
    + add configure option --enable-largefile
    + corrected libraries used to build Ada95/gen/gen, found in testing
      gcc 4.0.0.

20050416
    + update config.guess, config.sub
    + modify configure script check for _XOPEN_SOURCE, disable that on
      Darwin whose header files have problems (patch by Chris Zubrzycki).
    + modify form library Is_Printable_String() to use iswprint() rather
      than wcwidth() for determining if a character is printable. The
      latter caused it to reject menu items containing non-spacing
      characters.
    + modify ncurses test program's F-test to handle non-spacing characters
      by combining them with a reverse-video blank.
    + review/fix several gcc -Wconversion warnings.

20050409
    + correct an off-by-one error in m_driver() for mouse-clicks used to
      position the mouse to a particular item.
    + implement test/demo_menus.c
    + add some checks in lib_mouse to ensure SP is set.
    + modify C++ binding to make 20050403 changes work with the configure
      --enable-const option.

20050403
    + modify start_color() to return ERR if it cannot allocate memory.
    + address g++ compiler warnings in C++ binding by adding explicit
      member initialization, assignment operators and copy constructors.
      Most of the changes simply preserve the existing semantics of the
      binding, which can leak memory, etc., but by making these features
      visible, it provides a framework for improving the binding.
    + improve C++ binding using static_cast, etc.
    + modify configure script --enable-warnings to add options to g++ to
      correspond to the gcc --enable-warnings.
    + modify C++ binding to use some C internal functions to make it
      compile properly on Solaris (and other platforms).

20050327
    + amend change from 20050320 to limit it to configurations with a
      valid locale.
    + fix a bug introduced in 20050320 which broke the translation of
      nonprinting characters to uparrow form (report by Takahashi Tamotsu).

20050326
    + add ifdef's for _LP64 in curses.h to avoid using wasteful 64-bits for
      chtype and mmask_t, but add configure option --disable-lp64 in case
      anyone used that configuration.
    + update misc/shlib script to account for Mac OS X (report by Michail
      Vidiassov).
    + correct comparison for wrapping multibyte characters in
      waddch_literal() (report by Takahashi Tamotsu).

20050320
    + add -c and -w options to tset to allow user to suppress ncurses'
      resizing of the terminal emulator window in the special case where it
      is not able to detect the true size (report by Win Delvaux, Debian
      #300419).
    + modify waddch_nosync() to account for locale zh_CN.GBK, which uses
      codes 128-159 as part of multibyte characters (report by Wang
      WenRui, Debian #300512).

20050319
    + modify ncurses.c 'd' test to make it work with 88-color
      configuration, i.e., by implementing scrolling.
    + improve scrolling in ncurses.c 'c' and 'C' tests, e.g., for 88-color
      configuration.

20050312
    + change tracemunch to use strict checking.
    + modify ncurses.c 'p' test to test line-drawing within a pad.
    + implement environment variable NCURSES_NO_UTF8_ACS to support
      miscellaneous terminal emulators which ignore alternate character
      set escape sequences when in UTF-8 mode.

20050305
    + change NCursesWindow::err_handler() to a virtual function (request by
      Steve Beal).
    + modify fty_int.c and fty_num.c to handle wide characters (report by
      Wolfgang Gutjahr).
    + adapt fix for fty_alpha.c to fty_alnum.c, which also handled normal
      and wide characters inconsistently (report by Wolfgang Gutjahr).
    + update llib-* files to reflect internal interface additions/changes.

20050226
    + improve test/configure script, adding tests for _XOPEN_SOURCE, etc.,
      from lynx.
    + add aixterm-16color terminfo entry -TD
    + modified xterm-new terminfo entry to work with tgetent() changes -TD
    + extended changes in tgetent() from 20040710 to allow the substring of
      sgr0 which matches rmacs to be at the beginning of the sgr0 string
      (request by Thomas Wolff). Wolff says the visual effect in
      combination with pre-20040710 ncurses is improved.
    + fix off-by-one in winnstr() call which caused form field validation
      of multibyte characters to ignore the last character in a field.
    + correct logic in winsch() for inserting multibyte strings; the code
      would clear cells after the insertion rather than push them to the
      right (cf: 20040228).
    + fix an inconsistency in Check_Alpha_Field() between normal and wide
      character logic (report by Wolfgang Gutjahr).

20050219
    + fix a bug in editing wide-characters in form library: deleting a
      nonwide character modified the previous wide-character.
    + update manpage to describe NCURSES_MOUSE_VERSION 2.
    + correct manpage description of mouseinterval() (Debian #280687).
    + add a note to default_colors.3x explaining why this extension was
      added (Debian #295083).
    + add traces to panel library.

20050212
    + improve editing of wide-characters in form library: left/right
      cursor movement, and single-character deletions work properly.
    + disable GPM mouse support when $TERM happens to be prefixed with
      "xterm". Gpm_Open() would otherwise assert that it can deal with
      mouse events in this case.
    + modify GPM mouse support so it closes the server connection when
      the caller disables the mouse (report by Stanislav Ievlev).

20050205
    + add traces for callback functions in form library.
    + add experimental configure option --enable-ext-mouse, which defines
      NCURSES_MOUSE_VERSION 2, and modifies the encoding of mouse events to
      support wheel mice, which may transmit buttons 4 and 5. This works
      with xterm and similar X terminal emulators (prompted by question by
      Andreas Henningsson, this is also related to Debian #230990).
    + improve configure macros CF_XOPEN_SOURCE and CF_POSIX_C_SOURCE to
      avoid redefinition warnings on cygwin.

20050129
    + merge remaining development changes for extended colors (mostly
      complete, does not appear to break other configurations).
    + add xterm-88color.dat (part of extended colors testing).
    + improve _tracedump() handling of color pairs past 96.
    + modify return-value from start_color() to return OK if colors have
      already been started.
    + modify curs_color.3x to list error conditions for init_pair(),
      pair_content() and color_content().
    + modify pair_content() to return -1 for consistency with init_pair()
      if it corresponds to the default-color.
    + change internal representation of default-color to allow application
      to use color number 255. This does not affect the total number of
      color pairs which are allowed.
    + add a top-level tags rule.

20050122
    + add a null-pointer check in wgetch() in case it is called without
      first calling initscr().
    + add some null-pointer checks for SP, which is not set by libtinfo.
    + modify misc/shlib to ensure that absolute pathnames are used.
    + modify test/Makefile.in, etc., to link test programs only against the
      libraries needed, e.g., omit form/menu/panel library for the ones
      that are curses-specific.
    + change SP->_current_attr to a pointer, adjust ifdef's to ensure that
      libtinfo.so and libtinfow.so have the same ABI. The reason for this
      is that the corresponding data which belongs to the upper-level
      ncurses library has a different size in each model (report by
      Stanislav Ievlev).

20050115
    + minor fixes to allow test-compiles with g++.
    + correct column value shown in tic's warnings, which did not account
      for leading whitespace.
    + add a check in _nc_trans_string() for improperly ended strings, i.e.,
      where a following line begins in column 1.
    + modify _nc_save_str() to return a null pointer on buffer overflow.
    + improve repainting while scrolling wide-character data (Eungkyu Song).

20050108
    + merge some development changes to extend color capabilities.

20050101
    + merge some development changes to extend color capabilities.
    + fix manpage typo (FreeBSD report docs/75544).
    + update config.guess, config.sub
    > patches for configure script (Albert Chin-A-Young):
      + improved fix to make mbstate_t recognized on HPUX 11i (cf:
        20030705), making vsscanf() prototype visible on IRIX64. Tested
        on HP-UX 11i, Solaris 7, 8, 9, AIX 4.3.3, 5.2, Tru64 UNIX 4.0D, 5.1,
        IRIX64 6.5, Redhat Linux 7.1, 9, and RHEL 2.1, 3.0.
      + print the result of the --disable-home-terminfo option.
      + use -rpath when compiling with SGI C compiler.

20041225
    + add trace calls to remaining public functions in form and menu
      libraries.
    + fix check for numeric digits in test/ncurses.c 'b' and 'B' tests.
    + fix typo in test/ncurses.c 'c' test from 20041218.

20041218
    + revise test/ncurses.c 'c' color test to improve use for xterm-88color
      and xterm-256color, added 'C' test using the wide-character color_set
      and attr_set functions.

20041211
    + modify configure script to work with Intel compiler.
    + fix a limit-check in wadd_wchnstr() which caused labels in the
      forms-demo to be one character short.
    + fix typo in curs_addchstr.3x (Jared Yanovich).
    + add trace calls to most functions in form and menu libraries.
    + update working-position for adding wide-characters when window is
      scrolled (prompted by related report by Eungkyu Song).

20041204
    + replace some references on Linux to wcrtomb() which use it to obtain
      the length of a multibyte string with _nc_wcrtomb, since wcrtomb() is
      broken in glibc (see Debian #284260).
    + corrected length-computation in wide-character support for
      field_buffer().
    + some fixes to frm_driver.c to allow it to accept multibyte input.
    + modify configure script to work with Intel 8.0 compiler.

20041127
    + amend change to setupterm() in 20030405 which would reuse the value
      of cur_term if the same output was selected. This now reuses it only
      when setupterm() is called from tgetent(), which has no notion of
      separate SCREENs. Note that tgetent() must be called after initscr()
      or newterm() to use this feature (Redhat Bugzilla #140326).
    + add a check in CF_BUILD_CC macro to ensure that developer has given
      the --with-build-cc option when cross-compiling (report by Alexandre
      Campo).
    + improved configure script checks for _XOPEN_SOURCE and
      _POSIX_C_SOURCE (fix for IRIX 5.3 from Georg Schwarz, _POSIX_C_SOURCE
      updates from lynx).
    + cosmetic fix to test/gdc.c to recolor the bottom edge of the box
      for consistency (comment by Dan Nelson).

20041120
    + update wsvt25 terminfo entry -TD
    + modify test/ins_wide.c to test all flavors of ins_wstr().
    + ignore filler-cells in wadd_wchnstr() when adding a cchar_t array
      which consists of multi-column characters, since this function
      constructs them (cf: 20041023).
    + modify winnstr() to return multibyte character strings for the
      wide-character configuration.

20041106
    + fixes to make slk_set() and slk_wset() accept and store multibyte
      or multicolumn characters.

20041030
    + improve color optimization a little by making _nc_do_color() check
      if the old/new pairs are equivalent to the default pair 0.
    + modify assume_default_colors() to not require that
      use_default_colors() be called first.

20041023
    + modify term_attrs() to use termattrs(), add the extended attributes
      such as enter_horizontal_hl_mode for WA_HORIZONTAL to term_attrs().
    + add logic in waddch_literal() to clear orphaned cells when one
      multi-column character partly overwrites another.
    + improved logic for clearing cells when a multi-column character
      must be wrapped to a new line.
    + revise storage of cells for multi-column characters to correct a
      problem with repainting. In the old scheme, it was possible for
      doupdate() to decide that only part of a multi-column character
      should be repainted since the filler cells stored only an attribute
      to denote them as fillers, rather than the character value and the
      attribute.

20041016
    + minor fixes for traces.
    + add SP->_screen_acs_map[], used to ensure that mapping of missing
      line-drawing characters is handled properly. For example, ACS_DARROW
      is absent from xterm-new, and it was coincidentally displayed the
      same as ACS_BTEE.

20041009
    + amend 20021221 workaround for broken acs to reset the sgr, rmacs
      and smacs strings as well. Also modify the check for screen's
      limitations in that area to allow the multi-character shift-in
      and shift-out which seem to work.
    + change GPM initialization, using dl library to load it dynamically
      at runtime (Debian #110586).

20041002
    + correct logic for color pair in setcchar() and getcchar() (patch by
      Marcin 'Qrczak' Kowalczyk).
    + add t/T commands to ncurses b/B tests to allow a different color to
      be tested for the attrset part of the test than is used in the
      background color.

20040925
    + fix to make setcchar() work when its wchar_t* parameter is
      pointing to a string which contains more data than can be converted.
    + modify wget_wstr() and example in ncurses.c to work if wchar_t and
      wint_t are different sizes (report by Marcin 'Qrczak' Kowalczyk).

20040918
    + remove check in wget_wch() added to fix an infinite loop, appears to
      have been working around a transitory glibc bug, and interferes
      with normal operation (report by Marcin 'Qrczak' Kowalczyk).
    + correct wadd_wch() and wecho_wch(), which did not pass the rendition
      information (report by Marcin 'Qrczak' Kowalczyk).
    + fix aclocal.m4 so that the wide-character version of ncurses gets
      compiled as libncursesw.5.dylib, instead of libncurses.5w.dylib
      (adapted from patch by James J Ramsey).
    + change configure script for --with-caps option to indicate that it
      is no longer experimental.
    + change configure script to reflect the fact that --enable-widec has
      not been "experimental" since 5.3 (report by Bruno Lustosa).

20040911
    + add 'B' test to ncurses.c, to exercise some wide-character functions.

20040828
    + modify infocmp -i option to match 8-bit controls against its table
      entries, e.g., so it can analyze the xterm-8bit entry.
    + add morphos terminfo entry, improve amiga-8bit entry (Pavel Fedin).
    + correct translation of "%%" in terminfo format to termcap, e.g.,
      using "tic -C" (Redhat Bugzilla #130921).
    + modified configure script CF_XOPEN_SOURCE macro to ensure that if
      it defines _POSIX_C_SOURCE, that it defines it to a specific value
      (comp.os.stratus newsgroup comment).

20040821
    + fixes to build with Ada95 binding with gnat 3.4 (all warnings are
      fatal, and gnat does not follow the guidelines for pragmas).
      However that did find a coding error in Assume_Default_Colors().
    + modify several terminfo entries to ensure xterm mouse and cursor
      visibility are reset in rs2 string: hurd, putty, gnome,
      konsole-base, mlterm, Eterm, screen (Debian #265784, #55637). The
      xterm entries are left alone - old ones for compatibility, and the
      new ones do not require this change. -TD

20040814
    + fake a SIGWINCH in newterm() to accommodate buggy terminal emulators
      and window managers (Debian #265631).
    > terminfo updates -TD
      + remove dch/dch1 from rxvt because they are implemented inconsistently
        with the common usage of bce/ech
      + remove khome from vt220 (vt220's have no home key)
      + add rxvt+pcfkeys

20040807
    + modify test/ncurses.c 'b' test, adding v/V toggles to cycle through
      combinations of video attributes so that for instance bold and
      underline can be tested. This made the legend too crowded, added
      a help window as well.
    + modify test/ncurses.c 'b' test to cycle through default colors if
      the -d option is set.
    + update putty terminfo entry (Robert de Bath).

20040731
    + modify test/cardfile.c to allow it to read more data than can be
      displayed.
    + correct logic in resizeterm.c which kept it from processing all
      levels of window hierarchy (reports by Folkert van Heusden,
      Chris Share).

20040724
    + modify "tic -cv" to ignore delays when comparing strings.
  Also modify it to ignore a canceled sgr string, e.g., for terminals
  which cannot properly combine attributes in one control sequence.
+ corrections for gnome and konsole entries (Redhat Bugzilla #122815,
  patch by Hans de Goede)
> terminfo updates -TD
+ make ncsa-m rmacs/smacs consistent with sgr
+ add sgr, rc/sc and ech to syscons entries
+ add function-keys to decansi
+ add sgr to mterm-ansi
+ add sgr, civis, cnorm to emu
+ correct/simplify cup in addrinfo

20040717
> terminfo updates -TD
+ add xterm-pc-fkeys
+ review/update gnome and gnome-rh90 entries (prompted by Redhat
  Bugzilla #122815).
+ review/update konsole entries
+ add sgr, correct sgr0 for kterm and mlterm
+ correct tsl string in kterm

20040711
+ add configure option --without-xterm-new

20040710
+ add check in wget_wch() for printable bytes that are not part of a
  multibyte character.
+ modify wadd_wchnstr() to render text using window's background
  attributes.
+ improve tic's check to compare sgr and sgr0.
+ fix c++ directory's .cc.i rule.
+ modify logic in tgetent() which adjusts the termcap "me" string
  to work with ISO-2022 string used in xterm-new (cf: 20010908).
+ modify tic's check for conflicting function keys to omit that if
  converting termcap to termcap format.
+ add -U option to tic and infocmp.
+ add rmam/smam to linux terminfo entry (Trevor Van Bremen)
> terminfo updates -TD
+ minor fixes for emu
+ add emu-220
+ change wyse acsc strings to use 'i' map rather than 'I'
+ fixes for avatar0
+ fixes for vp3a+

20040703
+ use tic -x to install terminfo database -TD
+ add -x to infocmp's usage message.
+ correct field used for comparing O_ROWMAJOR in set_menu_format()
  (report/patch by Tony Li).
+ fix a missing nul check in set_field_buffer() from 20040508 changes.
> terminfo updates -TD
+ make xterm-xf86-v43 derived from xterm-xf86-v40 rather than
  xterm-basic -TD
+ align with xterm patch #192's use of xterm-new -TD
+ update xterm-new and xterm-8bit for cvvis/cnorm strings -TD
+ make xterm-new the default "xterm" entry -TD

20040626
+ correct BUILD_CPPFLAGS substitution in ncurses/Makefile.in, to allow
  cross-compiling from a separate directory tree (report/patch by
  Dan Engel).
+ modify is_term_resized() to ensure that window sizes are nonzero,
  as documented in the manpage (report by Ian Collier).
+ modify CF_XOPEN_SOURCE configure macro to make Hurd port build
  (Debian #249214, report/patch by Jeff Bailey).
+ configure-script mods from xterm, e.g., updates to CF_ADD_CFLAGS
+ update config.guess, config.sub
> terminfo updates -TD
+ add mlterm
+ add xterm-xf86-v44
+ modify xterm-new aka xterm-xfree86 to accommodate luit, which
  relies on G1 being used via an ISO-2022 escape sequence (report by
  Juliusz Chroboczek)
+ add 'hurd' entry

20040619
+ reconsidered winsnstr(), decided after comparing other
  implementations that wrapping is an X/Open documentation error.
+ modify test/inserts.c to test all flavors of insstr().

20040605
+ add setlocale() calls to a few test programs which may require it:
  demo_forms.c, filter.c, ins_wide.c, inserts.c
+ correct a few misspelled function names in ncurses-intro.html (report
  by Tony Li).
+ correct internal name of key_defined() manpage, which conflicted with
  define_key().

20040529
+ correct size of internal pad used for holding wide-character
  field_buffer() results.
+ modify data_ahead() to work with wide-characters.
20040522
+ improve description of terminfo if-then-else expressions (suggested
  by Arne Thomassen).
+ improve test/ncurses.c 'd' test, allow it to use external file for
  initial palette (added xterm-16color.dat and linux-color.dat), and
  reset colors to the initial palette when starting/ending the test.
+ change limit-check in init_color() to allow r/g/b component to
  reach 1000 (cf: 20020928).

20040516
+ modify form library to use cchar_t's rather than char's in the
  wide-character configuration for storing data for field buffers.
+ correct logic of win_wchnstr(), which did not work for more than
  one cell.

20040508
+ replace memset/memcpy usage in form library with for-loops to
  simplify changing the datatype of FIELD.buf, part of wide-character
  changes.
+ fix some inconsistent use of #if/#ifdef (report by Alain Guibert).

20040501
+ modify menu library to account for actual number of columns used by
  multibyte character strings, in the wide-character configuration
  (adapted from patch by Philipp Tomsich).
+ add "-x" option to infocmp like tic's "-x", for use in "-F"
  comparisons.  This modifies infocmp to only report extended
  capabilities if the -x option is given, making this more consistent
  with tic.  Some scripts may break, since infocmp previously gave this
  information without an option.
+ modify termcap-parsing to retain 2-character aliases at the beginning
  of an entry if the "-x" option is used in tic.

20040424
+ minor compiler-warning and test-program fixes.

20040417
+ modify tic's missing-sgr warning to apply to terminfo only.
+ free some memory leaks in tic.
+ remove check in post_menu() that prevented menus from extending
  beyond the screen (request by Max J. Werner).
+ remove check in newwin() that prevents allocating windows
  that extend beyond the screen.  Solaris curses does this.
+ add ifdef in test/color_set.c to allow it to compile with older
  curses.
+ add napms() calls to test/dots.c to make it not be a CPU hog.

20040403
+ modify unctrl() to return null if its parameter does not correspond
  to an unsigned char.
+ add some limit-checks to guard isprint(), etc., from being used on
  values that do not fit into an unsigned char (report by Sami Farin).

20040328
+ fix a typo in the _nc_get_locale() change.

20040327
+ modify _nc_get_locale() to use setlocale() to query the program's
  current locale rather than using getenv().  This fixes a case in tin
  which relies on legacy treatment of 8-bit characters when the locale
  is not initialized (reported by Urs Jansen).
+ add sgr string to screen's and rxvt's terminfo entries -TD.
+ add a check in tic for terminfo entries having an sgr0 but no sgr
  string.  This confuses Tru64 and HPUX curses when combined with
  color, e.g., making them leave line-drawing characters in odd places.
+ correct casts used in ABSENT_BOOLEAN, CANCELLED_BOOLEAN, matching the
  original definitions used in the Debian package to fix a PowerPC bug
  before 20030802 (Debian #237629).

20040320
+ modify PutAttrChar() and PUTC() macro to improve use of
  A_ALTCHARSET attribute to prevent line-drawing characters from
  being lost in situations where the locale would otherwise treat the
  raw data as nonprintable (Debian #227879).

20040313
+ fix a redefinition of CTRL() macro in test/view.c for AIX 5.2 (report
  by Jim Idle).
+ remove ".PP" after ".SH NAME" in a few manpages; this confuses
  some apropos scripts (Debian #237831).
20040306
+ modify ncurses.c 'r' test so editing commands, like inserted text,
  set the field background, and the state of insert/overlay editing
  mode is shown in that test.
+ change syntax of dummy targets in Ada95 makefiles to work with pmake.
+ correct logic in test/ncurses.c 'b' for noncolor terminals which
  did not recognize a quit-command (cf: 20030419).

20040228
+ modify _nc_insert_ch() to allow for its input to be part of a
  multibyte string.
+ split out lib_insnstr.c, to prepare to rewrite it.  X/Open states
  that this function performs wrapping, unlike all of the other
  insert-functions.  Currently it does not wrap.
+ check for nl_langinfo(CODESET), use it if available (report by
  Stanislav Ievlev).
+ split-out CF_BUILD_CC macro, actually did this for lynx first.
+ fixes for configure script CF_WITH_DBMALLOC and CF_WITH_DMALLOC,
  which happened to work with bash, but not with Bourne shell (report
  by Marco d'Itri via tin-dev).

20040221
+ some changes to adapt the form library to wide characters, incomplete
  (request by Mike Aubury).
+ add symbol to curses.h which can be used to suppress include of
  stdbool.h, e.g.,
      #define NCURSES_ENABLE_STDBOOL_H 0
      #include <curses.h>
  (discussion on XFree86 mailing list).

20040214
+ modify configure --with-termlib option to accept a value which sets
  the name of the terminfo library.  This would allow a packager to
  build libtinfow.so renamed to coincide with libtinfo.so (discussion
  with Stanislav Ievlev).
+ improve documentation of --with-install-prefix, --prefix and
  $(DESTDIR) in INSTALL (prompted by discussion with Paul Lew).
+ add configure check if the compiler can use -c -o options to rename
  its output file, use that to omit the 'cd' command which was used to
  ensure object files are created in a separate staging directory
  (prompted by comments by Johnny Wezel, Martin Mokrejs).

20040208 5.4 release for upload to
+ update TO-DO.

20040207 pre-release
+ minor fixes to _nc_tparm_analyze(), i.e., do not count %i as a param,
  and do not count %d if it follows a %p.
+ correct an inconsistency between handling of codes in the 128-255
  range, e.g., as illustrated by test/ncurses.c f/F tests.  In POSIX
  locale, the latter did not show printable results, while the former
  did.
+ modify MKlib_gen.sh to compensate for broken C preprocessor on Mac
  OS X, which alters "%%" to "% % " (report by Robert Simms, fix
  verified by Scott Corscadden).

20040131 pre-release
+ modify SCREEN struct to align it between normal/wide curses flavors
  to simplify future changes to build a single version of libtinfo
  (patch by Stanislav Ievlev).
+ document handling of carriage return by addch() in manpage.
+ document special features of unctrl() in manpage.
+ documented interface changes in INSTALL.
+ corrected control-char test in lib_addch.c to account for locale
  (Debian #230335, cf: 971206).
+ updated test/configure.in to use AC_EXEEXT and AC_OBJEXT.
+ fixes to compile Ada95 binding with Debian gnat 3.15p-4 package.
+ minor configure-script fixes for older ports, e.g., BeOS R4.5.

20040125 pre-release
+ amend change to PutAttrChar() from 20030614 which computed the number
  of cells for a possibly multi-cell character.  The 20030614 change
  forced the cell to a blank if the result from wcwidth() was not
  greater than zero.  However, wcwidth() called for parameters in the
  range 128-255 can give this return value.
  The logic now simply ensures that the number of cells is greater
  than zero, without modifying the displayed value.

20040124 pre-release
+ looked good for 5.4 release for upload to (but see above)
+ modify configure script check for ranlib to use AC_CHECK_TOOL, since
  that works better for cross-compiling.

20040117 pre-release
+ modify lib_get_wch.c to prefer mblen/mbtowc over mbrlen/mbrtowc to
  work around core dump in Solaris 8's locale support, e.g., for
  zh_CN.GB18030 (report by Saravanan Bellan).
+ add includes for <stdarg.h> and <stdio.h> in configure script macro
  to make <wchar.h> check work with Tru64 4.0d.
+ add terminfo entry for U/Win -TD
+ add terminfo entries for SFU aka Interix aka OpenNT (Federico
  Bianchi).
+ modify tput's error messages to prefix them with the program name
  (report by Vincent Lefevre, patch by Daniel Jacobowitz; see Debian
  #227586).
+ correct a place in tack where exit_standout_mode was used instead of
  exit_attribute_mode (patch by Jochen Voss; see Debian #224443).
+ modify c++/cursesf.h to use const in the Enumeration_Field method.
+ remove an ambiguous (actually redundant) method from c++/cursesf.h
+ make $HOME/.terminfo update optional (suggested by Stanislav Ievlev).
+ improve sed script which extracts libtool's version in the
  CF_WITH_LIBTOOL macro.
+ add ifdef'd call to AC_PROG_LIBTOOL to CF_WITH_LIBTOOL macro (to
  simplify local patch for Albert Chin-A-Young).
+ add $(CXXFLAGS) to link command in c++/Makefile.in (adapted from
  patch by Albert Chin-A-Young).
+ fix a missing substitution in configure.in for "$target", needed for
  HPUX .so/.sl case.
+ resync CF_XOPEN_SOURCE configure macro with lynx; fixes IRIX64 and
  NetBSD 1.6 conflicts with _XOPEN_SOURCE.
+ make check for stdbool.h more specific, to ensure that including it
  will actually define/declare bool for the configured compiler.
+ rewrite ifdef's in curses.h relating NCURSES_BOOL and bool.  The
  intention is to #define NCURSES_BOOL as bool when the compiler
  declares bool, and to #define bool as NCURSES_BOOL when it does not
  (reported by Jim Gifford, Sam Varshavchik, cf: 20031213).

20040110 pre-release
+ change minor version to 4, i.e., ncurses 5.4
+ revised/improved terminfo entries for tvi912b, tvi920b (Benjamin C W
  Sittler).
+ simplified ncurses/base/version.c by defining the result from the
  configure script rather than using sprintf (suggested by Stanislav
  Ievlev).
+ remove obsolete casts from c++/cursesw.h (reported by Stanislav
  Ievlev).
+ modify configure script so that when configuring for termlib, programs
  such as tic are not linked with the upper-level ncurses library
  (suggested by Stanislav Ievlev).
+ move version.c from ncurses/base to ncurses/tinfo to allow linking
  of tic, etc., using libtinfo (suggested by Stanislav Ievlev).

20040103
+ adjust -D's to build ncursesw on OpenBSD.
+ modify CF_PROG_EXT to make OS/2 build with EXEEXT.
+ add pecho_wchar().
+ remove <wctype.h> include from lib_slk_wset.c which is not needed (or
  available) on older platforms.

20031227
+ add -D's to build ncursesw on FreeBSD 5.1.
+ modify shared library configuration for FreeBSD 4.x/5.x to add the
  soname information (request by Marc Glisse).
+ modify _nc_read_tic_entry() to not use MAX_ALIAS, but PATH_MAX only,
  for limiting the length of a filename in the terminfo database.
+ modify termname() to return the terminal name used by setupterm()
  rather than $TERM, without truncating to 14 characters as documented
  by X/Open (report by Stanislav Ievlev, cf: 970719).
+ re-add definition for _BSD_TYPES, lost in merge (cf: 20031206).

20031220
+ add configure option --with-manpage-format=catonly to address
  behavior of BSDI, allow install of man+cat files on NetBSD, whose
  behavior has diverged by requiring both to be present.
+ remove leading blanks from comment-lines in manlinks.sed script to
  work with Tru64 4.0d.
+ add screen.linux terminfo entry (discussion on mutt-users mailing
  list).

20031213
+ add a check for tic to flag missing backslashes for termcap
  continuation lines.  ncurses reads the whole entry, but termcap
  applications do not.
+ add configure option "--with-manpage-aliases" extending
  "--with-manpage-aliases" to provide the option of generating ".so"
  files rather than symbolic links for manpage aliases.
+ add bool definition in include/curses.h.in for configurations with no
  usable C++ compiler (cf: 20030607).
+ fix pathname of SigAction.h for building with --srcdir (reported by
  Mike Castle).

20031206
+ folded ncurses/base/sigaction.c into includes of ncurses/SigAction.h,
  since that header is used only within ncurses/tty/lib_tstp.c, for
  non-POSIX systems (discussion with Stanislav Ievlev).
+ remove obsolete _nc_outstr() function (report by Stanislav Ievlev
  <inger@altlinux.org>).
+ add test/background.c and test/color_set.c
+ modify color_set() function to work with color pair 0 (report by
  George Andreou <gbandreo@tem.uoc.gr>).
+ add configure option --with-trace, since defining TRACE seems too
  awkward for some cases.
+ remove a call to _nc_free_termtype() from read_termtype(), since the
  corresponding buffer contents were already zeroed by a memset (cf:
  20000101).
+ improve configure check for _XOPEN_SOURCE and related definitions,
  adding special cases for Solaris' __EXTENSIONS__ and FreeBSD's
  __BSD_TYPES (reports by Marc Glisse <marc.glisse@normalesup.org>).
+ small fixes to compile on Solaris and IRIX64 using cc.
+ correct typo in check for pre-POSIX sort options in MKkey_defs.sh
  (cf: 20031101).

20031129
+ modify _nc_gettime() to avoid a problem with arithmetic on unsigned
  values (Philippe Blain).
+ improve the nanosleep() logic in napms() by checking for EINTR and
  restarting (Philippe Blain).
+ correct expression for "%D" in lib_tgoto.c (Juha Jarvi
  <mooz@welho.com>).

20031122
+ add linux-vt terminfo entry (Andrey V Lukyanov <land@long.yar.ru>).
+ allow "\|" escape in terminfo; tic should not warn about this.
+ save the full pathname of the trace-file the first time it is opened,
  to avoid creating it in different directories if the application
  opens and closes it while changing its working directory.
+ modify configure script to provide a non-empty default for
  $BROKEN_LINKER

20031108
+ add DJGPP to special case of DOS-style drive letters potentially
  appearing in TERMCAP environment variable.
+ fix some spelling in comments (reports by Jason McIntyre, Jonathon
  Gray).
+ update config.guess, config.sub

20031101
+ fix a memory leak in error-return from setupterm() (report by
  Stanislav Ievlev <inger@altlinux.org>).
+ use EXEEXT and OBJEXT consistently in makefiles.
+ amend fixes for cross-compiling to use separate executable-suffix
  BUILD_EXEEXT (cf: 20031018).
+ modify MKkey_defs.sh to check for sort utility that does not
  recognize key options, e.g., busybox (report by Peter S Mazinger
  <ps.m@gmx.net>).
+ fix potential out-of-bounds indexing in _nc_infotocap() (found by
  David Krause using some of the new malloc debugging features
  under OpenBSD, patch by Ted Unangst).
+ modify CF_LIB_SUFFIX for Itanium releases of HP-UX, which use a
  ".so" suffix (patch by Jonathan Ward <Jonathan.Ward@hp.com>).

20031025
+ update terminfo for xterm-xfree86 -TD
+ add check for multiple "tc=" clauses in a termcap to tic.
+ check for missing op/oc in tic.
+ correct _nc_resolve_uses() and _nc_merge_entry() to allow infocmp and
  tic to show cancelled capabilities.  These functions were ignoring
  the state of the target entry, which should be untouched if cancelled.
+ correct comment in tack/output.c (Debian #215806).
+ add some null-pointer checks to lib_options.c (report by Michael
  Bienia).
+ regenerated html documentation.
+ correction to tar-copy.sh, remove a trap command that resulted in
  leaving temporary files (cf: 20030510).
+ remove contact/maintainer addresses for Juergen Pfeifer (his request).

20031018
+ updated test/configure to reflect changes for libtool (cf: 20030830).
+ fix several places in tack/pad.c which tested and used the parameter-
  and parameterless strings inconsistently, i.e., in pad_rin(),
  pad_il(), pad_indn() and pad_dl() (Debian #215805).
+ minor fixes for configure script and makefiles to clean up executables
  generated when cross-compiling for DJGPP.
+ modify infocmp to omit check for $TERM for operations that do not
  require it, e.g., "infocmp -e" used to build fallback list (report by
  Koblinger Egmont).

20031004
+ add terminfo entries for DJGPP.
+ updated note about maintainer in ncurses-intro.html

20030927
+ update terminfo entries for gnome terminal.
+ modify tack to reset colors after each color test, correct a place
  where exit_standout_mode was used instead of exit_attribute_mode.
+ improve tack's bce test by making it set colors other than black
  on white.
+ plug a potential recursion between napms() and _nc_timed_wait()
  (report by Philippe Blain).

20030920
+ add --with-rel-version option as a workaround to make libtool on
  Darwin generate the "same" library names as with the --with-shared
  option.  The Darwin ld program does not work well with a zero as
  the minor-version value (request by Chris Zubrzycki).
+ modify CF_MIXEDCASE_FILENAMES macro to work with cross-compiling.
+ modify tack to allow it to run from fallback terminfo data.
> patch by Philippe Blain:
+ improve PutRange() by adjusting call to EmitRange() and corresponding
  return-value to not emit unchanged characters at the end of the
  range.
+ improve a check for changed-attribute by exiting a loop when the
  change is found.
+ improve logic in TransformLine(), eliminating a duplicated comparison
  in the clr_bol logic.

20030913
> patch by Philippe Blain:
+ in ncurses/tty/lib_mvcur.c,
  move the label 'nonlocal' just before the second gettimeofday() to
  be able to compute the elapsed time when 'goto nonlocal' is used.
  Rename 'msec' to 'microsec' in the debug-message.
+ in ncurses/tty/lib_mvcur.c,
  use _nc_outch() in carriage return/newline movement instead of
  putchar(), which goes to stdout.  Move test for xold>0 out of loop.
+ in ncurses/tinfo/setbuf.c,
  set the flag SP->_buffered at the end of operations when all has been
  successful (typeMalloc can fail).
+ simplify NC_BUFFERED macro by moving check inside _nc_setbuf().

20030906
+ modify configure script to avoid using "head -1", which does not
  work if POSIXLY_CORRECT (sic) is set.
+ modify run_tic.in to avoid using wrong shared libraries when
  cross-compiling (Dan Kegel).

20030830
+ alter configure script help message to make it clearer that
  --with-build-cc does not specify a cross-compiler (suggested by Dan
  Kegel <dank@kegel.com>).
+ modify configure script to accommodate libtool 1.5, as well as add a
  parameter to the "--with-libtool" option which can specify the
  pathname of libtool (report by Chris Zubrzycki).  We note that
  libtool 1.5 has more than one bug in its C++ support, so it is not
  able to install libncurses++, for instance, if $DESTDIR or the option
  --with-install-prefix is used.

20030823
> patch by Philippe Blain:
+ move assignments to SP->_cursrow, SP->_curscol into online_mvcur().
+ make baudrate computation in delay_output() consistent with the
  assumption in _nc_mvcur_init(), i.e., a byte is 9 bits.

20030816
+ modify logic in waddch_literal() to take into account zh_TW.Big5,
  whose multibyte sequences may contain "printable" characters, e.g.,
  a "g" in the sequence "\247g" (Debian #204889, cf: 20030621).
+ improve storage used by _nc_safe_strcpy() by ensuring that the size
  is reset based on the initialization call, in case it were called
  after other strcpy/strcat calls (report by Philippe Blain).
> patch by Philippe Blain:
+ remove an unused ifdef for REAL_ATTR & WANT_CHAR
+ correct a place where _cup_cost was used rather than _cuu_cost

20030809
+ fix a small memory leak in _nc_free_termtype().
+ close trace-file if trace() is called with a zero parameter.
+ free memory allocated for soft-key strings, in delscreen().
+ fix an allocation size in safe_sprintf.c for the "*" format code.
+ correct safe_sprintf.c to not return a null pointer if the format
  happens to be an empty string.
  This applies to the "configure --enable-safe-sprintf" option
  (Redhat #101486).

20030802
+ modify casts used for ABSENT_BOOLEAN and CANCELLED_BOOLEAN (report by
  Daniel Jacobowitz).
> patch by Philippe Blain:
+ change padding for change_scroll_region to not be proportional to
  the size of the scroll-region.
+ correct error-return in _nc_safe_strcat().

20030726
+ correct limit-checks in _nc_scroll_window() (report and test-case by
  Thomas Graf <graf@dms.at> cf: 20011020).
+ re-order configure checks for _XOPEN_SOURCE to avoid conflict with
  _GNU_SOURCE check.

20030719
+ use clr_eol in preference to blanks for bce terminals, so select and
  paste will have fewer trailing blanks, e.g., when using xterm
  (request by Vincent Lefevre).
+ correct prototype for wunctrl() in manpage.
+ add configure --with-abi-version option (discussion with Charles
  Wilson).
> cygwin changes from Charles Wilson:
+ aclocal.m4: on cygwin, use autodetected prefix for import
  and static lib, but use "cyg" for DLL.
+ include/ncurses_dll.h: correct the comments to reflect current
  status of cygwin/mingw port.  Fix compiler warning.
+ misc/run_tic.in: ensure that tic.exe can find the uninstalled
  DLL, by adding the lib-directory to the PATH variable.
+ misc/terminfo.src (nxterm|xterm-color): make xterm-color
  primary instead of nxterm, to match XFree86's xterm.terminfo
  usage and to prevent circular links.
  (rxvt): add additional codes from rxvt.org.
  (rxvt-color): new alias
  (rxvt-xpm): new alias
  (rxvt-cygwin): like rxvt, but with special acsc codes.
  (rxvt-cygwin-native): ditto.  rxvt may be run under XWindows, or
  with a "native" MSWin GUI.  Each takes different acsc codes,
  which are both different from the "normal" rxvt's acsc.
  (cygwin): cygwin-in-cmd.exe window.  Lots of fixes.
  (cygwinDBG): ditto.
+ mk-1st.awk: use "cyg" for the DLL prefix, but "lib" for import
  and static libs.

20030712
+ update config.guess, config.sub
+ add triples for configuring shared libraries with the Debian
  GNU/FreeBSD packages (patch by Robert Millan <zeratul2@wanadoo.es>).

20030705
+ modify CF_GCC_WARNINGS so it only applies to gcc, not g++.  Some
  platforms have installed g++ along with the native C compiler, which
  would not accept gcc warning options.
+ add -D_XOPEN_SOURCE=500 when configuring with --enable-widec, to
  get mbstate_t declaration on HPUX 11.11 (report by David Ellement).
+ add _nc_pathlast() to get rid of casts in _nc_basename() calls.
+ correct a sign-extension in wadd_wch() and wecho_wchar() from
  20030628 (report by Tomohiro Kubota).
+ work around omission of btowc() and wctob() from wide-character
  support (sic) in NetBSD 1.6 using mbtowc() and wctomb() (report by
  Gabor Z Papp).
+ add portability note to curs_get_wstr.3x (Debian #199957).

20030628
+ rewrite wadd_wch() and wecho_wchar() to call waddch() and wechochar()
  respectively, to avoid calling waddch_noecho() with wide-character
  data, since that function assumes its input is 8-bit data.
  Similarly, modify waddnwstr() to call wadd_wch().
+ remove logic from waddnstr() which transformed multibyte character
  strings into wide-characters.  Rewrite of waddch_literal() from
  20030621 assumes its input is raw multibyte data rather than wide
  characters (report by Tomohiro Kubota).

20030621
+ write getyx() and related 2-return macros in terms of getcury(),
  getcurx(), etc.
+ modify waddch_literal() in case an application passes bytes of a
  multibyte character directly to waddch().  In this case, waddch()
  must reassemble the bytes into a wide-character (report by Tomohiro
  Kubota <kubota@debian.org>).
20030614
+ modify waddch_literal() in case a multibyte value occupies more than
  two cells.
+ modify PutAttrChar() to compute the number of character cells that
  are used in multibyte values.  This fixes a problem displaying
  double-width characters (report/test by Mitsuru Chinen
  <mchinen@yamato.ibm.com>).
+ add a null-pointer check for result of keyname() in _tracechar()
+ modify _tracechar() to work around glibc sprintf bug.

20030607
+ add a call to setlocale() in cursesmain.cc, making demo display
  properly in a UTF-8 locale.
+ add a fallback definition in curses.priv.h for MB_LEN_MAX (prompted
  by discussion with Gabor Z Papp).
+ use macros NCURSES_ACS() and NCURSES_WACS() to hide cast needed to
  appease -Wchar-subscript with g++ 3.3 (Debian #195732).
+ fix a redefinition of $RANLIB in the configure script when libtool
  is used, which broke configure on Mac OS X (report by Chris Zubrzycki
  <beren@mac.com>).
+ simplify ifdef for bool declaration in curses.h.in (suggested by
  Albert Chin-A-Young).
+ remove configure script check to allow -Wconversion for older
  versions of gcc (suggested by Albert Chin-A-Young).

20030531
+ regenerated html manpages.
+ modify ifdef's in curses.h.in that disabled use of __attribute__()
  for g++, since recent versions implement the cases which ncurses uses
  (Debian #195230).
+ modify _nc_get_token() to handle a case where an entry has no
  description, and capabilities begin on the same line as the entry
  name.
+ fix a typo in ncurses_dll.h reported by gcc 3.3.
+ add an entry for key_defined.3x to man_db.renames.

20030524
+ modify setcchar() to allow converting control characters to complex
  characters (report/test by Mitsuru Chinen <mchinen@yamato.ibm.com>).
+ add tkterm entry -TD
+ modify parse_entry.c to allow a terminfo entry with a leading
  2-character name (report by Don Libes).
+ corrected acsc in screen.teraterm, which requires a PC-style mapping.
+ fix trace statements in read_entry.c to use lseek() rather than
  tell().
+ fix signed/unsigned warnings from Sun's compiler (gcc should give
  these warnings, but it is unpredictable).
+ modify configure script to omit -Winline for gcc 3.3, since that
  feature is broken.
+ modify manlinks.sed to add a few functions that were overlooked since
  they return function pointers: field_init, field_term, form_init,
  form_term, item_init, item_term, menu_init and menu_term.

20030517
+ prevent recursion in wgetch() via wgetnstr() if the connection cannot
  be switched between cooked/raw modes because it is not a TTY (report
  by Wolfgang Gutjahr <gutw@knapp.com>).
+ change parameter of define_key() and key_defined() to const (prompted
  by Debian #192860).
+ add a check in test/configure for ncurses extensions, since there
  are some older versions, etc., which would not compile with the
  current test programs.
+ corrected demo in test/ncurses.c of wgetn_wstr(), which did not
  convert wchar_t string to multibyte form before printing it.
+ corrections to lib_get_wstr.c:
  + null-terminate buffer passed to setcchar(), which occasionally
    failed.
  + map special characters such as erase- and kill-characters into
    key-codes so those will work as expected even if they are not
    mentioned in the terminfo.
+ modify PUTC() and Charable() macros to make wide-character line
  drawing work for POSIX locale on Linux console (cf: 20021221).

20030510
+ make typography for program options in manpages consistent (report
  by Miloslav Trmac <mitr@volny.cz>).
	+ correct dependencies in Ada95/src/Makefile.in, so the builds with "--srcdir" work (report by Warren L Dodge).
	+ correct missing definition of $(CC) in Ada95/gen/Makefile.in (reported by Warren L Dodge <warrend@mdhost.cse.tek.com>).
	+ fix typos and whitespace in manpages (patch by Jason McIntyre <jmc@prioris.mini.pw.edu.pl>).

20030503
	+ fix form_driver() cases for REQ_CLR_EOF, REQ_CLR_EOL, REQ_DEL_CHAR, REQ_DEL_PREV and REQ_NEW_LINE, which did not ensure the cursor was at the editing position before making modifications.
	+ add test/demo_forms and associated test/edit_field.c demos.
	+ modify test/configure.in to use test/modules for the list of objects to compile rather than using the list of programs.

20030419
	+ modify logic of acsc to use the original character if no mapping is defined, noting that Solaris does this.
	+ modify ncurses 'b' test to avoid using the acs_map[] array since 20021231 changes it to no longer contain information from the acsc string.
	+ modify makefile rules in c++, progs, tack and test to ensure that the compiler flags (e.g., $CFLAGS or $CCFLAGS) are used in the link command (report by Jose Luis Rico Botella <informatica@serpis.com>).
	+ modify soft-key initialization to use A_REVERSE if A_STANDOUT would not be shown when colors are used, i.e., if ncv#1 is set in the terminfo as is done in "screen".

20030412
	+ add a test for slk_color(), in ncurses.c
	+ fix some issues reported by valgrind in the slk_set() and slk_wset() code, from recent rewrite.
	+ modify ncurses 'E' test to use show previous label via slk_label(), as in 'e' test.
	+ modify wide-character versions of NewChar(), NewChar2() macros to ensure that the whole struct is initialized.

20030405
	+ modify setupterm() to check if the terminfo and terminal-modes have already been read.
	  This ensures that it does not reinvoke def_prog_mode() when an application calls more than one function, such as tgetent() and initscr() (report by Olaf Buddenhagen).

20030329
	+ add 'E' test to ncurses.c, to exercise slk_wset().
	+ correct handling of carriage-return in wgetn_wstr(), used in demo of slk_wset().
	+ first draft of slk_wset() function.

20030322
	+ improved warnings in tic when suppressing items to fit in termcap's 1023-byte limit.
	+ built a list in test/README showing which externals are being used by either programs in the test-directory or via internal library calls.
	+ adjust include-options in CF_ETIP_DEFINES to avoid missing ncurses_dll.h, fixing special definitions that may be needed for etip.h (reported by Greg Schafer <gschafer@zip.com.au>).

20030315
	+ minor fixes for cardfile.c, to make it write the updated fields to a file when ^W is given.
	+ add/use _nc_trace_bufcat() to eliminate some fixed buffer limits in trace code.

20030308
	+ correct a case in _nc_remove_string(), used by define_key(), to avoid infinite loop if the given string happens to be a substring of other strings which are assigned to keys (report by John McCutchan).
	+ add key_defined() function, to tell which keycode a string is bound to (discussion with John McCutchan <ttb@tentacle.dhs.org>).
	+ correct keybound(), which reported definitions in the wrong table, i.e., the list of definitions which are disabled by keyok().
	+ modify demo_keydef.c to show the details it changes, and to check for errors.

20030301
	+ restructured test/configure script, make it work for libncursesw.
	+ add description of link_fieldtype() to manpage (report by L Dee Holtsclaw <dee@sunbeltsoft.com>).

20030222
	+ corrected ifdef's relating to configure check for wchar_t, etc.
	+ if the output is a socket or other non-tty device, use 1 millisecond for the cost in mvcur; previously it was 9 milliseconds because the baudrate was not known.
	+ in _nc_get_tty_mode(), initialize the TTY buffer on error, since glibc copies uninitialized data in that case, as noted by valgrind.
	+ modify tput to use the same parameter analysis as tparm() does, to provide for user-defined strings, e.g., for xterm title, a corresponding capability might be title=\E]2;%p1%s^G,
	+ modify MKlib_gen.sh to avoid passing "#" tokens through the C preprocessor. This works around Mac OS X's preprocessor, which insists on adding a blank on each side of the token (report/analysis by Kevin Murphy <murphy@genome.chop.edu>).

20030215
	+ add configure check for wchar_t and wint_t types, rather than rely on preprocessor definitions. Also work around for gcc fixinclude bug which creates a shadow copy of curses.h if it sees these symbols apparently typedef'd.
	+ if database is disabled, do not generate run_tic.sh
	+ minor fixes for memory-leak checking when termcap is read.

20030208
	+ add checking in tic for incomplete line-drawing character mapping.
	+ updated configure script to reflect fix for AC_PROG_GCC_TRADITIONAL, which is broken in autoconf 2.5x for Mac OS X 10.2.3 (report by Gerben Wierda <Sherlock@rna.nl>).
	+ make return value from _nc_printf_string() consistent. Before, depending on whether --enable-safe-sprintf was used, it might not be cached for reallocating.

20030201
	+ minor fixes for memory-leak checking in lib_tparm.c, hardscroll.c
	+ correct a potentially-uninitialized value if _read_termtype() does not read as much data as expected (report by Wolfgang Rohdewald <wr6@uni.de>).
	+ correct several places where the aclocal.m4 macros relied on cache variable names which were incompatible (as usual) between autoconf 2.13 and 2.5x, causing the test for broken-linker to give incorrect results (reports by Gerben Wierda <Sherlock@rna.nl> and Thomas Esser <te@dbs.uni-hannover.de>).
	+ do not try to open gpm mouse driver if standard output is not a tty; the gpm library does not make this check (bug report for dialog by David Oliveira <davidoliveira@develop.prozone.ws>).

20030125
	+ modified emx.src to correspond more closely to terminfo.src, added emx-base to the latter -TD
	+ add configure option for FreeBSD sysmouse, --with-sysmouse, and implement support for that in lib_mouse.c, lib_getch.c

20030118
	+ revert 20030105 change to can_clear_with(), does not work for the case where the update is made on cells which are blanks with attributes, e.g., reverse.
	+ improve ifdef's to guard against redefinition of wchar_t and wint_t in curses.h (report by Urs Jansen).

20030111
	+ improve mvcur() by checking if it is safe to move when video attributes are set (msgr), and if not, reset/restore attributes within that function rather than doing it separately in the GoTo() function in tty_update.c (suggested by Philippe Blain).
	+ add a message in run_tic.in to explain more clearly what does not work when attempting to create a symbolic link for /usr/lib/terminfo on OS/2 and other platforms with no symbolic links (report by John Polterak).
	+ change several sed scripts to avoid using "\+" since it is not a BRE (basic regular expression). One instance caused terminfo.5 to be misformatted on FreeBSD (report by Kazuo Horikawa <horikawa@FreeBSD.org> (see FreeBSD docs/46709)).
	+ correct misspelled 'wint_t' in curs_get_wch.3x (Michael Elkins).
20030105
	+ improve description of terminfo operators, especially static/dynamic variables (comments by Mark I Manning IV <mark4th@earthlink.net>).
	+ demonstrate use of FIELDTYPE by modifying test/ncurses 'r' test to use the predefined TYPE_ALPHA field-type, and by defining a specialized type for the middle initial/name.
	+ fix MKterminfo.sh, another workaround for POSIXLY_CORRECT misfeature of sed 4.0
	> patch by Philippe Blain:
	+ optimize can_clear_with() a little by testing first if the parameter is indeed a "blank".
	+ simplify ClrBottom() a little by allowing it to use clr_eos to clear sections as small as one line.
	+ improve ClrToEOL() by checking if clr_eos is available before trying to use it.
	+ use tputs() rather than putp() in a few cases in tty_update.c since the corresponding delays are proportional to the number of lines affected: repeat_char, clr_eos, change_scroll_region.

20021231
	+ rewrite of lib_acs.c conflicts with copying of SCREEN acs_map to/from global acs_map[] array; removed the lines that did the copying.

20021228
	+ change some overlooked tputs() calls in scrolling code to use putp() (report by Philippe Blain).
	+ modify lib_getch.c to avoid recursion via wgetnstr() when the input is not a tty and consequently mode-changes do not work (report by <R.Chamberlin@querix.com>).
	+ rewrote lib_acs.c to allow PutAttrChar() to decide how to render alternate-characters, i.e., to work with Linux console and UTF-8 locale.
	+ correct line/column reference in adjust_window(), needed to make special windows such as curscr track properly when resizing (report by Lucas Gonze <lgonze@panix.com>).
	> patch by Philippe Blain:
	+ correct the value used for blank in ClrBottom() (broken in 20000708).
	+ correct an off-by-one in GoTo() parameter in _nc_scrolln().
20021221
	+ change several tputs() calls in scrolling code to use putp(), to enable padding which may be needed for some terminals (patch by Philippe Blain).
	+ use '%' as sed substitute delimiter in run_tic script to avoid problems with pathname delimiters such as ':' and '@' (report by John Polterak).
	+ implement a workaround so that line-drawing works with screen's crippled UTF-8 support (tested with 3.9.13). This only works with the wide-character support (--enable-widec); the normal library will simply suppress line-drawing when running in a UTF-8 locale in screen.

20021214
	+ allow BUILD_CC and related configure script variables to be overridden from the environment.
	+ make build-tools variables in ncurses/Makefile.in consistent with the configure script variables (report by Maciej W Rozycki).
	+ modify ncurses/modules to allow configure --disable-leaks --disable-ext-funcs to build (report by Gary Samuelson).
	+ fix a few places in configure.in which lacked quotes (report by Gary Samuelson <gary.samuelson@verizon.com>).
	+ correct handling of multibyte characters in waddch_literal() which force wrapping because they are started too late on the line (report by Sam Varshavchik).
	+ small fix for CF_GNAT_VERSION to ignore the help-message which gnatmake adds to its version-message.
	> Maciej W Rozycki <macro@ds2.pg.gda.pl>:
	+ use AC_CHECK_TOOL to get proper values for AR and LD for cross compiling.
	+ use $cross_compiling variable in configure script rather than comparing $host_alias and $target_alias, since "host" is traditionally misused in autoconf to refer to the target platform.
	+ change configure --help message to use "build" rather than "host" when referring to the --with-build-XXX options.
20021206
	+ modify CF_GNAT_VERSION to print gnatmake's version, and to allow for possible gnat versions such as 3.2 (report by Chris Lingard <chris@stockwith.co.uk>).
	+ modify #define's for CKILL and other default control characters in tset to use the system's default values if they are defined.
	+ correct interchanged defaults for kill and interrupt characters in tset, which caused it to report unnecessarily (Debian #171583).
	+ repair check for missing C++ compiler, which is broken in autoconf 2.5x by hardcoding it to g++ (report by Martin Mokrejs).
	+ update config.guess, config.sub (2002-11-30)
	+ modify configure script to skip --with-shared, etc., when the --with-libtool option is given, since they would be ignored anyway.
	+ fix to allow "configure --with-libtool --with-termlib" to build.
	+ modify configure script to show version number of libtool, to help with bug reports. libtool still gets confused if the installed ncurses libraries are old, since it ignores the -L options at some point (tested with libtool 1.3.3 and 1.4.3).
	+ reorder configure script's updating of $CPPFLAGS and $CFLAGS to prevent -I options in the user's environment from introducing conflicts with the build -I options (may be related to reports by Patrick Ash and George Goffe).
	+ rename test/define_key.c to test/demo_defkey.c, test/keyok.c to test/demo_keyok.c to allow building these with libtool.

20021123
	+ add example program test/define_key.c for define_key().
	+ add example program test/keyok.c for keyok().
	+ add example program test/ins_wide.c for wins_wch() and wins_wstr().
	+ modify wins_wch() and wins_wstr() to interpret tabs by using the winsch() internal function.
	+ modify setcchar() to allow for wchar_t input strings that have more than one spacing character.
20021116
	+ fix a boundary check in lib_insch.c (patch by Philippe Blain).
	+ change type for *printw functions from NCURSES_CONST to const (prompted by comment by Pedro Palhoto Matos <plpm@mega.ist.utl.pt>, but really from a note on X/Open's website stating that either is acceptable, and the latter will be used in a future revision).
	+ add xterm-1002, xterm-1003 terminfo entries to demonstrate changes in lib_mouse.c (20021026) -TD
	+ add screen-bce, screen-s entries from screen 3.9.13 (report by Adam Lazur <zal@debian.org>) -TD
	+ add mterm terminfo entries -TD

20021109
	+ split-out useful fragments in terminfo for vt100 and vt220 numeric keypad, i.e., vt100+keypad, vt100+pfkeys, vt100+fnkeys and vt220+keypad. The last as embedded in various entries had ka3 and kb2 interchanged (report/discussion with Leonard den Ottolander <leonardjo@hetnet.nl>).
	+ add check in tic for keypads consistent with vt100 layout.
	+ improve checks in tic for color capabilities

20021102
	+ check for missing/empty/illegal terminfo name in _nc_read_entry() (report by Martin Mokrejs, where $TERM was set to an empty string).
	+ rewrote lib_insch.c, combining it with lib_insstr.c so both handle tab and other control characters consistently (report by Philippe Blain).
	+ remove an #undef for KEY_EVENT from curses.tail used in the experimental NCURSES_WGETCH_EVENTS feature. The #undef confuses dpkg's build script (Debian #165897).
	+ fix MKlib_gen.sh, working around the ironically named POSIXLY_CORRECT feature of GNU sed 4.0 (reported by Ervin Nemeth <airwin@inf.bme.hu>).

20021026
	+ implement logic in lib_mouse.c to handle position reports which are generated when XFree86 xterm is initialized with private modes 1002 or 1003.
	  These are returned to the application as the REPORT_MOUSE_POSITION mask, which was not implemented. Tested both with ncurses 'a' menu (prompted by discussion with Larry Riedel <Larry@Riedel.org>).
	+ modify lib_mouse.c to look for "XM" terminfo string, which allows one to override the escape sequence used to enable/disable mouse mode. In particular this works for XFree86 xterm private modes 1002 and 1003. If "XM" is missing (note that this is an extended name), lib_mouse uses the conventional private mode 1000.
	+ correct NOT_LOCAL() macro in lib_mvcur.c to refer to screen_columns where it used screen_lines (report by Philippe Blain).
	+ correct makefile rules for the case when both --with-libtool and --with-gpm are given (report by Mr E_T <troll@logi.net.au>).
	+ add note to terminfo manpage regarding the differences between setaf/setab and setf/setb capabilities (report by Pavel Roskin).

20021019
	+ remove redundant initialization of TABSIZE in newterm(), since it is already done in setupterm() (report by Philippe Blain).
	+ add test/inserts.c, to test winnstr() and winsch().
	+ replace 'sort' in dist.mk with script that sets locale to POSIX.
	+ update URLs in announce.html.in (patch by Frederic L W Meunier).
	+ remove glibc add-on files, which are no longer needed (report by Frederic L W Meunier).

20021012	5.3 release for upload to
	+ modify ifdef's in etip.h.in to allow the etip.h header to compile with gcc 3.2 (patch by Dimitar Zhekov <jimmy@is-vn.bg>).
	+ add logic to setupterm() to make it like initscr() and newterm(), by checking for $NCURSES_TRACE environment variable and enabling the debug trace in that case.
	+ modify setupterm() to ensure that it initializes the baudrate, for applications such as tput (report by Frank Henigman).
http://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob;f=NEWS;hb=f86cbeb5f9bd96ab041d34039c35749a14965039
/*
 * Copyright (C) 2004 Sun Microsystems, Inc. All rights reserved. Use is
 * subject to license terms.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the Lesser GNU General Public License as
 * published by the Free Software Foundation; either version 2 of the
 * License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
 * USA.
 */

package org.jdesktop.jdic.desktop.internal.impl;

import java.io.File;
import java.io.IOException;

import org.jdesktop.jdic.desktop.internal.LaunchFailedException;
import org.jdesktop.jdic.desktop.internal.LaunchService;

/**
 * Concrete implementation of the LaunchService interface for Mac OS X.
 *
 * @author Elliott Hughes <enh@acm.org>
 */
public class MacLaunchService implements LaunchService {
    static {
        System.loadLibrary("jdic");
    }

    /**
     * Converts the given filename path to a unique canonical form, which
     * removes redundant names such as `.' or `..' or symbolic links (on UNIX).
     */
    public File resolveLinkFile(File file) {
        File resolvedFile = file;
        try {
            resolvedFile = file.getCanonicalFile();
        } catch (IOException e) {
        }

        return resolvedFile;
    }

    /**
     * Launches the associated application to open the given file.
     *
     * @param file the given file to be opened.
     * @throws LaunchFailedException if the given file has no associated
     *         application, or the associated application fails to be launched.
     */
    public void open(File file) throws LaunchFailedException {
        boolean result = nativeOpenFile(file.toString());
        if (result == false) {
            throw new LaunchFailedException("Failed to launch the associated " +
                "application with the specified file.");
        }
    }

    /**
     * Checks if the given file is editable.
     */
    public boolean isEditable(File file) {
        return false;
    }

    /**
     * Launches the associated editor to edit the given file.
     *
     * @param file the given file to be edited.
     * @throws LaunchFailedException if the given file has no associated editor,
     *         or the associated editor fails to be launched.
     */
    public void edit(File file) throws LaunchFailedException {
        throw new LaunchFailedException("No application associated with the " +
            "specified file and verb.");
    }

    /**
     * Checks if the given file is printable.
     */
    public boolean isPrintable(File file) {
        return true;
    }

    /**
     * Prints the given file.
     *
     * @param file the given file to be printed.
     * @throws LaunchFailedException if the given file has no associated
     *         application, or the associated application fails to be launched.
     */
    public void print(File file) throws LaunchFailedException {
        boolean result = nativePrintFile(file.toString());
        if (result == false) {
            throw new LaunchFailedException("Failed to launch the associated " +
                "application with the specified file.");
        }
    }

    private native boolean nativeOpenFile(String filePath);
    private native boolean nativePrintFile(String filePath);
}
http://kickjava.com/src/org/jdesktop/jdic/desktop/internal/impl/MacLaunchService.java.htm
On 20.07.2008, at 19:34, Sho Fukamachi wrote: > as a fellow newbie embarked on a CouchDB crash course I hope I can > > Can you access Futon, the couchdb web interface? If you can, create > a database $MY_DB and switch to it. > > Choose a namespace for your view ($VIEW) (ie just make something > up!). Then create a new document in that database with the name > "_design/$VIEW". > > Futon should now switch to your new document. Create a field in it > called "views". Save. > > Futon is a bit weird about how it presents text editors for fields, > so first put "{}" into the views field and save. Then, double click > it again and you should have a nice big text window. > > Now copy this into the views field: > > { > "get_all": { > "map": "function(doc) { emit(null, doc); }" > } > } > > And save. Actually, it's a bit simpler than that :) 1. If you don't already have a database, create one. 2. Switch the "Custom Query" in the view dropdown at the top right. 3. Type in your query and test/refine it until it does what you want it to do. 4. Click the "Save As" button at the bottom right. 5. In the dialog, enter the design doc ID and the name of the view. 6. You're done. Cheers, -- Christopher Lenz cmlenz at gmx.de
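The steps above go through Futon, but the same design document can also be created directly against CouchDB's HTTP API by PUT-ing a JSON body to /<db>/_design/<name>. A minimal Python sketch that only builds the request body and target URL (the database name `mydb` and design-doc name `myviews` are made-up placeholders, and no server is contacted here):

```python
import json

# The design document the email builds in Futon, assembled programmatically.
design_doc = {
    "views": {
        "get_all": {
            "map": "function(doc) { emit(null, doc); }"
        }
    }
}

# Hypothetical target of the HTTP PUT (local server, made-up names).
url = "http://127.0.0.1:5984/mydb/_design/myviews"
body = json.dumps(design_doc)
print(url)
print(body)
```

An HTTP client would then create the view in one request (e.g. `curl -X PUT` with this body), after which `GET .../_design/myviews/_view/get_all` runs the map function over every document.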
http://mail-archives.apache.org/mod_mbox/couchdb-user/200807.mbox/%3CE4E5C084-D4B3-41E4-9C59-511095775AC6@gmx.de%3E
A Simple Thread Pooling Approach

Introduction

Over the last year or two, processor speeds appear to have peaked. 4 GHz, which was expected last year, has still not made it out of the labs. In fact, the trend now appears to be towards doing more work in parallel, as seen in multi-core processors, rather than speeding up processors. However, multi-core processors do not translate directly into performance boosts for all software applications. Rather, multi-core, multi-processor systems would favour well-behaved, multi-threaded applications. With this scenario in mind, applications in the future would need to make use of threads (and fibers) intelligently to make optimal use of available resources.

A Quick Look at Threading

To create a thread, Windows provides the CreateThread function that creates a thread to execute within the virtual address space of the calling process, based on the LPTHREAD_START_ROUTINE parameter, which is the starting address of the thread, around which the thread's stack is created. This is a rather expensive call because a thread is a Kernel object, and CreateThread needs to drop into Kernel mode (briefly) and back into User mode, and requires a significant amount of memory (the default stack size is 1 Mb per thread). So, even though you may need to parallelize various activities within your application, doing so in an ad-hoc manner by creating threads at whim would actually degrade your application performance.

Thread Pooling

Because creating a thread is expensive, the obvious solution is to create threads once and reuse them. In other words: Thread pooling. To achieve this, the thread function—in other words, the LPTHREAD_START_ROUTINE parameter—would need to be a generic function that would service queued requests. So ideally, an application would create a pool of threads, based on the number of processors available.
And then, based on the work to be performed, it would divide the work into various discrete activities or jobs, and parcel them out to the various threads in the queue. This parcelling out would be based on interdependencies among the jobs as well as the load on each individual thread in the pool.

Analyzing the interdependencies among various jobs, and coordinating them, is an extremely tricky task and frequently error prone. To further compound these problems, there are very few modelling tools or techniques that one can apply to solve them. Whereas Erlang has made significant steps towards parallelization, there is still a lot of work to be done, and I am most certainly not qualified to advise the reader on how to solve his/her application-specific problems. However, the thread pooling mechanism is a rather generic piece of code, and there are various sophisticated approaches to implementing it. The easiest approach (which this article illustrates) would make use of the facilities Windows offers—such as messages and message queues. This approach would require wrapping a thread in a shallow wrapper class. The thread would be based on a static member function of this class, which then would run a message loop. Any jobs to be submitted to this thread would be posted as a message to this thread. Whenever the thread receives a message, it would break up the message received into certain pre-specified parts to get the job and the data, and then invoke the job.

The Approach in Depth

Now that you have looked at the need for pooling as well as had a bird's eye view of the implementation, take a deeper look at the implementation. The zip file with this article contains a ThreadPool project (that compiles into a static lib, with no MFC dependencies) and a Test stub (an MFC-based dialog application). The ThreadPool lib contains three classes:

- CThreadPoolMgr: This class manages the creation, maintenance, and freeing of the thread pool.
- CWorkerThread: This class wraps a Win32 Thread. It has a static function that forms the base of the thread.
- CJob: This represents a job that needs to be performed on any thread in the pool.

From the application lifecycle perspective, the interaction among the instances of these classes is as follows:

- Startup: The CThreadPoolMgr creates a vector of CWorkerThread (one per thread in the pool). In the constructor of the CWorkerThread, CreateThread is invoked with the static method ThreadFunc as the LPTHREAD_START_ROUTINE.
- Submitting a job: Whenever a client wants to run a job, they need to create a CJob object and pass it to the CThreadPoolMgr. The CJob contains a function pointer that corresponds to the job, the parameters for the job, and a notification function pointer, to be called when the function completes. When the client submits a job to the CThreadPoolMgr, the CThreadPoolMgr locates a suitable thread from the pool, and then posts the CJob pointer to the thread. As the thread is running a message loop, it receives the CJob pointer as part of the MSG structure, which it then proceeds to unpack. After unpacking, it first calls the job function, and then the notification function.
- Shutdown: The CThreadPoolMgr iterates over the vector of threads, posting a WM_QUIT message to each of the threads and then waits on it. When a thread finishes processing all its jobs, it then would reach the WM_QUIT message, and return (that is, incidentally, the only clean way for a thread to terminate—TerminateThread, EndThread, and so forth—are not clean ways to terminate a thread).

Notes

The ThreadPool is NOT production-grade code because:

- It does not have reliable error handling/propagating mechanisms.
- It assumes all jobs are equal. No mechanism for weighting/prioritizing jobs is provided.
- The pool size is static. It should be dynamic, based on the number of jobs, processors, processor free time, and so on.
- The pool manager cannot clear jobs from the queue.
- There are no reporting mechanisms/statistics for jobs in each thread's queue.
- It does not support job cancellation directly.
- It is not object oriented. (Ideally, functors should be used for jobs as well as notifiers. I have retained them as function pointers to make it easy to "plug and play" your existing multi-threaded code.)
- It has been kept simple and easy to read rather than reliable/bug free because it is for demonstration purposes only. (The fact that I am lazy has nothing to do with it whatsoever, of course.)

Also, take a look at Herb Sutter's great article on concurrency available online at.
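The lifecycle the article describes (startup, posting jobs, sentinel-based shutdown) is not Win32-specific. Below is a minimal Python sketch of the same design, offered as an illustration rather than a port of the article's ThreadPool code: a single shared queue.Queue stands in for the per-thread Win32 message queues, and a None sentinel plays the role of WM_QUIT. The class names echo the article's, but the code is hypothetical.

```python
import queue
import threading

class Job:
    """A unit of work plus an optional completion callback (cf. CJob)."""
    def __init__(self, func, args=(), notify=None):
        self.func = func
        self.args = args
        self.notify = notify

class ThreadPoolMgr:
    """Creates, feeds, and drains a fixed-size pool (cf. CThreadPoolMgr)."""
    def __init__(self, size):
        self._queue = queue.Queue()
        self._threads = [threading.Thread(target=self._message_loop)
                         for _ in range(size)]
        for t in self._threads:
            t.start()

    def _message_loop(self):
        # The worker's "message loop": block for a job, run it, notify, repeat.
        while True:
            job = self._queue.get()
            if job is None:            # analogue of receiving WM_QUIT
                break
            result = job.func(*job.args)
            if job.notify is not None:
                job.notify(result)

    def submit(self, job):
        self._queue.put(job)           # analogue of posting a message

    def shutdown(self):
        for _ in self._threads:        # one "quit message" per worker
            self._queue.put(None)
        for t in self._threads:
            t.join()

results = []
pool = ThreadPoolMgr(size=4)
for i in range(10):
    pool.submit(Job(lambda x: x * x, args=(i,), notify=results.append))
pool.shutdown()
print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

One design difference worth noting: because all workers here pull from one shared queue, load balancing is automatic, whereas the article's per-thread message queues require the pool manager to pick a suitable thread itself.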
http://www.codeguru.com/cpp/misc/misc/threadsprocesses/article.php/c10977/A-Simple-Thread-Pooling-Approach.htm
#include <wx/dataview.h>

wxDataViewItem is a small opaque class that represents an item in a wxDataViewCtrl in a persistent way, i.e. independent of the position of the item in the control or changes to its contents. It must hold a unique ID of type void* in its only field and can be converted to and from it.

If the ID is NULL the wxDataViewItem is invalid and wxDataViewItem::IsOk will return false, which is used in many places in the API of wxDataViewCtrl to indicate that e.g. no item was found. An ID of NULL is also used to indicate the invisible root. Examples for this are wxDataViewModel::GetParent and wxDataViewModel::GetChildren.

Members:

wxDataViewItem(): Constructor.
wxDataViewItem(const wxDataViewItem& item): Constructor.
wxDataViewItem(void* id): Constructor.
GetID(): Returns the ID.
IsOk(): Returns true if the ID is not NULL.
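For readers outside C++, the opaque-ID contract described above (a single void*-like field, with NULL meaning "invalid") is easy to mimic. A tiny illustrative Python analogue, not wx code; the names get_id/is_ok merely mirror GetID/IsOk:

```python
class Item:
    """Opaque item handle: wraps a single ID; invalid when the ID is None."""
    def __init__(self, item_id=None):
        self._id = item_id

    def get_id(self):
        # cf. wxDataViewItem::GetID
        return self._id

    def is_ok(self):
        # cf. wxDataViewItem::IsOk
        return self._id is not None

root = Item()          # an ID of None marks the invalid item / invisible root
child = Item(0x1234)
print(root.is_ok(), child.is_ok())  # False True
```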
https://docs.wxwidgets.org/trunk/classwx_data_view_item.html
So a pretty weird problem. I’m doing a data engineering bootcamp and have chosen streamlit to present my final project. I’ve set up an etl through aws which then pops all my data nicely into snowflake. I’ve run through the snowflake tutorial, and uploaded to the cloud, just using the tutorial code with uploaded secrets… It works perfectly fine on the cloud, but locally the app says the query execution completes, but then it just immediately closes streamlit, doesn’t display the query result on the app or in terminal (where I’ve added print statements) Again this is just the tutorial code, sure I’ve tried a multitude of things since to try and get it to work but nothing seems to change. It’s not my credentials as making a change causes it to error, to be clear there is no error here, the app just stops after is says it’s made the query, but again it works perfectly fine on the cloud and now I’m so confused can someone help? Running init_connection(). # THIS COMPLETES Running run_query(…). # GETS STUCK HERE TERMINAL UserWarning: You have an incompatible version of ‘pyarrow’ installed (8.0.0), please install a version that adheres to: ‘pyarrow<6.1.0,>=6.0.0; extra == “pandas”’ warn_incompatible_dep( 2022-07-06 15:59:07.791 Snowflake Connector for Python Version: 2.7.9, Python Version: 3.10.3, Platform: Windows-10-10.0.22000-SP0 2022-07-06 15:59:07.792 This connection is in OCSP Fail Open Mode. TLS Certificates would be checked for validity and revocation status. Any other Certificate Revocation related exceptions or OCSP Responder failures would be disregarded in favor of connectivity. 2022-07-06 15:59:07.792 Setting use_openssl_only mode to False 2022-07-06 15:59:08.909 query: [SELECT * from redshift_customerdata;] 2022-07-06 15:59:09.083 query execution done (venv) PS C:\Users\robfa\Downloads\SSS> This is all happening in like 2 second and the local app is just stuck saying Running run_query(...) . 
I've tried the exact tutorial code, I've tried making changes, I've tried removing singleton and memo. I've tried so much, and absolutely nothing stops it from saying query execution done and then just closing. Here is the code for reference.

app_dashboard.py:

```python
import streamlit as st
import snowflake.connector


@st.experimental_singleton
def init_connection():
    return snowflake.connector.connect(**st.secrets["snowflake"])


conn = init_connection()


@st.experimental_memo(ttl=600)
def run_query(query):
    with conn.cursor() as cur:
        cur.execute(query)
        return cur.fetchall()


def run():
    st.write("Let gooooo")
    rows = run_query("SELECT * from redshift_customerdata;")
    for row in rows:
        st.write(f"{row[0]} has a :{row[1]}:")
        print(row)
    print(rows)


run()
```

Seriously appreciate any help I can get. I've looked around but couldn't find anything similar... unless I'm being super dumb, but it's so weird as it works perfectly fine on the cloud.
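For what it's worth, the `@st.experimental_memo(ttl=600)` decorator in the code above is essentially a time-bounded cache around `run_query`. A rough plain-Python sketch of that behaviour (the decorator name and internals here are illustrative, not Streamlit's actual implementation):

```python
import time

def memo_with_ttl(ttl):
    """Illustrative sketch of a TTL-bounded memoizer, like st.experimental_memo(ttl=...)."""
    def decorator(fn):
        cache = {}
        def wrapper(*args):
            now = time.time()
            if args in cache:
                value, stored_at = cache[args]
                if now - stored_at < ttl:
                    return value  # fresh enough: served from cache
            value = fn(*args)
            cache[args] = (value, now)
            return value
        return wrapper
    return decorator

calls = []

@memo_with_ttl(ttl=600)
def run_query(query):
    calls.append(query)           # record real executions
    return [("row", query)]

run_query("SELECT 1")
run_query("SELECT 1")
assert calls == ["SELECT 1"]      # second call served from cache
```

This is only meant to clarify what the decorator contributes; the report above says removing it made no difference.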
https://discuss.streamlit.io/t/fixed-app-doesnt-run-locally-does-run-on-cloud-using-snowflake-tutorial/27422
CC-MAIN-2022-33
refinedweb
491
57.77
Closed Bug 493232 Opened 11 years ago Closed 11 years ago

Wrong variable value accessed in closure

Categories (Core :: JavaScript Engine, defect)
Tracking ()
People (Reporter: mossop, Assigned: jorendorff)
Details (Keywords: fixed1.9.1, regression, Whiteboard: fixed-in-tracemonkey)
Attachments (1 file, 1 obsolete file)

Crappy summary, but I'm not really sure how to describe this. See the following page in Shiretoko: When you click on any of the markers, the info for Hamburg pops open. If you do the same on a Minefield build, then the correct info pops open for each. Gran Paradiso also only shows the Hamburg info all the time, so I guess this is a bug on trunk, but I'm not sure what is wrong with the JS. I basically have the following:

```js
for (place in uniques) {
    var location = uniques[place];
    var marker = new GMarker(...);
    GEvent.addEventListener(marker, "click", function() { ... });
}
```

Within the anonymous function, the values of location and marker look to always be the last values they had during the last iteration of the loop. I would expect them to have the values that they had at the time the function was created.

Looks like maybe a merge has happened, so now Shiretoko is behaving the same as Minefield. Which means this behaviour will change between 3.0 and 3.5, and so we should figure out whether it was actually correct before or after, before we release.

Flags: blocking1.9.1?
Keywords: regression

I expected the Gran Paradiso behaviour: each of those anonymous functions closes over the same scope, and so will see the subsequent updates to marker and location. Is this a desired ES5 change, or a regression? blocking to find out!

Flags: blocking1.9.1? → blocking1.9.1+?

What about tracemonkey tip? /be

Assignee: general → brendan
Status: NEW → ASSIGNED
OS: Mac OS X → All
Hardware: x86 → All

(In reply to comment #3)
> ?

To make that hopefully a bit clearer: when I filed the bug, 3.0.x and 3.5 were behaving correctly. 3.6 was broken. Now both 3.5 and 3.6 are broken, so a patch landed on the 1.9.1 branch between the 15th and 26th that broke it. We just need to narrow that down.

When you say "behaving correctly", do you mean that they see the same values, or that they see different values?

(In reply to comment #5)
> When you say "behaving correctly", do you mean that they see the same values,
> or that they see different values?

I mean correctly based on your expectation in comment 2, i.e. the same as Gran Paradiso. I'll have a narrower range in a few minutes anyway.

The regression on 1.9.1 occurred in this range: Toggling the jit pref doesn't change this.

```js
(function () {
    var funs = [];
    for (var i = 0; i < 2; i++) {
        var v = i;
        funs[i] = function() { return v; };
    }
    assertEq(funs[0](), 3);
}());
```

We make a flat closure for the lambda, I guess because v is only assigned once (oops).

Assignee: brendan → jorendorff

Dream-JS has Scheme-ish bindings. Meanwhile back in reality... Talked to jorendorff on IRC, straightforward conservative fix in sight. /be

The conservative fix will be to deoptimize functions that appear inside loops. I need to hack a bit more, probably adding a bit to JSTreeContext. ETA tomorrow morning. (Note that the assertEq in comment 9 should also say 2, not 3.)

Heh, did I say 2? How about 1, would you believe 1? Here are the really-correct-I-promise tests.

```js
(function () {
    var funs = [];
    for (var i = 0; i < 2; i++) {
        var v = i;
        funs[i] = function() { return v; };
    }
    assertEq(funs[0](), 1);
})();

(function() {
    var funs = [];
    for (var i = 0; i < 2; i++) {
        var v = i;
        funs[i] = (function() { return function () { return v; }; })();
    }
    assertEq(funs[0](), 1);
})();
```

Comment on attachment 379906 [details] [diff] [review] v1

Does the assertion in NoteLValue need to be adjusted?

```c
if (dn->frameLevel() != tc->staticLevel) {
    /*
     * The above condition takes advantage of the all-ones nature of
     * FREE_UPVAR_COOKIE, and the reserved frame level JS_BITMASK(16).
     * We make a stronger assertion by excluding FREE_UPVAR_COOKIE.
     */
    JS_ASSERT_IF(dn->pn_cookie != FREE_UPVAR_COOKIE,
                 dn->frameLevel() < tc->staticLevel);
    tc->flags |= TCF_FUN_SETS_OUTER_NAME;
}
```

Look out for places where the fact that FREE_UPVAR_COOKIE is > any valid frame level is employed to optimize tests. The check in SetStaticLevel uses >= FREE_STATIC_LEVEL, so that's cool. Please patch or file a followup bug if BumpStaticLevel could overflow //in extremis//. /be

So does that mean that this function will now fall off trace? Or is that not affected?

(In reply to comment #11)
> .

Only if the upvar crosses a funbox with inLoop true, where inLoop is set based only on that funbox's tree context. IOW, avoid the outer loop in newFunctionBox, and make use of such a "local inLoop" in the dominance analysis.

BumpStaticLevel needs protection. Something like:

```js
eval(Array((1<<14)-2).join("(function(){") + "return (i for (i in g));" + Array((1<<14)-2).join("}())"))
```

should cause BumpStaticLevel to increment the static level to equal FREE_STATIC_LEVEL as revised by this bug's patch. Pre-existing bug, sorry -- fix or file separately, either's good. /be

To answer bz's comment 14 question, the example given should cause deoptimization to use heavyweight functions, which dmandelin is working on tracing. /be

Stack checking and our recursive-descent parser team up to prevent function nesting from getting anywhere near 1<<14, it turns out. On my machine, in a DEBUG build, we get as far as 166 levels of nesting before throwing InternalError. Fixed anyway, I think, but it's impossible to test.

Attachment #379906 - Attachment is obsolete: true
Attachment #379951 - Flags: review?(brendan)

(In reply to comment #17)
> Created an attachment (id=379951) [details]
> v2
>
> Stack checking and our recursive-descent parser team up to prevent function
> nesting from getting anywhere near 1<<14, it turns out.

Stack limit is configurable.

> On my machine, in a
> DEBUG build, we get as far as 166 levels of nesting before throwing
> InternalError.

Cool, thanks for checking. Could certainly split cookies differently based on this -- 8 bit skip/level + 24 bit slot.

> Fixed anyway, I think, but it's impossible to test.

Try jacking the stack limit? Reviewing now. /be

Comment on attachment 379951 [details] [diff] [review] v2

```
>     while (afunbox->level != lexdepLevel) {
>+        if (afunbox->inLoop)
>+            goto break2;
```

Could comment briefly about closure evaluating each time through the loop but capturing only the scope chain objects, so the value of the dominating var will be whatever ends up there on the last iteration. Give a brief example?

```
>+         * Assert but check anyway, to check future changes
>+         * that bind eval upvars in the parser.
```

(Thanks for trimming this comment!)

```
>          */
>         JS_ASSERT(afunbox);
>
>         /*
>          * If this function is reaching up across an
>          * enclosing funarg, we cannot make a flat
>          * closure. The display stops working once the
>          * funarg escapes.
>          */
>         if (!afunbox || afunbox->node->isFunArg())
>             goto break2;
>     }
>+    if (afunbox->inLoop)
>+        goto break2;
```

This could just be a break, but perhaps clearer to goto break2 again. Too bad the afunbox->inLoop test has to be repeated. One idea to avoid that, at the price of a redundant test (which might be optimized away), is to add !afunbox->inLoop to the while loop's condition. Then this would be the common code both for the trivial break/goto break2, and for the comment with example. YMMV, just wanted to throw the idea out.

```
>-        return NULL;
>+        return false;
```

Yikes -- thanks for fixing these two. Oh well, 0 is 0.

```
>     uint32          queued:1,
>-                    level:15,
>+                    inLoop:1,   /* in a loop in parent function */
>+                    level:JSFB_LEVEL_BITS,
```

Uber-nit: indent the comment more to separate the columnation of it and any future right-hand-side-of-line comments from the ending column of the longest member declarator (in this case the JSFB_LEVEL_BITS bitfield size).

r=me with above considered. Thanks, /be

Brendan agreed on IRC that we don't need to check afunbox->inLoop inside the loop at all; the new comment in the changeset I pushed explains.

Whiteboard: fixed-in-tracemonkey
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
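The flat-closure behaviour debated in this bug has a direct analogue outside JavaScript: Python closures also capture variables rather than values, so the bug's test case and the conventional per-iteration fix can be sketched as:

```python
# All lambdas close over the same loop-scoped variable v, so after the loop
# every closure sees its final value (the "Gran Paradiso" behaviour).
funs = []
for i in range(2):
    v = i
    funs.append(lambda: v)

assert [f() for f in funs] == [1, 1]

# Binding the current value as a default argument captures it per iteration.
fixed = []
for i in range(2):
    fixed.append(lambda v=i: v)

assert [f() for f in fixed] == [0, 1]
```

The second form plays the same role as the immediately-invoked wrapper function in the JS tests quoted above.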
https://bugzilla.mozilla.org/show_bug.cgi?id=493232
CC-MAIN-2020-29
refinedweb
1,337
66.44
Hi,

While I tried to use view.substr() to extract Japanese characters (UTF-8) from a buffer, it didn't work. Is it possible to handle this correctly?

Text in the buffer: あいうえお

I tried to use view.substr() in the console:

```python
print view.substr(sublime.Region(0,2))
```

and then codecs.py raised error messages in the console.

It works on Windows. Looks like the encoding used for the console is ASCII, which means that you can't print these characters. So the command works fine, but you can't print the result to the console. Try to type only:

```python
view.substr(sublime.Region(0,2))
```

Don't know how to change the encoding in OS X.

Thanks for your prompt reply. Yes, you may be right, because there was no problem using substr() without 'print', and my environment is OS X actually.

However, this leads me to another question about handling UTF-8 characters in Python. It seems NOT to handle UTF-8 characters in the webbrowser module, same as the console. The following code fails to Google a query such as 寿司 (sushi):

```python
# import webbrowser
webbrowser.open_new_tab('寿司')
```

Given that Python should be able to handle UTF-8 along with the following statements, is there any solution to handle UTF-8 correctly even in a plugin using the webbrowser module of Sublime Text 2/3?

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
```

Thank you,

You must give the source an encoding using the header (like your example) AND use a unicode string for the URL by prefixing it with u:

```python
# -*- coding: utf-8 -*-
import sublime, sublime_plugin
import webbrowser

class ExampleCommand(sublime_plugin.WindowCommand):
    def run(self):
        webbrowser.open_new_tab(u'寿司')
```

Thanks again, bizoo. Now I suspect that this problem might come up ONLY on OS X, because:

a. My original code works all right on Windows, even though it doesn't work on OS X
b. The sample code you provided doesn't work on OS X either

The difference in Python's behavior might come from the implementation of the Python interpreter. Only the OS X version uses the system Python.
Is there any workaround for this? Any ideas?

URLs need to be escaped, and typically need to be encoded in UTF-8. The following worked for me on OS X:

```python
# -*- coding: utf-8 -*-
import sublime, sublime_plugin
import webbrowser
import urllib

class ExampleCommand(sublime_plugin.WindowCommand):
    def run(self):
        quoted = urllib.quote_plus(u'寿司'.encode('utf-8'))
        webbrowser.open_new_tab('' + quoted)
```

Thanks sapphirehamster. Now I have a clear understanding of that, and I can close the problem!

Thanks again, sapphirehamster, bizoo.
Kind regards,
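On Python 3 (which newer Sublime Text plugin hosts use), the same escaping lives in urllib.parse, and quote_plus encodes to UTF-8 by default before percent-escaping. A small sketch; the search URL here is illustrative:

```python
from urllib.parse import quote_plus

# quote_plus encodes the string to UTF-8 bytes, then percent-escapes them.
quoted = quote_plus('寿司')
url = 'https://www.google.com/search?q=' + quoted
print(url)
```

So the explicit `.encode('utf-8')` step from the Python 2 answer above becomes unnecessary.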
https://forum.sublimetext.com/t/how-can-i-use-utf-8-codes-in-a-buffer-with-view-substr/8967/7
CC-MAIN-2016-30
refinedweb
433
58.69
For TinyMCE 5: See how to add TinyMCE 5 to a simple React project. After the roaring success of our Angular 2 and TinyMCE article, I am back to show you how to create a similar functioning piece of code. This time, using the React library. If you have already read the Angular 2 version, most of this will be familiar territory. Even so, I hope you will still have fun following along. Prerequisites This will not be a particularly advanced blog post, technically speaking, but I do expect you to have a working understanding of: - JavaScript (including some of the newer, es2015 features) - React - How to use the terminal/command line I also assume that you have Node and npm installed on your computer. We will write our code with the newest ES2015 syntax, which will be transpiled down to more browser friendly JavaScript by Babel in our build step. If all of this sounds scary and complicated, it really isn’t. Setting up Setting up a new project can be a very daunting process. Just as in the Angular 2 post, we will be using a marvelous tool that takes care of most setup for us. Meet create-react-app. As you will soon see, using create-react-app makes the first setup process extremely easy and quick, but using create-react-app is optional to follow along in this tutorial. If you already have a project setup that you want to use you should be able to follow along with this guide – although you might have to solve some ‘gotchas’ and problems on your own that create-react-app takes care of otherwise. Anyway, to start off we will install the create-react-app tool with the following command: npm install create-react-app --global After some installing, you now have the create-react-app command available in your terminal, so let’s start using it! 
Creating the project

Using create-react-app, we will create a new project with the following command:

```
create-react-app tiny-react
```

Where tiny-react is simply the name of the folder the project will be generated into. You could also just write `create-react-app .` (Note: That's a single period at the end, to generate the project in the current directory.)

After the project has been generated and all the dependencies have been installed, just cd into the project directory:

```
cd tiny-react
```

If you are using git for your version control (if not... why?) this might also be a perfect time to initialize a repository in the project directory and commit all the files, so you have something to reset to if you mess something up later on.

```
git init
git add --all
git commit -m 'initial commit'
```

Installing TinyMCE

The next step is to install TinyMCE into our project, which we simply do with npm and the following command:

```
npm install tinymce --save
```

After that has finished installing we are ready to start writing some code, so open up the project directory in your IDE/editor of choice.

Getting the skin

For the TinyMCE editor to work it needs a skin, which simply consists of font and CSS files used by the editor. The most straightforward way to get these files to the correct place in a project generated by create-react-app is to copy them from the tinymce directory in node_modules to the public directory in the root of your project (it was created by create-react-app). Anything in this public directory will be moved to the build directory by the build step provided by create-react-app, and an environment variable called PUBLIC_URL is available to get the absolute URL to the public files (the environment variable is replaced by the build step and won't show up in the built code).
How you do the copying is up to you: either manually using the file explorer/Finder, or with a terminal command looking something like this.

macOS and Linux:

```
cp -r node_modules/tinymce/skins public/skins
```

Windows:

```
xcopy /I /E node_modules\tinymce\skins public\skins
```

Then later, when initializing a TinyMCE instance, just add the skin_url setting with the URL like this:

```js
tinymce.init({
  // other settings...
  skin_url: `${process.env.PUBLIC_URL}/skins/lightgray`,
  // ... more settings
});
```

create-react-app will then take care of all the other stuff for you, making sure it works in both development and production.

Creating a simple TinyEditorComponent

Now, let's finally get started writing some code! Create a new directory in the src directory called components, and create a file called TinyEditorComponent.js in that directory. In this file, let's start by creating a simple component that only returns a paragraph with the text "Hello World" in it, with the code looking something like this:

```js
import React, { Component } from "react";

class TinyEditorComponent extends Component {
  render() {
    return <p>Hello World</p>;
  }
}

export default TinyEditorComponent;
```

And then add the component to the App.js file in the source directory, changing the code to look something like this:

```js
        </h2>
        </div>
        <TinyEditorComponent />
      </div>
    );
  }
}

export default App;
```

What we are doing here is simply importing the TinyEditorComponent into the App.js file and then adding it to our app with the JSX syntax. If you start the development server with npm start now, you should see the text "Hello World" underneath the spinning React logo. Success! Now let's get on with adding tinymce to the mix.

More code

I will start by adding all of the code in the TinyEditorComponent first and then go through it bit by bit underneath.
TinyEditorComponent.js

```js
import React, { Component } from "react";
import tinymce from "tinymce";
import "tinymce/themes/modern";
import "tinymce/plugins/wordcount";
import "tinymce/plugins/table";

class TinyEditorComponent extends Component {
  constructor() {
    super();
    this.state = { editor: null };
  }

  componentDidMount() {
    tinymce.init({
      selector: `#${this.props.id}`,
      skin_url: `${process.env.PUBLIC_URL}/skins/lightgray`,
      plugins: "wordcount table",
      setup: (editor) => {
        this.setState({ editor });
        editor.on("keyup change", () => {
          const content = editor.getContent();
          this.props.onEditorChange(content);
        });
      },
    });
  }

  componentWillUnmount() {
    tinymce.remove(this.state.editor);
  }

  render() {
    return (
      <textarea
        id={this.props.id}
        value={this.props.content}
        onChange={(e) => console.log(e)}
      />
    );
  }
}

export default TinyEditorComponent;
```

If we start with the beginning, we simply have the import statements:

```js
import React, { Component } from "react";
import tinymce from "tinymce";
import "tinymce/themes/modern";
import "tinymce/plugins/wordcount";
import "tinymce/plugins/table";
```

First, we import React (and destructure out Component for a cleaner class syntax later), then tinymce. A theme is also needed for the editor to work, so I import the modern theme underneath. Any plugin that you want to include will also have to be imported similarly. In this example, I am including the Word Count and Table plugins.

Next, let's jump into the component class itself and start with the constructor:

```js
constructor() {
  super();
  this.state = { editor: null };
}
```

Not that much to talk about here; we are only setting up the initial state to null to be as clear as possible with the fact that this component will be stateful. I think you could just skip the constructor and set the state anyway, but let's be as straightforward with the state as possible.
The next method is the componentDidMount lifecycle hook:

```js
componentDidMount() {
  tinymce.init({
    selector: `#${this.props.id}`,
    skin_url: `${process.env.PUBLIC_URL}/skins/lightgray`,
    plugins: 'wordcount table',
    setup: editor => {
      this.setState({ editor });
      editor.on('keyup change', () => {
        const content = editor.getContent();
        this.props.onEditorChange(content);
      });
    }
  });
}
```

Here, we initialize the TinyMCE editor, with the selector set to the id prop that is passed down from the parent component. The skin_url setting is set to the public folder with the environment variable as previously explained, the two plugins that we imported are added to the plugins setting, and the setup function saves a reference to the TinyMCE editor in the component state for later cleanup purposes. We also add event handlers for the keyup and change events in the editor that call the onEditorChange prop callback function, passed in from the parent component, with the editor content.

Next up is the componentWillUnmount lifecycle hook:

```js
componentWillUnmount() {
  tinymce.remove(this.state.editor);
}
```

This is pretty self-explanatory. It simply removes the editor when the component is unmounted (removed from the page), using the reference to the editor instance we saved in the component's state.

Last but not least, we have the render function:

```js
render() {
  return (
    <textarea id={this.props.id} defaultValue={this.props.content} />
  );
}
```

This shouldn't take that much explanation either. We simply render a textarea with the id set to the id prop sent in from the parent component. We also have the defaultValue set to the content prop, making it possible to initialize the editor with content in it.
If we now go back to the App.js file and change the TinyEditorComponent JSX to add some props like this: <TinyEditorComponent id="myCoolEditor" onEditorChange={(content) => console.log(content)} />; Save and start up the development server again (or just wait for it to automatically reload if you left it running), and you should see a TinyMCE editor underneath the spinning React logo. Type something into the editor and you should see it being logged out in the browser console. Yay! Wrapping up Hopefully, you found this blog post valuable and entertaining. I also hope it is leaving you with some kind of hunger to start using TinyMCE in your future React applications. If you’re building something for a customer, consider signing up for our Cloud Developer plan to get more out of TinyMCE. If you have questions, feel free to comment below or post in our Community forum. Have fun hacking! Bonus material Tired of the spinning React logo on the standard page that create-react-att generated? Open up the logo.svg file in the src directory and replace its contents with the following: <svg viewBox="0 0 225.000000 199.000000"> <g transform="translate(0.000000,199.000000) scale(0.100000,-0.100000)" fill="#4775ce"> <path d="M551 1416 l-552 -552 31 -29 31 -29 62 62 62 62 460 -460 460 -460 559 552 558 552 -26 23 c-15 13 -32 23 -37 23 -6 0 -34 -23 -64 -51 -30 -28 -58 -49 -64 -47 -5 1 -216 206 -468 455 l-460 452 -552 -553z m984 -856 l-430 -430 -430 430 -430 430 430 430 430 430 430 -430 430 -430 -430 -430z"/> <path d="M900 1250 l0 -40 220 0 220 0 0 40 0 40 -220 0 -220 0 0 -40z"/> <path d="M740 1070 l0 -40 370 0 370 0 0 40 0 40 -370 0 -370 0 0 -40z"/> <path d="M740 890 l0 -40 370 0 370 0 0 40 0 40 -370 0 -370 0 0 -40z"/> <path d="M900 710 l0 -40 220 0 220 0 0 40 0 40 -220 0 -220 0 0 -40z"/> </g> </svg> Then, a spinning TinyMCE logo appears! So much nicer! 🙂 Stuck on the App.js code? 
```js
        + TinyMCE</h2>
        </div>
        <TinyEditorComponent
          id="myCoolEditor"
          onEditorChange={(content) => console.log(content)}
        />
      </div>
    );
  }
}

export default App;
```
https://www.tiny.cloud/blog/how-to-integrate-react-with-tinymce/
CC-MAIN-2021-39
refinedweb
1,847
51.38
```c
#include <fmtmsg.h>

int fmtmsg(long classification, const char *label, int severity,
           const char *text, const char *action, const char *tag);
```

The label parameter identifies the source of the message. The string must consist of two colon-separated parts, where the first part has not more than 10 and the second part not more than 14 characters.

The text parameter describes the condition of the error.

The action parameter describes possible steps to recover from the error. If it is printed, it is prefixed by "TO FIX: ".

The tag parameter is a reference to the online documentation where more information can be found. It should contain the label value and a unique identification number.

The first value defines the output channel. The second value is the source of the error. The third value encodes the detector of the problem. The fourth value shows the severity of the incident. The numeric values are between 0 and 4. Using addseverity(), additional severity levels can be defined.

An example (the fmtmsg() call is reconstructed here to match the output shown below, since it was garbled in the source text):

```c
#include <stdio.h>
#include <fmtmsg.h>

int main(void)
{
    long class = MM_PRINT | MM_SOFT | MM_OPSYS | MM_ERROR;

    if (fmtmsg(class, "util-linux:mount", MM_ERROR,
               "unknown mount option", "See mount(8).",
               "util-linux:mount:017") != MM_OK)
        printf("fmtmsg failed\n");
    return 0;
}
```

The output should be:

```
util-linux:mount: ERROR: unknown mount option
TO FIX: See mount(8).  util-linux:mount:017
```

and after

```
MSGVERB=text:action; export MSGVERB
```

the output becomes:

```
unknown mount option
TO FIX: See mount(8).
```
http://www.linuxmanpages.com/man3/fmtmsg.3.php
crawl-003
refinedweb
204
55.34
Aborting is not "safe". Aborting is one of at least five ways to handle this particular error. Whether that is 'safe' depends on the context, i.e. your definition of that word. The fundamental point is that you cannot know beforehand whether looking at the string twice is a [performance] problem, whether truncating (with or without fixing incomplete UTF-8 codes) is better than not starting to fill the buffer in the first place, whether calling abort() is a good idea (I'd say that if you are a library, it almost never is), whether to return something negative or the new length or the source length or …, and a host of related questions, all of which do not lend themselves to consensus answers. As this discussion shows quite clearly, IMHO. My point is that, with the sole exception of leaving the destination buffer undisturbed when the source won't fit, any of the aforementioned behaviors can be implemented with a reasonably-trivial O(1) wrapper around strlcpy(). Therefore, keeping strlcpy() out of libc is … kindof stupid. Again, IMHO. Instead, people are told to use strncpy(). Which they'll do incorrectly. Let's face it, running off the end of a string into la-la land is always worse than truncating it. 
The ups and downs of strlcpy()
Posted Jul 26, 2012 1:28 UTC (Thu) by nybble41 (subscriber, #55106) [Link]

Here's a suggestion (only partly sarcastic):

```c
typedef size_t (*strxcpy_handler_t)(char *dst, const char *src, size_t size, void *data);

size_t strxcpy(char *dst, const char *src, size_t size,
               strxcpy_handler_t overflow_fn, void *overflow_data)
{
    char *p;
    const char *q;
    for (p = dst, q = src; *q; ++p, ++q) {
        if ((p - dst) >= size) {
            return overflow_fn(dst, src, size, overflow_data);
        }
        *p = *q;
    }
    /* get here only if strlen(src) < size */
    *p++ = '\0';
    return (p - dst);
}

size_t strxcpy_truncate(char *dst, const char *src, size_t size, void *data)
{
    if (size <= 0) abort();
    dst[size - 1] = '\0';
    return size + strlen(src + size);
}

size_t strxcpy_abort(char *dst, const char *src, size_t size, void *data)
{
    abort();
    return size;
}

if (strxcpy(dst, src, dst_size, strxcpy_truncate, NULL) >= dst_size) ...;
(void)strxcpy(dst, src, dst_size, strxcpy_abort, NULL);
(void)strxcpy(dst, src, dst_size, strxcpy_subst, "(input too long)");
/* ... */
```

The ups and downs of strlcpy()
Posted Jul 26, 2012 8:53 UTC (Thu) by renox (subscriber, #23785) [Link]

That said, one size doesn't fit all, so having different functions is reasonable; the biggest issue is that there is no sane default behaviour.

The ups and downs of strlcpy()
Posted Jul 26, 2012 16:27 UTC (Thu) by nybble41 (subscriber, #55106) [Link]

The strxcpy function isn't just a wrapper; it does all of the real work. The strxcpy_abort and strxcpy_truncate functions only run when an overflow condition is detected. This allows you to substitute your own preferred method of error-handling. This is actually rather similar to the way exceptions are handled in Common Lisp or Scheme programs, except that the Lisp version would use dynamic variables rather than explicit arguments for the handler code, which results in less cluttered code.
```scheme
(define (default-error-handler error-value)
  (abort))

(define current-error-handler
  (make-parameter default-error-handler))

(define (do-something)
  (... (if (ok? var)
           var
           ((current-error-handler) var)) ...))

; aborts on error
(do-something)

; evaluates to #t on success, or #f on error
(let/cc return
  (parameterize ([current-error-handler (lambda _ (return #f))])
    (do-something)
    #t))

; uses "value" in place of var on error
(parameterize ([current-error-handler (lambda _ value)])
  (do-something))
```

Scheme-style parameters are attached to the current continuation, meaning that they're not only thread-safe, but that the bindings only affect the inner dynamic scope of the (parameterize) form, even in exceptional cases such as non-local return (like the middle example above) and even re-entry into a dynamic scope which was previously exited.
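The handler-injection pattern discussed above is language-agnostic: the copy routine does the work, and the caller decides the overflow policy. A minimal Python sketch of the same idea (function names are illustrative, not from any library):

```python
def strxcpy(src, size, overflow_fn):
    """Copy src into a notional buffer of `size` chars; delegate overflow handling."""
    if len(src) >= size:        # no room left for the terminating NUL
        return overflow_fn(src, size)
    return src

def truncate(src, size):
    # keep size - 1 characters, mirroring strxcpy_truncate above
    return src[:size - 1]

def fail(src, size):
    raise ValueError("input too long")

print(strxcpy("hello", 16, truncate))       # fits, returned unchanged
print(strxcpy("hello world", 6, truncate))  # truncated to 5 characters
```

Swapping `truncate` for `fail` switches the policy from silent truncation to an exception, without touching the copy routine itself.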
https://lwn.net/Articles/508152/
CC-MAIN-2017-30
refinedweb
641
52.83
Astro Icon

A straight-forward Icon component for Astro.

Setup

- Install astro-icon.

```
npm i astro-icon
# or
yarn add astro-icon
```

- Exclude svgo from SSR bundling in your Astro config:

```js
export default {
  vite: {
    ssr: {
      external: ["svgo"],
    },
  },
};
```

Icon Packs

astro-icon automatically includes all of the most common icon packs, powered by Iconify! To browse supported icons, check the official Icon Sets reference or visit Icônes.

Usage

Icon will inline the SVG directly in your HTML.

```astro
---
import { Icon } from 'astro-icon'
---

<!-- Automatically fetches and inlines Material Design Icon's "account" SVG -->
<Icon pack="mdi" name="account" />

<!-- Equivalent shorthand -->
<Icon name="mdi:account" />
```

Sprite will reference the SVG from a spritesheet via `<use>`.

```astro
---
import { Sprite } from 'astro-icon'
---

<!-- Required ONCE per page as a parent of any <Sprite> components! Creates `<symbol>` for each icon -->
<!-- Can also be included in your Layout component! -->
<Sprite.Provider>
  <!-- Automatically fetches and inlines Material Design Icon's "account" SVG -->
  <Sprite pack="mdi" name="account" />

  <!-- Equivalent shorthand -->
  <Sprite name="mdi:account" />
</Sprite.Provider>
```

You may also create Local Icon Packs.

Local Icons

By default, astro-icon supports custom local svg icons. They are optimized with svgo automatically with no extra build step. See "A Pretty Good SVG Icon System" from CSS Tricks.

Usage

- Create a directory inside of src/ named icons/.
- Add each desired icon as an individual .svg file to src/icons/
- Reference a specific icon file using the name prop.

Icon will inline the SVG directly in your HTML.

```astro
---
import { Icon } from 'astro-icon';
---

<!-- Loads the SVG in `/src/icons/filename.svg` -->
<Icon name="filename" />
```

Sprite will reference the SVG from a spritesheet via `<use>`.

```astro
---
import { Sprite } from 'astro-icon';
---

<!-- Required ONCE per page as a parent of any <Sprite> components! Creates `<symbol>` for each icon -->
<!-- Can also be included in your Layout component! -->
<Sprite.Provider>
  <!-- Uses the sprite from `/src/icons/filename.svg` -->
  <Sprite name="filename" />
</Sprite.Provider>
```

Local Icon Packs

astro-icon supports custom local icon packs. These are also referenced with the pack and/or name props.

- Create a directory inside of src/ named icons/.
- Create a JS/TS file with your pack name inside of that directory, eg src/icons/my-pack.ts
- Use the createIconPack utility to handle most common situations.

```ts
import { createIconPack } from "astro-icon";

// Resolves `heroicons` dependency and reads SVG files from the `heroicons/outline` directory
export default createIconPack({ package: "heroicons", dir: "outline" });

// Resolves `name` from a remote server, like GitHub!
export default createIconPack({ url: " });
```

If you have custom constraints, you can always create the resolver yourself. Export a default function that resolves the name argument to an SVG string.

```ts
import { loadMyPackSvg } from "my-pack";

export default async (name: string): Promise<string> => {
  const svgString = await loadMyPackSvg(name);
  return svgString;
};
```

Styling

Styling your astro-icon is straightforward. Any styles can be targeted with the `[astro-icon]` attribute selector. If you want to target a specific icon, you may target it by name using `[astro-icon="filename"]`.

```astro
---
import { Icon } from 'astro-icon';
---

<style lang="css">
  [astro-icon] {
    color: blue;
    /* OR */
    fill: blue;
  }
  [astro-icon="annotation"] {
    fill: red;
  }
</style>

<!-- will be blue -->
<Icon name="annotation" />

<!-- will be red -->
```

Props

`<Icon>` and `<Sprite>` share the same interface.

The name prop references a specific icon. It is required.

The optimize prop is a boolean. Defaults to true. In the future it will control svgo options.

Both components also accept any global HTML attributes and aria attributes. They will be forwarded to the rendered `<svg>` element.

See the Props.ts file for more details.
# LINQ to SharePoint – Scope is Site Collection

However, if you're testing your SPMetal-driven data model with a console application – as I have – and have cross-site-collection content queries, you won't run into the limitation. Here's a little demonstration: a console app with SPMetal-generated data access to retrieve blog posts from SharePoint sites.

First, we generate the data access model with SPMetal. In this case the objective is reached as shown in the following image. The result is a code file named BlogDC.cs; the namespace would be DataModel.Blogs, not DataMode.Blogs, a typo in the screen shot.

Then a console app, which retrieves blog posts from the given site collections using the generated data access and outputs the titles of the posts:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.SharePoint;

namespace DataModel.Blogs
{
    class Program
    {
        static void Main(string[] args)
        {
            var sites = new List<string> { "", "" };
            var posts = Get(5, sites);
            foreach (var post in posts)
            {
                Console.WriteLine(post.Title);
            }
            Console.ReadKey();
        }

        public static IEnumerable<Post> Get(int limit, List<string> siteUrls)
        {
            var posts = new List<Post>();
            foreach (var siteUrl in siteUrls)
            {
                posts.AddRange(Get(limit, siteUrl).ToList());
            }
            return posts.OrderByDescending(p => p.Published).Take(limit);
        }

        public static IEnumerable<Post> Get(int limit, string siteUrl)
        {
            var posts = new List<Post>();
            using (var site = new SPSite(siteUrl))
            {
                foreach (var web in site.AllWebs.Where(w => w.WebTemplate.Equals(SPWebTemplate.WebTemplateBLOG)))
                {
                    posts.AddRange(GetFromWeb(limit, web.Url));
                }
            }
            return posts.OrderByDescending(p => p.Published).Take(limit);
        }

        public static List<Post> GetFromWeb(int limit, string webUrl)
        {
            using (var dc = new BlogDataContext(webUrl))
            {
                try
                {
                    return (from post in dc.Posts
                            orderby post.Published descending
                            select post).Take(limit).ToList();
                }
                catch (ArgumentException)
                {
                    return new List<Post>();
                }
            }
        }
    }
}
```

The outcome of the console application is that it finds and outputs the titles of two blog posts, "Blog in another site collection" and "My first blog post", from two different Blog sites in two different site collections, as I have two site collections in my web application and a Blog site on each with one post apiece.

My former colleague Sami Poimala took a little time to investigate a problem we had in a SharePoint customization case where a scenario similar to the one described above was first tested successfully in a console app, but the data access model wouldn't work when running inside SharePoint and would only find items from the current site collection. The confusion, having read the MSDN article How to: Query Using LINQ to SharePoint, which suggests there should be no problem, was almost unbearable. Here's what Sami found out with a little help of Reflector and Google:

Firstly, there seems to be an assumption made in the Microsoft.SharePoint.Linq.Provider.SPServerDataConnection constructor, which forces the use of SPContext.Current.Site when operating in SharePoint context.

Secondly, you can find Chun Liu's great dive into LINQ to SharePoint, which explains the behaviour – a little quote from Liu's post:

> So why SPServerDataConnection was designed to use SPContext.Current.Site, instead of using new SPSite(url) directly? Does it make sense? Well, at lease to me it does make sense because site collection should be a scope of custom queries.

So, what is the conclusion? Site collections are boundaries, and you shouldn't consider showing content from another site collection – at least with LINQ to SharePoint and SPMetal-driven solutions, and, if a slight expression of irritation is allowed considering the mixed information available, because it does make sense to some. If this is the case, at least put a flag in the constructor or something that lets you specify which to use. And if you are going to cross site collections, we should be prepared for security context changes as well, I suppose.
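As an aside, the aggregation the console app performs (collect candidate posts from each source, then keep only the newest `limit` overall, mirroring `OrderByDescending(p => p.Published).Take(limit)`) maps directly onto `heapq.nlargest`. A small language-agnostic sketch in Python, with made-up post records standing in for the SPMetal `Post` entities:

```python
import heapq
from datetime import date

# Hypothetical stand-ins for the Post entities pulled from each site collection.
site_a = [("Blog in another site collection", date(2010, 9, 14))]
site_b = [("My first blog post", date(2010, 9, 10)),
          ("Draft", date(2010, 9, 1))]

def newest_posts(limit, *sources):
    """Merge per-source results and keep the `limit` most recent overall."""
    return heapq.nlargest(limit, (p for src in sources for p in src),
                          key=lambda p: p[1])

print([title for title, _ in newest_posts(2, site_a, site_b)])
# → ['Blog in another site collection', 'My first blog post']
```

`nlargest` avoids fully sorting the merged list, which matters once each source returns more than a handful of candidates.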
Comment: I think Liu's post saying "because site collection should be a scope of custom queries" means it shouldn't be (a typo), and that this is because the security context cannot be assumed valid. I'd say as long as I can specify both, we are good.

Reply: Thanks for your comment. Security context shouldn't be a problem, because every query is run in the context of the current user, and no data is – or at least shouldn't be – retrieved if the current user has no privileges to the queried list, whether the list lies in the current or some other site collection. That's how, for instance, CrossListQueryCache's GetSiteData method works, and it works just fine. I don't think Liu had a typo in his post. It becomes obvious if you read the next chapter after the supposed typo.

Comment (Dan): I'm late to this party, but I'll throw an opinion into the pot. Surely Liu's "rule" that custom queries shouldn't cross a site collection boundary is an arbitrary one. Why shouldn't they? While it might be convenient for the SharePoint development team to come up with such arbitrary limitations, the simple fact is that the world doesn't want to work that way. I can't think of a client I've worked with over the past three or four years who didn't have some requirement that involved mashups of cross-site-collection data. Many times that's because their SharePoint installations have evolved over time and just don't have "optimal" site structures. While it might make someone feel important to declare that it shouldn't be done because an obvious screw-up in the code prevents it, let them try talking these clients into tearing down and rebuilding their farms so site collection boundaries don't have to be crossed. Microsoft left that gap to be filled by third-party SharePoint vendors for years, and now that they actually have a reasonable OOTB mechanism for doing it, they ship it hobbled. Strange, but not completely unexpected.

Reply: Thanks for the comment, Dan. I feel your pain.
Comment: If you are looking for alternatives, Camlex.NET () might be something to interest you. I haven't tried it yet myself, but I have met Alexey Sadomov and am convinced.

Comment (Kain): I too worked under the assumption that LINQ to SharePoint would be universal in its access, given it is working with the server object model, and I only discovered the issue when trying to deploy a solution to the production environment. In my dev environment I had created a sub-web off the site root as a data site for shared data to simulate our production environment, but only realised when deploying that the live data site was in its own site collection, as its shared data is used by multiple site collections. Rather than rewriting all my code to dump LINQ and use manual loading of sites/webs/lists, I looked into overriding certain functions as a potential solution, but Microsoft have locked down most of the key areas with the "internal" keyword. It's a bit of a hack, but I was able to get around this rather stupid (imo) limitation with a bit of Reflector digging. Essentially I modified the getter of the lists in the generated data context classes to temporarily clear the HttpContext while doing the LINQ requests, which forces the SPServerDataConnection constructor to use my specified URL rather than the current context.

```csharp
public Microsoft.SharePoint.Linq.EntityList MyList
{
    get
    {
        HttpContext current = HttpContext.Current;
        HttpContext.Current = null;
        EntityList items = this.GetList("My List");
        HttpContext.Current = current;
        return items;
    }
}
```

I'm not sure if there are any potential implications of this that I have overlooked (potentially including multiple threads using the context singleton), but a cursory review of the code in Reflector seems to indicate that there would be no obvious issues with this. If anyone can tell me differently, I would be interested to hear your thoughts on it.

Reply: Yes, Kain, that's the technique of the workaround. You might want to consider try {} finally {} when toggling HttpContext.Current.
Comment: Hey guys, great article. I am pretty new to SP, so I decided to try out LINQ to SP in the multi-site implementation that I am working on. I quickly found this issue as well while reflecting via dotPeek. Did anyone submit a feature request to take care of this? This seriously limits the usefulness of the provider and thus makes creating SP implementations so much harder. I am currently looking at a way to report this as a bug or get in touch with the SP dev team. Ping me if you're interested in the outcome. PS: I have not used the blog site in a while, but it does have current info to get to me.
Ad campaigns, ad sets, and ads have one of the following status types. For background, see the Ads Developer Blog post "Deleted versus Archived".

Live ad objects can have the following status:

- ACTIVE
- PAUSED, for ADSET or CAMPAIGN
- PENDING_REVIEW
- CREDIT_CARD_NEEDED
- PREAPPROVED
- DISABLED

Set an ad object to ARCHIVED by setting its status field to ARCHIVED. When an object's status is set to ARCHIVED, you can continue to query its details and stats based on the object ID. However, there is a maximum limit on the number of objects you can archive, so you should respect this limit and change the status to DELETED when you no longer need an object. An ARCHIVED object has only two fields you can change: name and status, and you can only change status to DELETED.

Set an ad object to DELETED by either setting the status field to DELETED or sending an HTTP DELETE to that object. Once an object's status is set to DELETED, you cannot set it back to ARCHIVED. If you keep the deleted object's ID, you can continue to retrieve stats or object details by querying that ID. However, you cannot retrieve deleted objects as a connection from a non-deleted node. For example, `<API_VERSION>/<AD_ID>/insights` works for a deleted object, but `<API_VERSION>/act_<AD_ACCOUNT_ID>/insights?level=ad` does not return stats for the deleted object.

After you delete an ad, it may still track impressions, clicks, and actions for 28 days after the date of last delivery. You can query insights for DELETED objects using the `ad.effective_status` filter. If you have an ad set with two ads in it, and you delete one ad, the following two queries do not return the same results:

- `<API_VERSION>/<AD_SET_ID>/insights`
- `<API_VERSION>/<AD_ID>/insights`

The ad set returns stats for both the deleted and the non-deleted ads in it. However, when you query for ads in the ad set via `<API_VERSION>/<AD_SET_ID>/ads`, you only see one ad.

To avoid this scenario, you should delete ads 28 days after their last date of delivery to ensure stats no longer change.
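The 28-day rule above is easy to mechanize: given an ad's last delivery date, the earliest safe deletion date is 28 days later. A minimal sketch (the example date is made up for illustration):

```python
from datetime import date, timedelta

STATS_SETTLE_DAYS = 28  # stats can keep changing for 28 days after last delivery

def earliest_safe_delete(last_delivery: date) -> date:
    """Return the first date on which deleting the ad no longer risks
    losing late-arriving impression/click/action stats."""
    return last_delivery + timedelta(days=STATS_SETTLE_DAYS)

# Hypothetical example: an ad that last delivered on 2019-03-01
print(earliest_safe_delete(date(2019, 3, 1)))  # → 2019-03-29
```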
Also, you should store the stats or IDs of those objects in your own system before you delete them. This recommendation is optional. You cannot change any field, except name, for a DELETED object. This is how you typically manage object status: archive objects you no longer run, and when you approach the archive limit, move objects to the deleted state to reduce the count.

The status on ad objects works this way for the hierarchy of ad objects:

- If you set the status to paused, archived, or deleted for a campaign, all the objects below it automatically inherit that status. If you set an ad campaign to deleted, you cannot retrieve the ad sets or ads below that campaign without explicitly specifying their IDs.
- If an ad is set to paused, archived, or deleted, the ad set or ad campaign containing that ad keeps its original status and remains available for retrieval.

Limits apply to ARCHIVED objects for a given ad account. If you read archived edges, you need to specifically filter for the archived objects, since we do not return them by default. If you read stats for an ad object, we include the stats of all child objects, whether the child is active, archived, or deleted; therefore you need no filter for insights on child objects.

Objects with statuses such as ACTIVE and PAUSED differ from those with ARCHIVED or DELETED status. Here are the major differences.
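The transition rules spelled out above (live statuses can move to ARCHIVED or DELETED, ARCHIVED can only move on to DELETED, and DELETED is terminal) can be captured in a small validation helper. This is a local model of the documented rules, not a Marketing API call:

```python
# Allowed status transitions, as described in the text above.
LIVE = {"ACTIVE", "PAUSED", "PENDING_REVIEW", "CREDIT_CARD_NEEDED",
        "PREAPPROVED", "DISABLED"}

def can_transition(current: str, new: str) -> bool:
    """Return True if an ad object may move from `current` to `new` status."""
    if current == "DELETED":
        return False                 # DELETED is terminal
    if current == "ARCHIVED":
        return new == "DELETED"      # archived objects may only be deleted
    if current in LIVE:
        return new in LIVE | {"ARCHIVED", "DELETED"}
    return False

assert can_transition("ARCHIVED", "DELETED")
assert not can_transition("DELETED", "ARCHIVED")  # cannot be un-deleted
assert can_transition("ACTIVE", "ARCHIVED")
```

Guarding writes with a check like this catches the one-way nature of DELETED before the API rejects the request.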
To set an ad to be archived:

```php
use FacebookAds\Object\Ad;

$ad = new Ad(<AD_ID>);
$ad->archive();
```

```python
from facebookads.adobjects.ad import Ad

ad = Ad(ad_id)
ad.remote_archive()
```

```java
new Ad(<AD_ID>, context).update()
    .setStatus(Ad.EnumStatus.VALUE_ARCHIVED)
    .execute();
```

```shell
curl \
  -F 'status=ARCHIVED' \
  -F 'access_token=<ACCESS_TOKEN>' \
  <API_VERSION>/<AD_ID>
```

To delete an ad:

```php
use FacebookAds\Object\Ad;

$ad = new Ad(<AD_ID>);
$ad->deleteSelf();
```

```python
from facebookads.adobjects.ad import Ad

ad = Ad(<AD_ID>)
ad.remote_delete()
```

```java
new Ad(<AD_ID>, context).update()
    .setStatus(Ad.EnumStatus.VALUE_DELETED)
    .execute();
```

```shell
curl -X DELETE \
  -d 'access_token=<ACCESS_TOKEN>' \
  <API_VERSION>/<AD_ID>/
```

Retrieving live sub-objects of a live object (for example, all live ads of an ad campaign, not including ARCHIVED or DELETED ads) needs no extra filter. Retrieving ARCHIVED sub-objects of a live object (for example, all ARCHIVED ads of an ad set) requires the status filter.
I have data such as:

```
['$15.50']
['$10.00']
['$15.50']
['$15.50']
['$22.28']
['$50']
['$15.50']
['$10.00']
```

I want to get rid of the dollar sign and turn the strings into floats so I can use the numbers for several calculations. I have tried the following:

```python
array[0] = float(array.text.strip('$'))
```

which gives me an attribute error, because apparently a 'list' object has no 'text' attribute. My bad. Is there a similar way for 'list' objects to get stripped? Any other suggestions would be welcome too. Thanks in advance.

---

Try using a list comprehension:

```python
array = [float(x.strip("$")) for x in array]
```

---

With regex:

```python
import re
array = [float(re.sub(r"\$", "", x)) for x in array]
```

In case '$' is not at the end or beginning of the string.

---

This should do:

```python
[float(s.replace(',', '.').replace('$', '')) for s in array]
```

I have taken the liberty to change your data in order to consider a wider variety of test cases:

```python
array = ['$15.50', '$ 10.00', ' $15.50 ', '$15,50', '$22,28 ', ' 10,00 $ ']
```

And this is what you get:

```python
In [8]: [float(s.replace(',', '.').replace('$', '')) for s in array]
Out[8]: [15.5, 10.0, 15.5, 15.5, 22.28, 10.0]
```
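Note that in the sample data each row is itself a one-element list (`['$15.50']`), which is exactly why `array.text` failed. If the data really is a list of lists, unwrap `row[0]` before stripping; a sketch over a subset of the sample rows:

```python
rows = [['$15.50'], ['$10.00'], ['$15.50'], ['$22.28'], ['$50']]

# Each row is a one-element list, so take row[0] first, then drop the
# leading '$' with lstrip and convert to float.
values = [float(row[0].lstrip('$')) for row in rows]
print(values)  # → [15.5, 10.0, 15.5, 22.28, 50.0]
```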
**svecon** Pretty easy with C#:

```csharp
using System;

class Solution {
    static void Main(String[] args) {
        int N = int.Parse(Console.ReadLine());
        for (int i = 0; i < N; i++)
            Console.WriteLine(new String('#', i + 1).PadLeft(N, ' '));
    }
}
```

**_ankit_singh_** I can't comment in a formatted way. Can you tell me how to do that?

**ismailkuet** It's fun with Python:

```python
for i in range(1, n + 1):
    print(str('#' * i).rjust(n))
```

**chaubey_rajan3** Why do we use rjust(n)? What does it do?

**shivanshu_saxena**

```python
for i in range(1, n + 1):
    print((" ") * (n - i), ("#") * i)
```

Why is this not correct, as it shows the same output?

**akashagrawal3011** In the code editor output window it shows me 'int' object has no attribute 'rjust'. Why?

**shivam_hackgod**

```python
print('\n'.join([' ' * (n - x) + '#' * x for x in range(1, n + 1)]))
```

**nikhilgupta_myid** "+" (the concat operation) would be a heavy operation.

**sivamamillapalli** I tried the below:

```python
for i in range(n):
    print("%6s" % ((i + 1) * "#"))
```

Except for one test case, all others failed. Can you please explain why? Thanks in advance.

**carl_smotricz** Sure. `"%6s"` means your hashes will be printed in a field 6 wide. That's fine if your stair is exactly 6 high, and only then. If you're going to use a format, you need to adapt that "6" to the intended height/width of your stair.

**societalghost** Very similar to my solution:

```python
for i in range(n):
    print(str.rjust((i + 1) * '#', n))
```

**fabrinaruto7**

```python
al = n * "#"
el = n * " "
for i in range(1, len(al) + 1):
    result = el[i:] + al[:i]
    print(result)
```

I did it like this, and I spent a lot of time thinking it up.

**venison** I was looking for exactly that type of string method. I figured C# just had to have something like that. Thanks fer sharin'.
**_BlueBird_** Just that simple with Java StringBuilder:

```java
public static void main(String[] args) {
    Scanner in = new Scanner(System.in);
    int n = in.nextInt();
    StringBuilder builder = new StringBuilder();
    for (int i = 0; i < n; i++)
        builder.append(" ");
    int j = 0;
    for (int i = 1; i <= n; i++) {
        builder.replace(builder.length() - i, builder.length() - j, "#");
        System.out.println(builder);
        j++;
    }
}
```

**asbadve**

```java
Scanner in = new Scanner(System.in);
int n = in.nextInt();
int spaceCnt = 0;
for (int i = 0; i < n; i++) {
    spaceCnt = n - (i + 1);
    System.out.print(new String(new char[spaceCnt]).replace("\0", " ")
        + new String(new char[n - spaceCnt]).replace("\0", "#") + "\n");
}
```

How's that?

**douglas_bell** I had the same general idea, but it's not passing even though the output is identical :/

```java
import java.io.*;
import java.util.*;

public class Solution {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int stairs = scan.nextInt();
        for (int i = 0; i < stairs; i++)
            System.out.printf("%" + (stairs + 1) + "s",
                new String(new char[i + 1]).replace("\0", "#") + "\r");
    }
}
```

edit: crap. Changed the CR to a newline and it passed.

**iamfaker** My idea:

```java
public static void main(String[] args) {
    Scanner in = new Scanner(System.in);
    int n = in.nextInt();
    in.close();
    for (int i = 0; i < n; i++) {
        for (int j = 0; j <= n - i - 2; j++) {
            System.out.print(" ");
        }
        for (int j = n - i - 1; j < n; j++) {
            System.out.print("#");
        }
        System.out.println();
    }
}
```

**neil_cuajotor** Nice puzzle, mate, but it seems pretty confusing.

**CodeAcharya** How about this, mate:

```java
public class Solution {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        for (int i = 1; i <= n; i++) {
            for (int k = n; k > i; k--) {
                System.out.printf(" ");
            }
            for (int j = 1; j <= i; j++) {
                System.out.printf("#");
            }
            System.out.println();
        }
    }
}
```

**joaquimley** 3 nested for loops? There's definitely something to be improved here; think about recursion.

**tatianaensslin** It's really only 2 nested for loops, and they're each operating on different ranges.

**benaffleks** 2 nested loops, and the size is extremely small, so computation time is not something we need to consider.

**dallas3**

```javascript
function createString(str, ns, n) {
    if (ns > 0) return createString(str + " ", ns - 1, n);
    else if (n > 0 && ns === 0) return createString(str + "#", ns, n - 1);
    else if (n === 0 && ns === 0) return str;
}

// Complete the staircase function below.
function staircase(n) {
    let ns = n.length - 1;
    console.log(createString("", ns, n));
}
```

I was going to add a while loop, but I keep getting undefined; not sure what the problem is.

**Phiber_Optik** I'm still confused with the logic; can you please explain it further?

**bineykingsley36**

1. I loop from 1 to the maximum number (n).
2. Then I print a number of # amounting to the current loop index on each line, followed by a newline break.
3. But since this would leave the output aligned on the left, I also print (n-i) spaces on each line, where n is the maximum number and i is the current loop index, before executing (2).

So for instance, when i is 1, I print 5 spaces before I print 1 # to fill up the six spaces (total number of spaces on each line). When i is 2, I print 4 spaces before I print 2 # to make sure all six spaces on the line are filled. This also ensures that the # is aligned to the right.

NB. The range is (1, n+1). I didn't start from 0 because that would cause an empty line at the beginning of the output, and I ended at n+1 because the loop ends at n-1, so I add 1 to get to the correct n. Do you get it now?

**Phiber_Optik** Yes! You explained it awesomely. Thank you, man!

**michael_broscius** What?! Because it's not convoluted crap it's brute force? The only silly thing is that he started indices at 0 instead of 1.
**bg2407111** You can't get better than O(n^2): the goal is to print an n×n matrix where some of the elements are spaces and some are #. In order to do that you have to do at least the n^2 print statements. Just because you can code it in a few lines in a language like C# doesn't mean it's more efficient; that same process is going on behind the scenes.

**shanuraj1995** Yes, bro, you are correct... it will require a minimum of O(n^2) time complexity.

**filipi_braga**

```csharp
static void staircase(int n) {
    string str = "";
    for (int i = 0; i < n; i++) {
        str += "#";
        Console.Write(str.PadLeft(n, ' ') + "\n");
    }
}
```

And, what does my answer mean?

**zeel_mehta97** How about this?

```swift
for i in 1...n {
    var string = ""
    string = String(repeating: " ", count: n - i)
    string.append(String(repeating: "#", count: i))
    print(string)
}
```

**bineykingsley36** Python2
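As an aside, the output-size argument above can be checked directly: the staircase of height n consists of n lines of width n, so any program must emit n² characters (plus newlines), which is why Θ(n²) is a floor regardless of how terse the source code is. A quick check in Python:

```python
def staircase_lines(n):
    """The n right-aligned lines of the staircase."""
    return [('#' * i).rjust(n) for i in range(1, n + 1)]

n = 6
total_chars = sum(len(line) for line in staircase_lines(n))
assert total_chars == n * n  # n lines of width n each → quadratic output size
```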
er_shreyansh1991 + 1 comment can you please explain how this line wokrs System.out.printf("%15s","##"+"\n"); alexgiby + 1 comment The printf is for formatting and when we give %s it indicate the String need to format, the number indicate (15) starting point for printing string (##) and \n indicate for new line so that ## will start printing in a new Line and start from 15th space for more details refer * * aswathylalitha + 2 comments A small change in your code to make it more simple public class Solution { public static void main(String[] args) { Scanner in = new Scanner(System.in); int n = in.nextInt(); String str="#"; for (int i=0;i<n;i++) { System.out.printf("%"+n+"s%n",str); str=str+"#"; } } } sbphillips19 + 0 comments Why is it n+1. Isn't that saying how long the string is so it should be n for all? System.out.printf("%"+(n+1)+"s",str+"\n"); nshtmishra13 + 0 comments please can you explain this line System.out.printf("%"+(n+1)+"s",str+"\n"); mansiagrawal2103 + 1 comment can you please explain this line i didn't got that System.out.printf("%"+(n+1)+"s",str+"\n"); Kanahaiya + 0 comments Hello Mansi, I think this video can help you to understand the above mentioned line. Here is the video explanation – Anoop_SS + 1 comment how about this? String hash = "#"; for(int i = 1; i <= n; i++){ System.out.printf("%"+n+"."+i+"s%n", hash); hash+="#"; } mahamadbilal696 + 0 comments Sorry Anoop, i may not help you, i am expert in python notin other languages.... uphillbattler1 + 1 comment Can you please explain mamoun003 + 17 comments python is love,,,,python is life n = int(input()) for i in range(1,n+1): print(('#'*i).rjust(n,' ')) snchzantonio + 4 comments I updated my answer based on yours n = int(input()) [print(("#"*f).rjust(n) ) for f in range(1,n+1)] greenhand1 + 2 comments if you don't want to use rjust. You can simply add space yourself for length in range(n): print(' '*(n-length-1)+'#'*(length+1)) hualing_yu + 2 comments Same idea! 
```python
for i in range(n):
    print(" " * (n - i - 1) + "#" * (i + 1))
```

**michelbetancour1** One-liner:

```python
(lambda num: [print(('#' * x).rjust(num)) for x in range(num + 1)])(int(input().strip()))
```

**mahamadbilal696** I got `SyntaxError: invalid syntax` pointing at the lambda line. Can you please help me see why?

**RANA5069** Bro, how is the n value in rjust() decreasing? First it will print n spaces; then in the next iteration what will happen? Please explain.

**yrisheet** Consider the example `"john".rjust(5)`. This doesn't actually give 5 spaces to the string. Think of the given string as blocks: [j][o][h][n] (i.e. there are four blocks). As we have given width=5 and the given string is 4 blocks, it just adds one more block (an empty block, which is a space) to make it up to 5: [ ][j][o][h][n]. So now it becomes " john"; note the space before the string. If you specify a fill char, the space then gets filled with the given char.

**ruturaj_haval** Will someone please explain this Python code to me? I know how to do it using 2 loops.

**prashy_shtty** Try this:

```python
count = 1
while count <= n:
    print(('#' * count).rjust(n))
    count += 1
```

**an0o0nym** Python 3 is even more love:

```python
for i in range(n):
    print('{:>{len}}'.format('#' * (i + 1), len=n))
```

You can also do it pretty easily as a one-liner :)

**shubhambigdream** My output is the same but it's still showing fail; can anybody tell me what's wrong?

```python
a = int(input()) + 1
i = 0
while a != 0:
    print(" " * a, end="")
    print("#" * i)
    a = a - 1
    i = i + 1
```

Did you find anything?

**Kanahaiya** Could you please format your code, or, if it's not in Java, cross-verify your solution against mine? It will help you figure out the mistake on your own. Here is the video explanation.

**maqsudinamdar7** I'm a beginner in Python. Can you explain this code to me?

**prashy_shtty** A simpler one:

```python
count = 1
while count <= n:
    print(('#' * count).rjust(n))
    count += 1
```

**fenwicknate** Good evening! I was amazed at your succinct and easy solution to this problem. I am currently getting my feet wet in Python and have been using HackerRank to improve my familiarity with it, along with its online documentation. My main question is this: how did you know that you could use `print('#'*1)`? I have scoured programiz.com and Stack Overflow for ideas on handling this problem, but this technique never came up. Any lead on becoming a better Python programmer would be greatly appreciated!

**fenwicknate** Realized that I never put the answer to my own question here, in case someone else was curious about the same thing. This technique is covered in the Python documentation. Although it's not covered in the print or string-methods sections of the documentation, it is in the Built-in Types section. I have been reading this section and learning a lot; hopefully it will be as useful for the rest of the programming enthusiasts out there!

**farhanfarooqui** This is how I did it.
```java
public static void main(String[] args) {
    Scanner in = new Scanner(System.in);
    int N;
    N = in.nextInt();
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            if (j < N - 1 - i) {
                System.out.print(" ");
            } else {
                System.out.print("#");
            }
        }
        System.out.println();
    }
}
```

**nileshtheace** Wow, really awesome, I am amazed by this logic. Please keep posting such amazing things.

**natedehorn**

```java
public static void main(String[] args) {
    Scanner in = new Scanner(System.in);
    int max = in.nextInt();
    for (int i = 1; i <= max; i++) {
        System.out.println(new String(new char[max - i]).replace("\0", " ")
            + new String(new char[i]).replace("\0", "#"));
    }
}
```

**jovanovdusan1**

```java
public static void main(String[] args) {
    Scanner in = new Scanner(System.in);
    int n = in.nextInt();
    StringBuilder sb = new StringBuilder();
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= i; j++) {
            sb.append("#");
        }
        System.out.printf("%" + n + "s", sb.toString());
        sb.delete(0, sb.length());
        System.out.println();
    }
}
```

**misoknr** You really don't need more than one loop for this; you can do it with formatted output.

**chase_franz** This code also works well; time complexity is O(N).

```java
public static String repeat(String str, int times) {
    return new String(new char[times]).replace("\0", str);
}

public static void main(String[] args) {
    Scanner scannerObj = new Scanner(System.in);
    int n = scannerObj.nextInt();
    for (int i = n - 1; i >= 0; i--) {
        System.out.print(repeat(" ", i));
        System.out.println(repeat("#", n - i));
    }
}
```

**zoomstereo** I did it like this ...
```java
import java.io.*;
import java.util.*;

public class Solution {
    static String getFloor(int size, int i) {
        String strFloor = "";
        int n = size - (i + 2);
        for (int k = 0; k < size; ++k) {
            if (k > n) strFloor += "#";
            else strFloor += " ";
        }
        return strFloor;
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int size = sc.nextInt();
        for (int i = 0; i < size; ++i) {
            System.out.println(getFloor(size, i));
        }
    }
}
```

**Glorian** This problem reminds me of a similar problem that I faced during the early semesters of my Bachelor's study :) At that time, I used the inefficient nested-loop solutions. Turns out there's a much better solution posted here. Thanks! Here's my solution (inspired by your solution):

```java
import java.io.*;
import java.util.*;
import java.text.*;
import java.math.*;
import java.util.regex.*;

public class Solution {

    static void staircase(int n) {
        StringBuilder sb = new StringBuilder();
        // create the spaces
        for (int i = 0; i < n - 1; i++) {
            sb.append(" ");
        }
        // create the pound signs (replace the spaces with pound signs)
        for (int j = n; j > 0; j--) {
            sb.replace(j - 1, j, "#");
            System.out.println(sb);
        }
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        staircase(n);
        in.close();
    }
}
```

**raghavsaggu** Here is my solution:

```java
static void staircase(int n) {
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            if (j <= n - i)
                System.out.print(' ');
            else
                System.out.print('#');
        }
        System.out.println();
    }
}
```

**ggarciadeleon** All you need to do is replace the character. Try this on for size:

```java
int counter = 1;
StringBuilder sb = new StringBuilder(n);
for (int i = 0; i < n; i++) {
    sb.append(" ");
}
System.out.println(sb);
for (int i = 1; i <= n; i++) {
    sb.setCharAt(n - counter, '#');
    counter++;
    System.out.println(sb);
}
```

**mehdi_raz** It needs less coding, but it is not a fast approach. This String constructor overload is too slow.
Check here: "I can get Substring to be 5-6 times faster, but using a constructor like String(Char, Int32) turns out to be at least 3 times slower."

**Hegistvan**

```csharp
var cnt = Convert.ToInt32(Console.ReadLine());
var message = string.Empty;
for (int i = 1; i <= cnt; i++)
{
    Console.WriteLine("{0," + cnt + "}", message += "#");
}
```

**deathbullet** That is ugly. Meet Python:

```python
n = int(raw_input())
for i in range(1, n + 1):
    print " " * (n - i) + "#" * i
```

**marcolandiego**

```ruby
(1..n).each {|a| print " "*(n-a) + "#"*a + "\n"}
```

**erichamion** Or for those who feel a semicolon in a one-liner is cheating, Python 2:

```python
print '\n'.join(('#' * (i + 1)).rjust(n) for n in [int(raw_input())] for i in xrange(n))
```

Python 3:

```python
[print(('#' * (i + 1)).rjust(n)) for n in (int(input()),) for i in range(n)]
```

**TheRealUpstart** How does it keep looping and jumping to the next line? (Python 2)

**erichamion** The entire argument to `join` is a generator expression. Within it, `for n in [int(raw_input())] for i in xrange(n)` technically sets up a nested loop, although the outer loop only executes once. `[int(raw_input())]` creates a list with one element, the height of the staircase. `for n in [int(raw_input())]` goes through (each of) the element(s) of that list, setting `n` to that element. (This only happens once because there is only one element.) This part is just a trick to set n to the staircase height without needing a separate statement (separate line or semicolon). The inner loop, `for i in xrange(n)`, is the real loop. Because it's the inner loop, `n` has already been set to the height of the staircase. Of course, this inner loop sets `i` to every integer from 0 to n-1, in turn. `('#' * (i + 1)).rjust(n)` creates each line: for each iteration of the inner loop (and therefore for each value of `n` and `i`), the result has an element with (i + 1) '#' characters, right-justified in a field of width n.
sahiljalan1 + 0 comments

For Java one-liners:

    while (++i <= n)
        System.out.println(new String(new char[n - i]).replace("\0", " ") + new String(new char[i]).replace("\0", "#"));

ArdyFlora + 1 comment

Hi, could you please tell me what's wrong with this:

    space = ' '
    n = int(input().strip())
    for i in range(n):
        noOfSpaces = n - i - 1
        print((space * noOfSpaces), (n - noOfSpaces) * "#")

nicolewhite + 0 comments

A nicer way is to use rjust, I think:

    for i in range(n):
        s = "#" * (i + 1)
        print(s.rjust(n, " "))

RandhyllCho + 4 comments

I made this work in Swift by overloading the operator:

    var n = Int(readLine()!)!

    func *(string: String, scalar: Int) -> String {
        let value = Array(count: scalar, repeatedValue: string)
        return value.joinWithSeparator("")
    }

    for i in 1..<n + 1 {
        print(" " * (n - i) + "#" * i)
    }

nemanja1 + 2 comments

My solution:

    let space = Character(" ")
    let char = Character("#")
    var spaceCount = 0
    var charCount = 0

    for (var i = 0; i < n; i++) {
        spaceCount = n - 1 - i
        charCount = i + 1
        let spaceString = String(count: spaceCount, repeatedValue: space)
        let charString = String(count: charCount, repeatedValue: char)
        print("\(spaceString)\(charString)")
    }

paul31 + 1 comment

Mine was very similar; the only real difference is that my for loop is ready for Swift 3...

    var n = Int(readLine()!)!
    for i in 1...n {
        var row = ""
        let spaces = n - i
        let hashes = n - spaces
        row += String(count: spaces, repeatedValue: Character(" "))
        row += String(count: hashes, repeatedValue: Character("#"))
        print(row)
    }

SteveMartinClark + 0 comments

    // Current syntax is
    for i in 1...n {
        var row = ""
        let spaces = n - i
        let hashes = n - spaces
        row += String(repeating: Character(" "), count: spaces)
        row += String(repeating: Character("#"), count: hashes)
        print(row)
    }

Kijit + 2 comments

How about this?
    for i in 1...n {
        print(String(repeating: " ", count: n - i) + String(repeating: "#", count: i))
    }

sergeyoleynich + 0 comments

A more Swift-like solution:

    (1...n).forEach { value in
        print(String(repeating: " ", count: n - value) + String(repeating: "#", count: value))
    }

or use the reduce function:

    print((1...n).reduce(into: "") { result, value in
        result.append(String(repeating: " ", count: (n - value)))
        result.append(String(repeating: "#", count: value))
        result.append("\n")
    })

I do not know about the complexity.

PhoenixFox + 0 comments

Python's awesome for small scripts like this, but it doesn't do so well when performance matters. I tend to use it until I need fast AI/rendering. That's usually when I switch to Java or C++. Python has some of the nicest features for algorithms and data conversion.

jtipton + 1 comment

Python, meet Ruby:

    n = gets.strip.to_i
    for i in 1..n do
        puts "#{' ' * (n - i)}#{'#' * i}"
    end

mvanlamz + 1 comment

Range and map is more Ruby-esque:

    def staircase(n)
        (1..n).map { |x| "#{' ' * (n - x)}#{'#' * x}" }.join("\n") + "\n"
    end

    n = gets.to_i
    puts(staircase n)

nouriyouri + 0 comments

Alternatively you can also use Integer#upto:

    def staircase(n)
        1.upto(n) { |i| puts ('#' * i).rjust(n) }
    end

    n = gets.to_i

aonazarov + 1 comment

Great solution. For Python 2 it's better to use xrange (range for Python 3), and I think using rjust is more Pythonic. My solution:

    for i in xrange(1, n + 1):
        print ('#' * i).rjust(n)

jefjob15 + 0 comments

I used string concatenation in my code's print statement, but I like this better! I'd never had cause to use rjust before, so it didn't even occur to me. This solution has the added bonus of actually following the problem description (i.e., "The staircase is right-aligned..."). Now I've learned something new. Cheers to you and nicolewhite and others who used rjust!
trungskigoldberg + 2 comments

Inspired by the string arithmetic: C++ also supports such a feature, as follows (maybe there's a little worry about the linear space consumption, but we definitely save an extra inner loop for printing '#'):

    for (int i = 1; i <= n; i++) {
        string s(n - i, ' ');
        string p(i, '#');
        cout << s << p << endl;
    }

agnivabasak1 + 1 comment

Please can someone explain this code to me? I have never used this kind of definition of a string. And could someone please tell me if there is any method to do it using setw(), setfill() and such, although I think setfill() wouldn't be required.

viigihabe + 0 comments

I just finished with the one that uses setw:

    int main() {
        string::size_type n;
        cin >> n;
        auto width = n;
        while (n--)
            cout << setw(width) << string(width - n, '#') << endl;
        return 0;
    }

Could use s.substring(0, width - n, '#') from string s(width, '#'). Not sure if the compiler would generate different code.

PhoenixFox + 1 comment

Even easier in Python:

    n = int(raw_input().strip())
    for i in reversed(xrange(n)):
        print (' ' * (i)) + ('#' * (n - i))

bhavini + 0 comments

Here, we will use two for loops: the outer for loop will print one row of the star pattern, and the inner loops will print the space characters and # characters. In any row, the sum of spaces and stars is equal to N. The number of stars increases by one and the number of spaces before the stars decreases by 1 in consecutive rows. In any row R, we will first print N-R space characters, then R star characters.

benJephunneh + 0 comments

I didn't know about the PadLeft option. 'Preciate your sharing this. Also, you can start the iterator at i=1 to avoid the "i+1" in the string constructor, then run it until i<=N. It's all the same, of course, but a little more intuitive for folks like me.

trycatchnothing + 0 comments

I used a slightly different approach:

    int size = Convert.ToInt32(Console.ReadLine());
    for (int i = 1; i <= size; i++) {
        Console.WriteLine(new string(' ', size - i) + new string('#', i));
    }

nsedley + 0 comments

Interesting.
I used Enumerable.Repeat and concatenated a string. I prefer your PadLeft approach though:

    int n = Convert.ToInt32(Console.ReadLine());
    for (int i = 0; i < n; i++)
    {
        string w = string.Join("", Enumerable.Repeat(" ", (n - 1) - i));
        w += string.Join("", Enumerable.Repeat("#", i + 1));
        Console.WriteLine(w);
    }

alfonsovgs + 0 comments

I did not know that String overload :/ My solution:

    using System;

    class Solution {
        static void Main(String[] args) {
            int N = int.Parse(Console.ReadLine());
            for (int i = 0; i < N; i++)
                Console.WriteLine(Clone('#', i).PadLeft(N, ' '));
        }

        private static string Clone(char character, int count) {
            return "".PadLeft(count, character);
        }
    }

alanthony333 + 2 comments

Did it like this in C. Could this get shorter?

    int main() {
        int n;
        scanf("%d", &n);
        int a = n;
        char space = ' ';
        char hash = '#';
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < a - 1; j++) {
                printf("%c", space);
            }
            a--;
            for (int k = 0; k < i + 1; k++) {
                printf("%c", hash);
            }
            printf("\n");
        }
        return 0;
    }

vhssa + 1 comment

I did it similarly:

    void staircase(int n) {
        // Complete this function
        int k = n;
        while (k > 0) {
            for (int i = 0; i < k - 1; i++) printf(" ");
            for (int j = k - 1; j < n; j++) printf("#");
            printf("\n");
            k--;
        }
    }

mycodeattempts + 1 comment

I created a very simple version in C:

    void staircase(int n) {
        char *str = malloc(n + 1);
        memset(str, (int)'#', n);
        str[n] = '\0';
        for (int i = 0; i < n; i++) {
            printf("%*.*s\n", n, i + 1, str);
        }
    }

Maciej_Macowicz + 0 comments

Hi, I have also a simple version:

    void staircase(int n) {
        char *buf = malloc(n * sizeof(char));
        for (int i = 0; i < n; i++) {
            memset(buf, ' ', n - i - 1);
            memset(buf + n - i - 1, '#', i + 1);
            puts(buf);
        }
    }

mandeepch868 + 0 comments

It's too easy with Python! :) Here:
    n = int(input())
    for i in range(1, n + 1):
        print(' ' * (n - i) + '#' * i)

michelbetancour1 + 0 comments

Even easier in Python, one line:

    (lambda num: [print(('#' * x).rjust(num)) for x in range(num + 1)])(int(input().strip()))

ragulravi1999 + 0 comments

    #include <iostream>
    using namespace std;
    int main() {
        int n;
        cin >> n;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (j > n - i - 2) cout << "#";
                else cout << " ";
            }
            cout << endl;
        }
    }

andrewsenner + 3 comments

JavaScript one-liner (not the most efficient :P):

    const staircase = n => console.log(new Array(n).fill(0).map((_, idx) => "#".repeat(idx + 1).padStart(n)).join('\n'));

alglaze132 + 1 comment

Nice solution! Can you explain what the underscore "_" is doing that you pass to .map()?

gregoryderner + 1 comment

The underscore is the first parameter of the callback arrow function. It would receive the value of the array element. As that is not what he wants, but rather the index, he just named it that way. The same solution here, but I prefer something more readable:

    let result = [];
    for (let i = 0; i < n; i++) {
        result[i] = Array((n) - (i)).join(" ");
        result[i] = result[i].concat(`${Array(i + 2).join('#')}`);
    }
    return result.join('\n');

francisco_j_gtz + 0 comments

Same solution:

    for (let symbols = 1, spaces = n - 1; symbols <= n; symbols++, spaces--) {
        const result = [...Array(spaces).fill(" "), ...Array(symbols).fill("#")];
        console.log(result.join(""));
    }

alxshelepenok + 0 comments

Here is another very simple solution:

    function staircase(n) {
        const s = "#";
        for (let i = 1; i <= n; i += 1) {
            const line = " ".repeat(n - i) + s.repeat(i);
            console.log(line);
        }
    }

josephjeremiah51 + 0 comments

I did something similar:

    const stairCase = (n) => {
        for (let i = 0; i < n; i++) {
            console.log("#".repeat(i + 1).padStart(n));
        }
    }

Kanahaiya + 0 comments

Hello All, there are a lot of sites and GitHub repositories where you can find HackerRank solutions for most of the problems. But I would recommend... which is maintained by me. Here I may be biased, but trust me, you will find this repository useful.
In this repository, I am adding video tutorials too, which you will not find in any of the GitHub repositories out there on the internet. Here is the sample tutorial, which will teach you amazing approaches to solving the HackerRank staircase problem. Please watch the complete tutorial to learn a tricky method for solving this problem. Here my goal is not only to provide the solution but to build problem-solving skills. I would recommend going through the video tutorials, which will increase your logical ability. Don't forget to share your feedback with others if you like my work. :)

Kanahaiya + 0 comments

Hello Coding Lover, if you are looking for solutions to HackerRank problems you can check out the link below. It contains text solutions as well as video explanations. There are still many more solutions in the queue which need to be added. And if you are preparing for a coding interview, feel free to join our community here.

Regards,
Kanahaiya Gupta

Git Hub URL | LIKE US | SUBSCRIBE US |

EthanHunt1104 + 0 comments

JavaScript, clean and sexy, using the repeat method of strings:

    // Complete the staircase function below.
    function staircase(n) {
        for (let i = 1; i <= n; i++) {
            console.log(' '.repeat(n - i) + '#'.repeat(i));
        }
    }

karthak002 + 0 comments

Why won't this be the correct output??? WHYY?

    #include <iostream>
    using namespace std;

    // Complete the staircase function below.
    void staircase(int n) {
        int i, j, k;
        for (i = 0; i < n; i++) {
            for (j = n - 1; j > i; j--) {
                cout << " ";
            }
            for (k = 0; k <= i; k++) {
                cout << "#";
            }
            cout << endl;
        }
    }

    int main() {
        int n;
        cin >> n;
        cin.ignore(numeric_limits<streamsize>::max(), '\n');
        staircase(n);
        return 0;
    }

_tesseractive_ + 0 comments

Also you can initialize i to 1, and then there's no need to add 1 to i in each loop iteration.
I know as programmers we are accustomed to counting from 0 :)

ismailkuet + 0 comments

It's fun with Python:

    for i in range(1, n + 1):
        print(str('#' * i).rjust(n, ' '))

shivam_hackgod + 0 comments

Python: HOLD MY BEER

    def staircase(n):
        lis = [' ' * (n - x) + '#' * x for x in range(1, n + 1)]
        print('\n'.join(lis))

dexterishere_18 + 0 comments

Much easier with Python:

    def staircase(n):
        h = '#'
        s = ' '
        for i in range(1, n + 1):
            print(((n - i) * s) + (i * h))

mojemoron2009 + 0 comments

In Python 3:

    def staircase(n):
        for i in range(1, n + 1):
            print("{0}{1}".format(' ' * (n - i), "#" * i))

savagekid972 + 6 comments

Simple JavaScript solution:

    for (let i = 1; i <= n; i++) {
        console.log("#".repeat(i).padStart(n));
    }

fractalsandflow1 + 3 comments

Love string arithmetic:

    n = int(input())
    for m in range(n):
        print((n - m - 1) * ' ' + (m + 1) * '#')
https://www.hackerrank.com/challenges/staircase/forum
Many things to note, grouped by categories.

Object Oriented Design and Layers

Nowadays everybody knows that a well-architected application consists of a set of layers (such as UI - Model - Database) and a well-designed object-oriented model. Even I do! Since I am teaching Programming Methodology I have tried to put together some nice tiny examples of object-oriented layered architectures, and I have found that current programming languages are not well suited for this need.

If you do a good object-oriented analysis, you will end up with an object model. This object model reappears at the three different layers. That is, let's say I am implementing a TODO-list management program. I will have a UI-layer Task and a Model-layer Task, both containing the description of the task. Furthermore, if I make an inheritance hierarchy such that ImplementationTask inherits from Task and so does DesignTask, I have to repeat this inheritance hierarchy at every layer. Sure, there will be objects specific to each layer, but most domain model objects will be present in all of them.

After thinking for a while I see two solutions to this:

- Using a programming language that allows open classes. In Ruby you can think of having one (or a set of) .rb file for each layer. Since we can add functionality to objects, the UI layer will just reopen the model-layer objects and add the UI-related functionality. Since the model is implemented in separate files, there is no dependence between the model and the UI, but the UI shares the model's object model. The drawback is that classes have to be open at every layer.
- Providing usual OO languages such as Java with a construct and semantics to deal with layers, and creating an IDE that works with such constructs. While we decouple more and more from the filesystem, our way of programming and thinking about independence in programs is still very much influenced by how we store our code in files.
In Java, for example, many times we assume everything in the same .java file is interdependent. We can break this by introducing (for example) a layer keyword that separates the different parts of the object that implement services for the different layers. So we will have something like:

    public class A {
        layer Model
            // Model functionality
        layer UI
            // UI functionality
    }

Plus a file layers.xml for the complete application, describing the dependencies between the layers. The IDE will know which layer we are working in and will show:

- The underlying layers as black-box services (just the methods we can call, not the code)
- The code of the layer we are working in
- Nothing about layers that we do not depend on

Mass spectrometry classification

Finally I have found some GNU software and a reasonably well-written article about mass spectrometry classification.

Network programming

J2EE sucks.

Reviewing

I have been reviewing papers for several conferences and workshops this week. Undoubtedly, GTDT04 papers were the most interesting and high quality. I will be really sad if I am not able to be in NY.

Family matters

Next Thursday I am driving to Seville to see my little niece. I will be back in Barcelona on Monday.

ZF inconsistent

fxn, I cannot reach the paper. Does this mean we can now positively confirm my long-time unproven theorem that 2+2=5? :o)
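The open-classes idea from the first option can be made concrete in Ruby. A minimal sketch, with class and file names invented for illustration (in a real project the two definitions would live in separate per-layer files):

```ruby
# model/task.rb -- the Model layer defines the domain class.
class Task
  attr_reader :description

  def initialize(description)
    @description = description
  end
end

# ui/task.rb -- the UI layer reopens the same class and adds
# presentation behaviour, without the Model code ever
# referencing the UI.
class Task
  def to_display
    "TODO: #{description}"
  end
end

puts Task.new("write report").to_display
```

The Model file never requires the UI file, so the dependency runs in one direction only, yet both layers share a single Task class.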
http://www.advogato.org/person/cerquide/diary.html?start=45
Template:DPL/doc

This template generates a dynamic page list (DPL), a list of all pages meeting some specified criteria, with optional parameters for substantially the full range of DPL options, and default values suited to Wikibooks.

Usage

The following parameters are supported. Each one is optional, but there must be either a cat, or a not, or namespace; and there's an upper limit on the total number of cats and nots (when last noted, the limit was six).

- cat1, cat2, cat3, cat4, cat5 — categories that a page must belong to in order to be listed. The more of these are specified, the fewer pages will qualify for the list.
- not1, not2, not3, not4, not5 — categories that a page must not belong to, in order to be listed. The more of these are specified, the fewer pages will qualify for the list.
- namespace — namespace a page must belong to in order to be listed; to restrict to mainspace, use main.
- stable — how to treat pages that have at least one sighted revision; include treats them no differently than any other page, only lists only sighted pages, and exclude lists only unsighted pages; the default is usually include, but if namespace specifies Wikijunior the default is only. Note: the Wikijunior default stable=only is relied upon at Wikijunior to prevent listing of unvetted pages.
- offset — integer number of pages to omit at the start of the list, default being zero.
- count — integer number of pages to list, default being the maximum list length allowed by the extension (a setting in the extension).
- showerrors — if non-blank, errors are reported (mainly, "There are no pages matching this query"); by default, errors produce no visible output.
- full — if false, the namespace of pages is not shown (only the {{PAGENAME}} of each page is listed); if any other non-blank value, the namespace of pages is shown; default depends on whether parameter namespace is specified — if it is specified, default is false, otherwise default is true.
- method — how the list is ordered; default is categorysortkey, alternatives are categoryadd (when pages were most recently added to the first category in the query) and lastedit (when pages were most recently edited).
- order — whether to show the list forward (ascending) or backward (descending); default is ascending.
- showdate — if non-blank, shows the date when each page was added to the first category (even if method=lastedit).
- mode — what kind of list to generate; default is an unordered list, i.e., each page is preceded by a bullet; alternatives are ordered, i.e., the pages are numbered (1, 2, 3, ...), and none.

Internals

Parameter defaults are coded in {{DPL/simple}}. Requests for extended maximum list length are dispatched to {{DPL/0}}.

See also

These templates have more limited functionality, and somewhat different interfaces; relatively unobvious interface differences, that could cause difficulties when converting to {{DPL}}, are listed below.

- {{CategoryList}}
- {{CategoryJunction}}
- {{CategoryIntersection}}

Parameter stable is unsupported by two of these, and called stablepages by {{CategoryJunction}} with default always include (no exception for Wikijunior). Parameter showerrors is called errors. Parameter full always defaults to false.
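For illustration, a call combining several of the documented parameters might look like the following (the category names here are invented, not real Wikibooks categories):

```wikitext
<!-- Hypothetical example: list up to 20 sighted mainspace pages
     in "Category A" that are not in "Category B", numbered. -->
{{DPL
|cat1=Category A
|not1=Category B
|namespace=main
|stable=only
|count=20
|mode=ordered
}}
```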
http://en.wikibooks.org/wiki/Template:DPL/doc
In this case, a customer sent me a scene where Undo didn't work. As soon as you opened the scene, nothing you did could be undone, and Undo wouldn't start working again until you loaded some other scene.

I thought it might be something in the scene, so I deleted practically everything under the Scene_Root, but Undo still didn't work. Then I noticed that the .scn file was still pretty big (18MB), so I poked around a bit more in the scene and found thousands and thousands of materials (14K of materials, to be exact). There were over 13 thousand AutoCAD_Color_Index materials.

Just opening the Material Manager took minutes, and deleting 13 thousand materials wasn't as easy as you might think. I first tried the DeleteAllUnusedMaterials command, but that took so long that I figured Softimage was hung and I killed it. In the end, I deleted some manually (a hundred at a time) and then the rest with the Material Manager > Delete Unused Materials. But I could also have done it like this:

    import time
    start = time.clock()

    import win32com.client
    oObj = win32com.client.Dispatch("XSI.Collection")
    oObj.Items = 'Sources.Materials.DefaultLib.AutoCAD_Color_Index_*'
    print oObj.count
    # 13782

    # Turn off undo so the deletions don't fill up the undo stack
    Application.SetValue("preferences.General.undo", 0, "")
    for mat in oObj:
        Application.DeleteObj(mat)
    Application.SetValue("preferences.General.undo", 50, "")

    end = time.clock()
    Application.LogMessage(round(end - start, 3))
    # INFO : 264.117

Hi Stephen,

It is funny, but I ended up with the same scene myself yesterday; it had 6914 materials and Undo was greyed out. I used your script; the only thing I added was a check before deleting, to make sure that the material was not used. So I added a line for that before the deletion:

    if mat.Model:
        continue

Thanks for the scripts!
https://xsisupport.com/2012/05/29/the-case-of-the-scene-that-wouldnt-undo/
Introduction

Matplotlib is one of the most widely used data visualization libraries in Python. Typically, when visualizing more than one variable, you'll want to add a legend to the plot, explaining what each variable represents. In this article, we'll take a look at how to add a legend to a Matplotlib plot.

Creating a Plot

Let's first create a simple plot with two variables:

    import matplotlib.pyplot as plt
    import numpy as np

    fig, ax = plt.subplots()

    x = np.arange(0, 10, 0.1)
    y = np.sin(x)
    z = np.cos(x)

    ax.plot(y, color='blue')
    ax.plot(z, color='black')

    plt.show()

Here, we've plotted a sine function, starting at 0 and ending at 10 with a step of 0.1, as well as a cosine function on the same interval with the same step. Running this code yields:

Now, it would be very useful to label these and add a legend so that someone who didn't write this code can more easily discern which is which.

Add Legend to a Figure in Matplotlib

Let's add a legend to this plot. Firstly, we'll want to label these variables, so that we can refer to those labels in the legend. Then, we can simply call legend() on the ax object for the legend to be added:

    import matplotlib.pyplot as plt
    import numpy as np

    fig, ax = plt.subplots()

    x = np.arange(0, 10, 0.1)
    y = np.sin(x)
    z = np.cos(x)

    ax.plot(y, color='blue', label='Sine wave')
    ax.plot(z, color='black', label='Cosine wave')

    leg = ax.legend()
    plt.show()

Now, if we run the code, the plot will have a legend:

Notice how the legend was automatically placed in the only free space where the waves won't run over it.

Customize Legend in Matplotlib

The legend is added, but it's a little bit cluttered. Let's remove the border around it and move it to another location:

    leg = ax.legend(loc='upper right', frameon=False)
    plt.show()

This results in:

Here, we've used the loc argument to specify that we'd like to put the legend in the top right corner.
Other values that are accepted are upper left, lower left, upper right, lower right, upper center, lower center, center left and center right. Additionally, you can use center to put it in the dead center, or best to place the legend at the "best" free spot so that it doesn't overlap with any of the other elements. By default, best is selected.

Add Legend Outside of Axes

Sometimes, it's tricky to place the legend within the border box of a plot. Perhaps there are many elements going on and the entire box is filled with important data. In such cases, you can place the legend outside of the axes, and away from the elements that constitute it. This is done via the bbox_to_anchor argument, which specifies where we want to anchor the legend:

    leg = ax.legend(loc='center', bbox_to_anchor=(0.5, -0.10),
                    shadow=False, ncol=2)
    plt.show()

This results in:

The bbox_to_anchor argument accepts a few arguments itself. Firstly, it accepts a tuple, which allows up to 4 elements. Here, we can specify the x, y, width and height of the legend. We've only set the x and y values, to displace the legend below the axes (y is -0.10) and put it 0.5 from the left side (0 being the left-hand side of the box and 1 the right-hand side). By tweaking these, you can set the legend at any place, within or outside of the box.

Then, we've set shadow to False. This is used to specify whether we want a small shadow rendered below the legend or not. Finally, we've set the ncol argument to 2. This specifies the number of columns the legend entries are laid out in; since we have two labels and want them side by side in a single row, we've set it to 2. If we changed this argument to 1, they'd be placed one above the other:

Note: The bbox_to_anchor argument is used alongside the loc argument. The loc argument will place the legend relative to the bbox_to_anchor. In our case, we've put it in the center of the new, displaced location of the border box.

Conclusion

In this tutorial, we've gone over how to add a legend to your Matplotlib plots.
Firstly, we've let Matplotlib figure out where the legend should be located, after which we've used the bbox_to_anchor argument to specify our own location, outside of the axes.
https://stackabuse.com/add-a-legend-to-figure-in-matplotlib/
Hangman game by using Dalma

Dalma is a workflow engine that lets you write conversational programs quickly. In my last blog about Dalma, I showed a little code snippet that explains the concept, but I wanted to have real working code. So today, I added a little hangman game as a sample to Dalma. It's a daemon that handles hangman games with multiple users concurrently, via e-mail. The entry point looks like this:

    public class Main {
        public static void main(String[] args) throws Exception {
            ...
            // initialize the directory to which we store data
            File root = new File("hangman-data");

            // set up an engine.
            // we'll create one e-mail endpoint from the command-line.
            final Engine engine = EngineFactory.newEngine(root, new ThreadPoolExecutor(1, true));
            final EmailEndPoint eep = (EmailEndPoint) engine.addEndPoint("email", args[0]);
            eep.setNewMailHandler(new NewMailHandler() {
                /**
                 * This method is invoked every time this endpoint receives a new e-mail.
                 * Start a new game.
                 */
                public void onNewMail(MimeMessage mail) throws Exception {
                    System.out.println("Starting a new game for " + mail.getFrom()[0]);
                    engine.createConversation(new HangmanWorkflow(eep, mail));
                }
            });

            // start an engine
            engine.start();
            System.out.println("engine started and ready for action");
        }
    }

Basically it sets up an engine with e-mail connectivity, and whenever a "new" e-mail is received, it creates a new HangmanWorkflow instance and starts it as a conversation. (E-mails that are replies to existing conversations will be delivered to their waitForReply method.) The HangmanWorkflow class looks like this:

    public class HangmanWorkflow implements Runnable, Serializable {
        /**
         * {@link EndPoint} that we are talking to.
         */
        private final EmailEndPoint ep;

        private final MimeMessage msg; // the first message

        public HangmanWorkflow(EmailEndPoint ep, MimeMessage msg) {
            this.ep = ep;
            this.msg = msg;
        }

        public void run() {
            // the answer
            String word = WordList.getRandomWord();

            int retry = 6; // allow 6 guesses

            // the characters the user chose
            // true means selected
            boolean[] opened = new boolean[26];

            MimeMessage mail = msg; // last e-mail received

            while (retry > 0) {
                // send the hint
                mail = (MimeMessage) mail.reply(false);
                mail.setText(
                    "Word: " + maskWord(word, opened) + "\n\n" +
                    "You Chose: " + maskWord("abcdefghijklmnopqrstuvwxyz", opened) + "\n\n" +
                    retry + " more guesses\n");
                mail = ep.waitForReply(mail);
                System.out.println("Received a reply from " + mail.getFrom()[0]);

                // pick up the char the user chose
                char ch = getSelectedChar(mail.getContent());
                if (ch == 0) continue;

                if (word.indexOf(ch) < 0) {
                    // bzzzt!
                    retry--;
                }
                opened[ch - 'a'] = true;

                if (maskWord(word, opened).equals(word)) {
                    // bingo!
                    mail = (MimeMessage) mail.reply(false);
                    mail.setText("Bingo! The word was\n\n   " + word);
                    ep.send(mail);
                    return;
                }
            }

            MimeMessage reply = (MimeMessage) mail.reply(false);
            reply.setText("Bzzzt! The word was\n\n   " + word);
            ep.send(reply);
        }
    }

As you see, all the conversational state is in the local variables, and there's no code that handles persistence explicitly (other than the "implements Serializable"). This code is run as a conversation, meaning that whenever the following line is invoked:

    mail = ep.waitForReply(mail);

the state is persisted to disk, and the Java thread that runs it is reused to run other conversations. The program is not event-driven at all, and I'm hoping that this brings the same productivity gain that the StAX API brought to Java compared to SAX. This sample does bytecode instrumentation as a part of the build process to make this magic happen.

I deployed it at hangman at kohsuke dot org, so you can write an e-mail to this address to start a new hangman game. You can write multiple e-mails to run multiple games in parallel (each on-going game has one HangmanWorkflow instance.) See this page for how to play the game.

To show that it's actually persisting state, the daemon is killed every half an hour and restarted half an hour later. You'll see this as a delay in the e-mail reply from daemons, but other than that, you'll see that the JVM shutdown is transparent to the game you are running.

See this page for more about this sample, and this sample is a part of the distribution, so you can run it locally if you want.
http://weblogs.java.net/blog/2005/10/22/hangman-game-using-dalma
OpenTelemetry: A Quarkus Superheroes Demo of Observability

This demo illustrates how to capture telemetry data with OpenTelemetry between distributed services and view interactions between microservices in a system.

Are you building microservices? Do you struggle with observability and with capturing telemetry data between distributed services? This article shows how to quickly and easily introduce OpenTelemetry into a distributed system built on Java with Quarkus. This combination allows you to visualize the interactions between all the microservices within an overall system. The article introduces the official Quarkus sample application, Quarkus Superheroes, deploys it on the free Developer Sandbox for Red Hat OpenShift, and demonstrates how to collect and visualize telemetry data in order to observe microservices' behavior.

What Is OpenTelemetry?

The OpenTelemetry website states that:

    OpenTelemetry is a collection of tools, APIs, and SDKs. Use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help you analyze your software's performance and behavior.

OpenTelemetry was created by merging the popular OpenTracing and OpenCensus projects. It is a standard that integrates with many open source and commercial products written in many programming languages. Implementations of OpenTelemetry are in varying stages of maturity. At its core, OpenTelemetry contains the Collector, a vendor-agnostic way to receive, process, and export telemetry data. Figure 1 displays the Collector's high-level architecture.

For more information about observability and OpenTelemetry, check out the excellent article "Observability in 2022: Why it matters and how OpenTelemetry can help" by Ben Evans. In addition, Daniel Oh's article about integrating OpenTelemetry into Quarkus applications running on Knative is a great read.
Now let's discuss how OpenTelemetry can help you observe the Quarkus Superheroes application.

Prerequisites for the Quarkus Superheroes Application

You can easily deploy the Quarkus Superheroes application on any Kubernetes instance. Here we deploy the application on the Developer Sandbox for Red Hat OpenShift because it is easy to obtain a free account and the environment requires minimal setup. But you can adapt the instructions in this article to other Kubernetes environments.

To follow along on your own with the steps in this demonstration, you will need:

- A free Red Hat account to access the Developer Sandbox. Signing up does not require a credit card.
- The Red Hat OpenShift oc command-line interface (CLI).
- A Java development environment. In this article, we will use the Java 17 version of the application, but any of the other three versions (natively-compiled Java 11, JVM Java 11, or natively-compiled Java 17) would work the same.

The Quarkus Superheroes Sample Application

The Quarkus Superheroes application consists of several microservices which co-exist to form an extensive system. Some microservices communicate synchronously via REST. Others are event-driven, producing and consuming events to and from Apache Kafka. Some microservices are reactive, whereas others are traditional. All the microservices produce metrics consumed by Prometheus and export tracing information to OpenTelemetry.

The source code for the application is on GitHub under an Apache 2.0 license. Figure 2 shows the overall architecture of this application. Detailed information about the application and its architecture can be found on the quarkus.io blog.

One of the main requirements when building the application was that it should be simple to deploy on Kubernetes. A Prometheus instance scrapes metrics from all of the application services. Additionally, all of the services export telemetry data to the OpenTelemetry Collector.
The Collector, in turn, exports telemetry data to a Jaeger instance, where the data can be analyzed and visualized. The gRPC protocol is used for communication between the applications and the OpenTelemetry Collector, and between the OpenTelemetry Collector and Jaeger.

Deploying the Application on the OpenShift Sandbox

The OpenShift Sandbox provides a private OpenShift environment, free for 30 days, in a shared, multi-tenant OpenShift cluster preconfigured with a set of developer tools. This private environment includes two projects (namespaces) and a resource quota of 7GB RAM and 15GB storage. The application's development and stage phases can be emulated using the two namespaces. All your Pods automatically scale to 0 after 12 hours. The following subsections set you up to use the Sandbox.

Logging into the Sandbox

You can access your Developer Sandbox with your Red Hat account. Follow these instructions to log in. Don't worry if you haven't created a Red Hat account yet: the instructions will guide you through creating and verifying a new account. Follow only the six steps in the "Get access to the Developer Sandbox" section of the instructions.

Connecting your Local Machine to the Sandbox

Follow these instructions to download the OpenShift CLI and run oc login with the token from your sandbox. Your terminal should then be in the <your-username>-dev project.

If you already have a Developer Sandbox account and have existing workloads in your project, you might need to delete those before deploying the Quarkus Superheroes application. The Developer Sandbox limits the resources each user can deploy at a single time.

Deploying the Quarkus Superheroes Application

The deploy/k8s directory in the root of the application's GitHub repository contains Kubernetes descriptors for each of the four versions of the application: JVM 11, JVM 17, natively compiled with Java 11, and natively compiled with Java 17.
If you'd like, run git clone to download the code from the Quarkus Superheroes GitHub repository. However, cloning is not necessary because you can apply Kubernetes resources directly from remote locations.

Perform the following steps in your terminal to deploy the Java 17 version of the application container images and all the backing services. Wait for each step to complete before proceeding to the next one.

Deploy the applications by executing:

oc apply -f

Deploy the monitoring stack by executing:

oc apply -f

That's it! Deploying the Superheroes is super simple! Once everything deploys, your browser should look something like Figure 3.

The system as deployed is not considered production-ready. The deployed resources (databases, Prometheus instance, OpenTelemetry Collector instance, Jaeger instance, Kafka broker, and schema registry) are not highly available and do not use Kubernetes operators for management or monitoring. They also use ephemeral storage.

Interacting with the Application

Open the event statistics user interface (UI) by clicking the icon in the upper right corner of the event-statistics application, shown in Figure 4. Once open, you should see the event statistics UI shown in Figure 5.

Similarly, open the Superheroes UI by clicking the icon in the upper right corner of the ui-super-heroes application, shown in Figure 6. Once open, you should see the Superheroes UI, shown in Figure 7. The following are clickable areas highlighted in green in Figure 7:

- Expand or collapse the list of a Hero's or Villain's powers
- Randomly select a new Hero and Villain for battle
- Perform a battle

Now you can perform a few battles with the same Hero and Villain and with different Heroes and Villains. Once you have completed a few battles, the table below the fighters displays a list of battles. If you switch back to the event statistics UI, the slider should have moved (or stayed in the middle if there were equal wins).
There is also a list of the top ten winners and the number of wins for each team (see the example in Figure 8).

Analyzing Telemetry Data

After performing a few battles, let's analyze the telemetry data. Open the Jaeger UI by clicking the icon in the upper right corner of the jaeger application, shown in Figure 9. Once open, you should see the Jaeger UI.

Analyzing Requests for New Fighters

First, let's analyze the traces generated when you requested new fighters. When you click the New Fighters button in the Superheroes UI, the browser makes an HTTP request to the /api/fights/randomfighters endpoint within the rest-fights application. These services and operations should already be available in the Jaeger UI.

Next, in the Jaeger UI, select rest-fights for the Service and /api/fights/randomfighters for the Operation (see Figure 10). Then click Find Traces. A list of traces should appear on the right-hand side of the Jaeger UI, as shown in Figure 11.

A trace consists of a series of spans. Each span is a time interval representing a unit of work. Spans can have parent/child relationships and form a hierarchy. Spans can also indicate work running in parallel.

The bottom of Figure 11 shows that each trace contains 14 total spans: 6 spans in the rest-fights application, 4 spans in the rest-heroes application, and 4 spans in the rest-villains application. Each trace also provides the total round-trip time of the request into the /api/fights/randomfighters endpoint within the rest-fights application and the total time spent within each unit of work.

Clicking on one of the traces brings you to the trace timeline screen in Figure 12. Notice the outgoing HTTP calls from the rest-fights application to the rest-heroes and rest-villains applications, which are made in parallel. The rest-heroes and rest-villains timelines even trace down to the database.
For example, you can display the executed database query by clicking the second rest-heroes SELECT Hero or rest-villains SELECT villains_database.Villain span and expanding the tags (as shown in Figure 13).

Each application in the system manages its own set of traces and spans. The rest-fights application sends trace context information on the HTTP request so that the rest-heroes and rest-villains applications can read it. This way, the complete trace information can be accurately correlated when the rest-fights, rest-heroes, and rest-villains applications export telemetry data to the OpenTelemetry Collector. The Collector then correlates and aggregates all the trace and span information and sends everything to Jaeger. The Quarkus OpenTelemetry extension (integrated into all the applications in the system) handles the heavy lifting to make this work.

Analyzing Fights

Next, let's analyze the traces generated when performing a fight:

- When you click the Fight button in the Superheroes UI, the browser makes an HTTP request to the /api/fights endpoint within the rest-fights application. These services and operations should already be available in the Jaeger UI.
- Return to the main Jaeger UI by clicking JAEGER UI in the header at the top of the page.
- Once you're back in the main Jaeger UI, select rest-fights for the Service and /api/fights for the Operation (see Figure 14).
- Then click Find Traces.

As before, a list of traces should appear on the right-hand side of the Jaeger UI, as shown in Figure 15. The display shows that each trace contains 8 total spans: 4 spans in the rest-fights application and 4 spans in the event-statistics application. Each trace provides the total round-trip time of the request into the /api/fights endpoint within the rest-fights application and the total time spent within each unit of work. Clicking on one of the traces takes you to the trace timeline screen displayed in Figure 16.
Notice the rest-fights fights send span and the event-statistics fights receive child span. These spans are where the rest-fights application places a message on an Apache Kafka topic and where the event-statistics application consumes the message. Trace context information is sent along with the message on the Kafka topic from the rest-fights application and subsequently read by the event-statistics application when it consumes the message. This way, OpenTelemetry accurately correlates the trace context information when the rest-fights and event-statistics applications export telemetry data to the OpenTelemetry Collector. The Collector then correlates and aggregates all the trace and span information and sends everything to Jaeger.

Similar to the previous section, if you click on a span and expand the tags, you can see additional information about each span. Again, the Quarkus OpenTelemetry extension (integrated into all the applications in the system) handles the heavy lifting to make this work.

Quarkus and OpenTelemetry Take You Deep Inside Microservices

Applications today are becoming more and more complex. Typically, multiple applications work together in a distributed fashion to form a usable system. What happens when things don't quite work? What if the system slows down? How do you perform root cause analysis across distributed applications to determine what's going on?

Observability is paramount in these types of systems. The ability to look at distributed trace information and correlate traces and spans, log data, and metrics is invaluable. This article demonstrated valuable telemetry data and the tools to collect it.

Want to learn more about the Quarkus Superheroes? Check out these awesome resources:

- Quarkus Superheroes to the Rescue!
- Quarkus Superheroes GitHub repository
- Quarkus Superheroes: Managed services save the day

Want to know more about observability and OpenTelemetry?
Check out these great articles:

- Observability in 2022: Why it matters and how OpenTelemetry can help
- Distributed tracing with OpenTelemetry, Knative, and Quarkus
- Quarkus OpenTelemetry guide

Finally, if you're new to Quarkus, take a look at some of these resources:

Published at DZone with permission of Eric Deandrea. See the original article here.

Opinions expressed by DZone contributors are their own.
Hi Sujit,

Your posts are great! I was also wondering if you know whether arrays of objects can be passed the same way as described in your post? For example, if you have a datagrid of people and allow users to update their info, can you send the list from Flex back to your RemoteServiceHandler to receive it as Java objects, like this:

public void update(List newInfoList){
    //update database here…
}

I tried something similar but I can only receive a List of AS objects on the Java side… 😦 I'll really appreciate any input or thoughts you have on the matter! Thanks!!

Hi,

You can definitely send an array of objects. I cannot understand what you meant when you said that you are getting a List of AS objects. 😦 I suggest you try casting the objects in the Java method, or try generics, like update(List<Person> newInfoList). Please let me know if you are still facing problems implementing this. 🙂

Hi Sujit,

This might be kind of a dumb question… Is there any way to set the classpath within Flex 3 Builder? I haven't found an option for it or any reference on the internet. Also, when you said "recognizable by BlazeDS", is there anything special besides setting the environment variable classpath? Thanks!

Sorry, please disregard my previous post… Your code works like a charm! I found the issue I was previously experiencing. I was only displaying a few of the data fields of the ActionScript class in the datagrid. For example, I only want users to edit their name but not their id, so I only displayed the name column. However, when I passed the datagrid dataProvider back to the RemoteServiceHandler it didn't match the Person class anymore and gave a class cast exception. I suspect it's because the dataProvider only contains the name field, so it does not match the Person class. I will try to find a solution around this. If you have any thoughts around this I'll appreciate it, too. Thanks so much for your help!
Check out the description provided for the dataProvider property of the DataGrid in the Flex language reference. That might help you solve all the problems you are facing 🙂

Hi Sujit,

One issue I foresee with this implementation is that the Java instance variables are public, and in turn the reflected ActionScript object then has public instance variables as well. I would like to know if there is a way to ensure they remain private. My very limited understanding is that the reflection happens on the getters, so for instance, Java Person.getName = ActionScript Person.name?? Is that correct? Do you know if the code below would achieve my objective of keeping the instance variables private?

package
{
    [RemoteClass(alias="com.adobe.objects.Person")]
    public class Person
    {
        private var _name:String;

        public function Person() {}

        public function set name(name:String):void
        {
            this._name = name;
        }

        public function get name():String
        {
            return this._name;
        }
    }
}

Also, would this work at both ends of the wire? i.e., not just Java to AS, but then back AS to Java?

Regards,
Paul

Hi Paul,

Your understanding regarding reflection is correct 🙂 You can have your variables as private and just expose the getters and setters in both the AS and Java classes. Just let me know if you have problems implementing this 🙂

Hi Sujit,

Before I start, great great articles. It is a great resource. I have been doing Flex/ActionScript and BlazeDS for about 30 hours or so… very very newbie, and your articles really helped me along. I am writing a chat application (very basic but a good way to learn) and am trying to populate a list of all online users (J2EE backend) into an mx:DataGrid. I am starting with this article to learn how to move data from Java to Flex, but the issue I am having is the person name is blank… and I am quite sure the Java code is right.
Have you (or anyone) experienced this? Many thanks, Eric

Hi Sujit,

Your tutorials are very easy to follow, but I ran into a problem when I tried this. When I make the remote call, I get a null result back. The connectivity is working, though, because I wrote another public function in RemoteServiceHandler that returned a String, and it worked. Seems like a problem only while returning data of type Person. Any ideas?

Hi Eric,

Where exactly is the person name blank? Is it in the datagrid? If it is in the datagrid, please make sure the dataField property is set as required. If you can explain your problem with the code snippets, it will be easier to help you out 🙂 If you don't want to post your code on my blog, please feel free to mail me 🙂 My mail id is sujitreddy.g@gmail.com

Hi Shankar,

It should be working when you return the Person object also… can you please share the code snippets where you are accessing the Person object on the client?

Sujit,

Thanks much for your prompt response. I took your surety for granted, combed my code, and it didn't take too long to be 'enlightened' that all my public variables in the Java class were static. That will do it, won't it? Thanks again. Now I know where to look if I am stuck.

Hi Shankar,

Good to know that you got things working 🙂 You can definitely get back to me if you are stuck. 🙂 I will try to solve the problem by myself or by taking help from our experts here 🙂

I would like to know, if the Person object had a nested object, say Address, would that get serialized while being sent back to Flash?

class Person {
    Address address;
    String name;
    Date birthDate;
    …..
}

On the same token, can we have Address as a static inner class of Person, and would that be serialized properly?

Hi SriJ,

Yes, the Address object will also be serialized. Any variable which is public will be serialized.
If you have mapped the Address class also to some class in AS, you will be accessing the Address object as an object of the type of the mapped class; otherwise, you will be accessing the Address as an object of type Object in the Flex application. Hope this helps. 🙂

How about mapping Spring AOP proxy beans? How could they be mapped to AS objects? When modifying the RemoteClass alias to an interface, the mapping does not work. Is mapping to an interface (that includes the setter and getter methods) possible?

I also meant to say thank you so much for your posts. I have found them extremely helpful in getting up to speed on Flex development.

Sujit, do you know if there is any open-source utility out there that automates conversion of Java classes to ActionScript classes?

Hi Joanne,

I haven't tried mapping to Spring AOP proxy beans. Mapping to an interface is something which is possible. I tried this and it works. If you can provide more details on what problem you are facing, we can help you out.

Hi Paul,

Try out the URLs below. Looks like they have created applications for converting Java classes to AS classes. Hope this helps.

Sujit,

The code provided in the above example gives me the following error in Flex Builder 3 when trying to print out either company or dateOfBirth:

1119: Access of possibly undefined property company through a reference with static type Person.

Any idea why the Builder cannot resolve either of these attributes, but name and id resolve fine?

Thanks,
Ted

Hi Ted,

The Person.as code which I provided doesn't have a variable named company declared. Please add that to your Person.as file and then try accessing it. dateOfBirth should be accessible; please try to clean your project, or share the code so that we can figure out if we are missing something 🙂 Hope this helps.

Thanks for the example. Is there a way to send the Person object from Flex to Java? I've tried reversing the code, but when it gets sent, Flex nulls my object.
Here is an example:

DTOTest.as:

package testingonly
{
    [Bindable]
    [RemoteClass(alias="testingonly.DTOTest")]
    public dynamic class DTOTest
    {
        public var firstName:String;
        public var lastName:String;

        public function DTOTest(fname:String, lname:String)
        {
            firstName = fname;
            lastName = lname;
        }
    }
}

DTOTest.java:

package testingonly;

import java.io.Serializable;

public class DTOTest implements Serializable {
    private static final long serialVersionUID = 1L;
    public String firstName;
    public String lastName;

    public DTOTest(){
    }

    public DTOTest(String fname, String lname){
        firstName = fname;
        lastName = lname;
    }
}

TestingOnly.java:

package testingonly;

import java.util.*;
import testingonly.DTOTest;

public class TestingOnly {
    static DTOTest[] dtot;

    public DTOTest[] sendTokenList(DTOTest d) {
        dtot[0] = d;
        return dtot;
    }
}

TestingOnly.mxml:

Thanks for these great tutorials!

Sorry, my last post got cut off. TestingOnly.mxml: I think it is the xml reference that is cutting this off. Here it is with the xml piece removed:

TestingOnly.mxml:

Argh! Okay, only the CDATA section:

import mx.rpc.events.ResultEvent;
import testingonly.DTOTest;

function sendTokenList():void {
    var dtot:DTOTest = new DTOTest();
    dtot.firstName = "John";
    dtot.lastName = "Smith";
    testCommunication.sendTokenList(dtot);
}

function displayResult(event:ResultEvent):void {
    var names:Array = event.result as Array;
    var person:DTOTest = DTOTest(names[0]);
    myBtn.label = person.firstName + " " + person.lastName;
}

Hi Shawn,

Yes, you can send objects from Flex to Java. Can you please mail me the source files, so that I can see what is going wrong 🙂 My mail id: sujitreddy.g@gmail.com 🙂

Hi Shawn,

A couple of things went wrong. Firstly, objects which are subject to serialization by BlazeDS are expected to have a default no-argument constructor. Your DTOTest class did not have a default no-argument constructor.
Second thing: in your TestingOnly class, you were using the array before initializing it, and due to this there was an exception thrown. Thirdly, the DTOTest.as didn't have a default no-argument constructor. As AS3 will not allow you to overload constructors, you can make the constructor arguments optional.

THANKS! So, just to recap for other readers, on the Java side, the class must have a no-argument constructor such as:

// required for BlazeDS
public MyClass(){
    // do nothing
}

as well as the constructor that gets values passed to it, such as:

// constructor used
public MyClass(VariableType variable) {
    // this is where the logic goes
}

On the ActionScript side of things it should read:

public function MyClass(variable:VariableType = null) {
    // this is where the logic goes
}

By assigning the arguments a default value of null, you are basically creating a constructor that works both as a no-argument constructor (since the defaults are applied if no values are sent) and as a constructor that receives values passed to it (the null default is overwritten by the passed value). Finally, to send private values from AS to Java you have to use the get/set methods as found in

Am I understanding correctly? Thanks SO much for this example!
Shawn

Hi Sujit,

I have another question. I have an editable data grid which is filled with values from objects retrieved from the server through BlazeDS, and these values are the data provider for my DataGrid. If another client logs in and is looking at the same data set, and a user modifies the values in the DataGrid, I will modify and send the data to the server objects for updating. But is there a way I can make this happen dynamically, i.e., if one user edits the DataGrid and clicks update, will it automatically update for another user looking at the same DataGrid?

Regards,
-Chandu
This is something which is provided by the DataManagement service of LCDS. You can try creating a custom adapter of BlazeDS and push the data to all the clients when the data is modified. Hope this helps. Sujit – Great article and very helpful! I got your example working but was wondering if you had a code snippet that uses remoteObject from within an AS class as opposed to in the mxml directly. And, if you could, include how to send the result (maybe via bindable) to an mxml component? Thanks so much -Brian Sujit – Great article and very useful! I got your example working but was wondering if you have a code snippet that shows how to use remoteObject in an AS class instead of from the .mxml (for learning purposes)? Also, can you include how best to return the result from the AS class to the mxml component (maybe via bindable). Thanks, Brian PS – Sorry if this double posts. hey Sujit, I have a editable datagrid filled with Array of Objects from the server, am using blaze ds to get data from server through remote service(BlazeDS).. when i try to send the updated data back to server, am receiving the following error: [BlazeDS] Cannot create class of type ‘CardEconVar’. flex.messaging.MessageException: Cannot create class of type ‘CardEconVar’. Type ‘CardEconVar’ not found. 
at flex.messaging.util.ClassUtil.createClass(ClassUtil.java:65)
at flex.messaging.io.AbstractProxy.getClassFromClassName(AbstractProxy.java:72)
at flex.messaging.io.amf.Amf3Input.readScriptObject(Amf3Input.java:430)
ArrayCollection.readExternal(ArrayCollection.java:87)
at flex.messaging.io.amf.Amf3Input.readExternalizable(Amf3Input.java:528)
at flex.messaging.io.amf.Amf3Input.readScriptObject(Amf3Input.java:455)
amf.Amf3Input.readScriptObject(Amf3Input.java:473)
(:132)
at flex.messaging.io.amf.Amf0Input.readArrayValue(Amf0Input.java:323)
at flex.messaging.io.amf.Amf0Input.readObjectValue(Amf0Input.java:136)
at flex.messaging.io.amf.Amf0Input.readObject(Amf0Input.java:92)
at flex.messaging.io.amf.AmfMessageDeserializer.readObject(AmfMessageDeserializer.java:217)
at flex.messaging.io.amf.AmfMessageDeserializer.readBody(AmfMessageDeserializer.java:196)
at flex.messaging.io.amf.AmfMessageDeserializer.readMessage(AmfMessageDeserializer.java:120)
at flex.messaging.endpoints.amf.SerializationFilter.invoke(SerializationFilter.java:114)
at flex.messaging.endpoints.BaseHTTPEndpoint.service(BaseHTTPEndpoint.java:274)
at flex.messaging.MessageBrokerServlet.service

your help would be greatly appreciated.

Hi Brian,

I was busy with projects this week and so could not reply sooner 🙂

RemoteObject code in AS3:

var searchRemoteObject:RemoteObject = new RemoteObject();
searchRemoteObject.destination = "destinationName";
searchRemoteObject.showBusyCursor = true;
searchRemoteObject.addEventListener(ResultEvent.RESULT, searchEmployeeResultHandler);
searchRemoteObject.addEventListener(FaultEvent.FAULT, faultHandler);
searchRemoteObject.searchForEmployees(searchString); // this is the method call

For returning the result from the AS class, I would throw a custom event and include the result in the event object.
Sample:

private function searchEmployeeResultHandler(event:ResultEvent):void
{
    searchRemoteObject.removeEventListener(ResultEvent.RESULT, searchEmployeeResultHandler);
    var employeeConnections:ArrayCollection = event.result as ArrayCollection;
    var helperEvent:ConnectionsHelperEvent = new ConnectionsHelperEvent(ConnectionsHelperEvent.SEARCH_EMPLOYEE_EVENT);
    helperEvent.employeeConnections = employeeConnections;
    dispatchEvent(helperEvent);
}

Hope this helps. 🙂

Hello Sujit,

Thanks for the article; I am stuck at one place, though. My Java class has an ArrayList of objects of another Java class. For example (sorry in advance for the long post, however I wanted to make it clear; I think I am missing something here but I have no idea what it is… please bear with me):

Java classes:

public class UserAccountDTO {
    private String loginID;
    private String name;
    private ArrayList assignedFunction;
    ………
    // getter and setter methods
    // no-args constructor

    // constructor to set all these values from outside
    public UserAccountDTO(String loginID, String name, ArrayList assignedFunction){
        this.setLoginID(loginID);
        this.setName(name);
        this.setAssignedFunction(assignedFunction);
    }
}

public class AssignedFunctionDTO {
    private int function_id;
    private String function_name;
    // getter and setter methods
    // no-args constructor
    // constructor to set all these values from outside
}

Then the method which the RemoteObject service calls:

public loginUtil(){
    // fetches user records from database
    // creates an ArrayList of AssignedFunctionDTO by
    assignedFunction.add(new AssignedFunctionDTO(functionId, functionName));
    // returns a UserAccountDTO object by
    return new UserAccountDTO(loginID, name, assignedFunction);
}

I am using BlazeDS for remoting and Cairngorm for structure, and I created two DTO classes in Flex like this:

package
{
    [RemoteClass(alias="myPackage.UserAccountDTO")]
    public class UserAccountDTO {
        public var loginID:String;
        public var name:String;
        public var assignedFunctaion:Object // I tried using
assignedFunctionDTO/ArrayCollection too
    }
}

package
{
    [RemoteClass(alias="myPackage.AssignedFunctionDTO")]
    public class AssignedFunctionDTO {
        public var function_id:int;
        public var function_name:String;
    }
}

In my command class I am not getting the value of AssignedFunctionDTO; it's null. I am doing:

modelLocator.userAccountDTO = event.result;

This gives me everything other than AssignedFunctionDTO, I don't know why. If I pass a simple ArrayList I get its value; however, I do not get the value of an ArrayList which contains another object inside it… please help.

Hi Meena,

Can you please send the Java files and Flex files required to reproduce this issue to my mail id (sujitreddy.g@gmail.com)? From the code you pasted in your message, the method loginUtil() does not have a return type declared. It will be easy to track down the issue if you can share the files, so that I can deploy them and reproduce the issue. We can definitely send an ArrayList with objects of any class type.

Hi Sujit,

We're having the same problem as Meena. We'd just like to know if there was any conclusion or explanation for that behavior. We are successfully sending an ArrayList of DTO objects to the Flex client, but when we try to send that same ArrayList back to the Java service we are getting the following exception:

ClassCastException : flex.messaging.io.amf.ASObject cannot be cast to ourpackage.OurDTO

Any help much appreciated. Chz

Hi Sujit,

I also have the same problem. ArrayLists of Java objects are null at the Flex client using BlazeDS. If you have found the solution could you please share it?

Wolman, Prakash: convert your ArrayList to Array. pp
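The recurring null-list and ClassCastException reports above usually come down to the deserializer not knowing which concrete Java type the list elements should become. A common remedy (a hedged sketch; the class and field names here are illustrative, not taken from the thread) is to declare the collection with an explicit element type, and to give every DTO a public no-argument constructor and bean accessors so each element can be instantiated and populated:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Illustrative element DTO: concrete class, public no-arg constructor,
// bean-style accessors, so it can be created and filled reflectively.
class FunctionDTO implements Serializable {
    private int id;
    private String name;

    public FunctionDTO() {}              // required by the deserializer

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class UserDTO implements Serializable {
    private String loginId;
    // Declaring the element type (rather than a raw ArrayList) makes the
    // intended element mapping explicit on the Java side.
    private List<FunctionDTO> assignedFunctions = new ArrayList<FunctionDTO>();

    public UserDTO() {}

    public String getLoginId() { return loginId; }
    public void setLoginId(String loginId) { this.loginId = loginId; }
    public List<FunctionDTO> getAssignedFunctions() { return assignedFunctions; }
    public void setAssignedFunctions(List<FunctionDTO> fs) { this.assignedFunctions = fs; }
}

public class TypedDtoDemo {
    public static void main(String[] args) {
        FunctionDTO f = new FunctionDTO();
        f.setId(1);
        f.setName("admin");

        UserDTO u = new UserDTO();
        u.setLoginId("meena");
        u.getAssignedFunctions().add(f);

        System.out.println(u.getAssignedFunctions().get(0).getName()); // prints "admin"
    }
}
```

On the ActionScript side, the element class still needs its own [RemoteClass] alias; without that mapping, elements arrive as generic objects (or as ASObject when sent back to Java), which is exactly the cast failure reported above.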
Something like this : //AnneSearchVO is an ActionScript class with annotations : //[Bindable] //[RemoteClass(alias=”net.AnneSearchVO”)] //and the class AnagraficaNeonati::getListPatient(AnneSearchVO anneSearchVO) var anneSearchVO:AnneSearchVO = new AnneSearchVO(); var codiceUtente:Number = new Number(20130); anneSearchVO.setCodiceUtente(codiceUtente); AnagraficaNeonati.getListPatient(anneSearchVO); I get this Exception : UndeclaredThrowableException. Any help much appreciated. Hi Carlo, Looks like you are throwing some exception from the Java class which you are invoking. It will be helpful to have a glance at the Java class. Please try to share the Java class code with us. Feel free to mail me your code if you are not comfortable posting your code on the blog page 🙂 Hi Wolman, Please make sure you are mapping the objects in the ArrayCollection you are passing and the Java objects in the ArrayList which the Java method is expecting. Hope this helps. Sujit – I have an app that calls a remote service, gets the objects, and displays them. I then go to another page, then come back to the first page and for some reason the objects do not get cast – they are null. For example, I get an error : TypeError: Error #1034: Type Coercion failed: cannot convert com.spinnaker.model::ExchangeVO@537d089 to com.spinnaker.model.ExchangeVO Where it looks like the right type of object. Not sure what the @537d089 is that is appended to the first object. Is that the serialized id? THanks for you help. Hi Don, You said you are navigating from one page to another, can you please explain this. Are the objects in scope when you are moving to another screen? As you said the objects are becoming null, please make sure the objects scope is not lost. You need not worry about the @53** 🙂 Hope this helps. Hi Sujit, Need your help with a problem I am facing while binding Java Class with Flex Class. I have DesktopSearch.as model object and a similar DesktopSearch.java. 
When I try to send my Flex data, which is a DesktopSearch object, I get the following error:

onGetSearchFailure [FaultEvent fault=[RPC Fault faultString="Cannot invoke method 'getSearchResults'." faultCode="Server.ResourceUnavailable" faultDetail="The expected argument types are (com.citi.sps.domain.DesktopSearch) but the supplied types were (flex.messaging.io.amf.ASObject) and converted to (null)."] messageId="FAE91AC4-780D-D913-7FEE-3853FA82A4F7" type="fault" bubbles=false cancelable=true eventPhase=2]

Please let me know what might be the cause of this problem.

Thanks & Regards,
Deepa

Hi Deepa,

This might be because the DesktopSearch.as object is not converted to the DesktopSearch.java type when invoking the Java method, which in turn might be because the mapping of the class is not proper. Can you please make sure the [RemoteClass] tag is there in your DesktopSearch.as and has the Java class name? If this is already done and you are still facing the problem, please share the code of DesktopSearch.as and DesktopSearch.java so that we can find the problem. Either host the files at some URL and share the links, or email the source files to sujitreddy.g@gmail.com. Hope this helps.

Sujit – I am not sure what you mean by scope. When I make the service call, I get a list of objects. Each object also has another object as an attribute. For example, the BackTest object has a Stock object as an attribute. So on the call, I get a list of BackTest objects. I then get the Stock object from the BackTest object to display some information. The first time I execute the call it displays fine. Subsequent calls result in the Stock object being null on the BackTest object. Again – thanks for your input.
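A recurring theme in this thread is that AMF deserializers such as BlazeDS create the target Java class reflectively, via its public no-argument constructor, and then populate its public fields or bean properties; that is why a missing no-argument constructor produces errors like "Cannot create class of type …". A minimal plain-Java sketch of that instantiation step (the Person class and values here are illustrative, not the deserializer's actual code):

```java
import java.lang.reflect.Constructor;

public class ReflectiveCreateDemo {

    // A BlazeDS-friendly shape: public no-arg constructor plus bean accessors.
    public static class Person {
        private String name;

        public Person() {}                 // what the deserializer calls

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        // Roughly what a deserializer does with the class name taken from the
        // [RemoteClass] alias: load the class, invoke its no-arg constructor,
        // then set properties from the wire data.
        Class<?> cls = Class.forName("ReflectiveCreateDemo$Person");
        Constructor<?> ctor = cls.getDeclaredConstructor();
        Person p = (Person) ctor.newInstance();
        p.setName("Shawn");
        System.out.println(p.getName()); // prints "Shawn"
    }
}
```

If the no-arg constructor is removed, getDeclaredConstructor() throws NoSuchMethodException, which is the reflective analogue of the "Cannot create class of type" failures reported in the comments above.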
I am using LiveCycle with Flex 2.0.1.

Here are the contents of my remoting-config.xml file: pmmcFlex.ServerDetailRO request

Here is the ActionScript code snippet I am using for the remote object. At this point, I am just using an alert box to give me info on the size of the returned array collection BUT I GET AN ERROR (the fault string is also shown on the alert box):

    private function serverGroupsClickHandler(event:ServerGroupsClickEvent):void {
        _serverGroupNameVal = String(event.group);
        _groupCategory = String(event.groupCategory);
        var serverDetailsRO:RemoteObject = new RemoteObject();
        serverDetailsRO.destination = "serverDetail";
        serverDetailsRO.getServerDetails.addEventListener("result", serverDetailsResultHandler);
        serverDetailsRO.addEventListener("fault", serverDetailsFaultHandler);
        serverDetailsRO.getServerDetails(_serverGroupNameVal, _groupCategory);
    }

    private function serverDetailsFaultHandler(event:FaultEvent):void {
        //Deal with event.fault.faultString, etc.
        var errors:String = "The following error occurred:\n";
        errors += "\n Fault code is:\n" + event.fault.faultCode;
        errors += "\n Fault detail is:\n" + event.fault.faultDetail;
        errors += "\n Fault string is:\n" + event.fault.faultString;
        Alert.show(errors, 'Error');
    }

    private function serverDetailsResultHandler(event:ResultEvent):void {
        // Process server details list
        var serverDetailsList:Object = event.result;
        _serverGroupDetailArrayCol = serverDetailsList as ArrayCollection;
        Alert.show("The size of server details list was " + _serverGroupDetailArrayCol.length);
    }

In the above code snippet, the variables beginning with an underscore are declared private. Some are of type String and some of type ArrayCollection.

Here is the code snippet of my Java remote object class. The method returns an ArrayList.

    public class ServerDetailRO {
        public ServerDetailRO() {}

        public List getServerDetails(String name, String category) {
            System.out.println("Message from class ServerDetailRO - method getServerDetailsRO was called");
            PmmcXmlGenerator pmmcXmlGenerator = new PmmcXmlGenerator();
            String data = pmmcXmlGenerator.getEcnHandlerXml(name);
            ServerDetailDTO serverDetailDTO = null;
            PmmcClientSessionsDTO pmmcClientSessionsDTO = null;
            List serverDetailList = new ArrayList();
            List serverDetailClientSessionsList = new ArrayList();
            Document doc = null;
            DocumentBuilder builder = null;
            XPathFactory xPathFactory = null;
            XPath xpath = null;
            String commandNameAndOrParam = "";
            . . .

Here is my ActionScript DTO:

    package {
        [Bindable]
        [RemoteClass(alias="pmmcFlex.ServerDetailDTO")]
        public class ServerDetailDTO {
            public var ecnHandler:String;
            public var routingExchange:String;
            public var exchangeStatus:String;
            public var ipAddress:String;
            public var port:String;
            public var senderID:String;
            public var targetID:String;
            public var comments:String;
            public var cloudName:String;
            public var cloudLocation:String;
            public var cmd:String;
        }
    }

AND here is my corresponding Java DTO:

    package pmmcFlex;

    public class ServerDetailDTO implements java.io.Serializable {
        private static final long serialVersionUID = 5672447577075475118L;
        private String ecnHandler;
        private String routingExchange;
        private String exchangeStatus;
        private String ipAddress;
        private String port;
        private String senderID;
        private String targetID;
        private String comments;
        private String cloudName;
        private String cloudLocation;
        private String cmd;

        public String getEcnHandler() { return ecnHandler; }
        public void setEcnHandler(String ecnHandler) { this.ecnHandler = ecnHandler; }
        public String getRoutingExchange() { return routingExchange; }
        public void setRoutingExchange(String routingExchange) { this.routingExchange = routingExchange; }
        public String getExchangeStatus() { return exchangeStatus; }
        public void setExchangeStatus(String exchangeStatus) { this.exchangeStatus = exchangeStatus; }
        public String getIpAddress() { return ipAddress; }
        public void setIpAddress(String ipAddress) { this.ipAddress = ipAddress; }
        public String getPort() { return port; }
        public void setPort(String port) { this.port = port; }
        public String getSenderID() { return senderID; }
        public void setSenderID(String senderID) { this.senderID = senderID; }
        public String getTargetID() { return targetID; }
        public void setTargetID(String targetID) { this.targetID = targetID; }
        public String getComments() { return comments; }
        public void setComments(String comments) { this.comments = comments; }
        public String getCloudName() { return cloudName; }
        public void setCloudName(String cloudName) { this.cloudName = cloudName; }
        public String getCloudLocation() { return cloudLocation; }
        public void setCloudLocation(String cloudLocation) { this.cloudLocation = cloudLocation; }
        public String getCmd() { return cmd; }
        public void setCmd(String cmd) { this.cmd = cmd; }
    }

I really need to resolve this problem. Thanks in advance. -Manish

Contents of remoting-config.xml file: pmmcFlex.ServerDetailRO request

Hi Manish, Looks like the configuration XML content got cut. Can you please email the files? Also the details of the error you are getting in the web server log files 🙂 my email id: sujitreddy.g@gmail.com 🙂

Hi Don, I am sorry, I did not understand the problem properly. Can you please email me the files which I can use to reproduce this? My email id: sujitreddy.g@gmail.com 🙂

Hi Sujit, Just stumbled on this post while searching for a solution. Some of the answers in the comment section helped me get through it. Thanks a ton.

Hi SM, Good to know that you got the solution 🙂 Web 2.0 rocks 🙂

Hi Sujit, Great article! I have a doubt: is it possible to invoke a Java Servlet through a Remote Object from Flex? If so, can you please provide me any useful links. Thank you for your time in advance.
Hi Gijo, Try creating channels dynamically and set the end point URL to your servlet. Set this channel to your RemoteObject. You can also try changing the end point URL in the services-config.xml, but this might affect all other destinations/services using that channel. Please find details on how to create channels dynamically at the URL below. Hope this helps. 🙂 Hi Sujith, It was great to see a thorough support you are offering to Flex developers. Hats off to you. There is a post similar to my problem but I was not able to find the solution for it. Kindly request you to share the same. It is related to Post no: 38 written by Meena. The deserialization of a java object that contains an arraylist member inside it is not happening on the actionscript side. It is coming as null. Let me know if I need to paste my code in fact it is similar to Meena’s. Thanks in advance K. Arun Hi Arun, Please try to send the files to my email id sujitreddy.g@gmail.com. I will try to solve the problem. This should be a minor problem as ArrayList de-serialization is very common and its working fine in my projects 🙂 hi sujit, I have an array of objects in an ActionScript class to map to array of objects in the the corresponding java class in the server side. I am using flex3 with BlazeDs and cairngorm framework. The mapping of array values from java class to actionscript class works smoothly. But when i am sending the populated action script class object to server side, in the corresponding java object in server side ,for some reason the array is set to null. But whe i am using the array list in the java side and ArrayCollection in the actionscript side, it works fine Any clue? Thanks in advance. Hi Sujith, Thanks for your support in resolving my problem and enlightening me about the Marshall-Plan of Flex3. It helped me a whole lot. 
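A pattern runs through many of the failures in this thread ("converted to (null)", "Unable to create a new instance of type …"): the Java side of the mapping must be a plain serializable bean that BlazeDS can instantiate and populate. As a hedged illustration (the class and property names here are hypothetical, not from the article), such a DTO needs a public no-argument constructor and a getter/setter pair for every public ActionScript property:

```java
import java.io.Serializable;

// Hypothetical DTO showing the shape BlazeDS expects on the server side:
// Serializable, a public no-arg constructor, and bean-style accessors
// matching each public property of the ActionScript counterpart.
// (Package-private here only to keep the snippet in one file; it would
// normally be public in its own file.)
class PersonVO implements Serializable {
    private static final long serialVersionUID = 1L;

    private int id;
    private String name;

    public PersonVO() {} // required: BlazeDS instantiates the type via this

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

The ActionScript counterpart would then carry a [RemoteClass(alias="...PersonVO")] tag, so Flash Player tags outgoing instances with the fully qualified Java class name and BlazeDS knows which bean to build.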
Hi sujit, can you please send me a sample application of flex with java and hibernate, I have an application, which i need to configure, i tried a lot but i am not configuring properly, when i am trying to compile the xx.mxml file it in browser it showing xx.html file not available. Thanks in advance Hi Dayakar, What exactly are u trying to do? How are u compiling the MXML file? Hi Sujit… I am working on mapping ActionScript with Java and when my mxml compiles it gives me the following error. Reference Error #1056. My declared objects in AS is also declared public. Any idea where could these problems be resolved ? Sincerely Tanmoy Hi Sujit.. I am having a reference error #1056 when i am trying to map ActionScript objects to Java. Could you let me know where could i be getting wrong. Hi Tanmoy, Are you getting this error when you compile your application? Yes. My application basically tries to work on XML generated from java which i keep as a XML List collection. I planned to use remote objects to access them. I used HTTP service and it worked fine. I however want to know where could i be wrong with Remote Object. Do you think sending the code would be of any help ? Sincerely Tanmoy Also, I am actually trying to use trace to access my xml data but am getting this reference error. Sincerely Tanmoy Hi Tanmoy, What exactly is your Java method returning? Can you access a simple String returned by Java method on server using RemoteObject on the client? Sharing code sample to replicate this issue will definitely help in solving the problem. You can email the code to sujitreddy.g@gmail.com Hi, Great Post. I have a Problem. I when I send a object as a Parameter from FLEX to JAVA, I get the object in JAVA with all the properties in null or 0. Is this a commmon mistake?? Thanks!!! -Charles Hi Sujit, I am having a problem, I am returning a TO from my server side which holds a list and that customizedTo holds a string,customizedObject1 and list. 
While making the service call I am getting the string and the list, but I am not getting the customizedObject1 which is inside the customizedTo. Provided I have replicated all the Java classes on the Flex side, also with the metatag RemoteClass alias, and in the places of List I have added arrayElementType also. But I have not received the particular object inside the list. In the back end I have serialized all the required TOs. I am damn confused why that particular object alone is not coming? Please help me out with this.

Hi Sujit, I am new to Flex and your blog was a big help for me.. I have some basic queries. I want to create a role-based login system. Once the user is logged in successfully, different layouts are shown to him depending on different roles.. I know it can be done using the concept of states. There are two ways - either I create the states in my MXML files and change them according to the role, or I create them using ActionScript at runtime… which is the better way to do it? Regards Moiaz Jiwani

Hi Charles, Are your properties public?

Hi Veeru, Please send me files to reproduce this. I will try to find out what's going wrong.

Hi Sujit, Thank you for the informative posts. What I have here is Flex-BlazeDS-Spring-J2EE. Getting data (List) from the server side and displaying it (ArrayCollection) in a DataGrid is smooth. The problem is when I try to save this data back. I have a remote method which accepts a List, and I am passing an ArrayCollection from the client side. I am getting a ClassCastException like:

    [RPC Fault faultString="java.lang.ClassCastException : flex.messaging.io.amf.ASObject cannot be cast to com.MyDTO"
        faultCode="Server.Processing" faultDetail="null"]
        at mx.rpc::AbstractInvoker/
        at mx.rpc::Responder/fault()
        at mx.rpc::AsyncRequest/fault()
        at NetConnectionMessageResponder/statusHandler()
        at mx.messaging::MessageResponder/status()

Have you experienced this while you are working? How do I solve this problem? Thanks….
Hi Das, Looks like you have class named MyDTO and the class casting is failing. Please make sure you are mapping the object you are sending to MyDTO class. Hope this helps. Hi, I am using Blaze DS for communication between Flex and Java. I have an Array in ActionScript, which contains objects of some class (mapped to Java class). I have confirmed the mapping and it looks fine. When sending the data back to Java side, I am receiving a ArrayList, but that ArrayList contains objects of type ASObject and not of the mapped Java Class. I am not able to find out the reason behind this behaviour. If I try to send only the mapped Java class (instead of Array), I get the mapped class perfectly on the server side. This is highly urgent as I am stuck since last 2 days!! Please reply ASAP. Regards, Karan Hi Karan, I replied to your email. Hope that solved your problem. Hi Sujit, Thanks for your reply. Unfortunately it didn’t solve my problem. But anyhow, I moved ahead and did an explicit type casting on the java side from AS Object to the desired server side object. Still, thanks for the help! Karan I’m having issues mapping a class I want to use in a tree. 
as class:

    package com.mycompany.FlexA5Test.utils {
        import mx.collections.ArrayCollection;
        import mx.collections.ICollectionView;
        import mx.controls.treeClasses.ITreeDataDescriptor;

        [RemoteClass(alias="com.mycompany.FlexA5Test.RollupNode")]
        public class RollupNode implements ITreeDataDescriptor {
            public var name:String;
            public var nodes:ArrayCollection = new ArrayCollection();

            public function addChildAt(parent:Object, newChild:Object, index:int, model:Object = null):Boolean {
                return false;
            }
            public function getChildren(node:Object, model:Object = null):ICollectionView {
                return nodes;
            }
            public function hasChildren(node:Object, model:Object = null):Boolean {
                return nodes == null || nodes.length == 0;
            }
            public function getData(node:Object, model:Object = null):Object {
                return name;
            }
            public function isBranch(node:Object, model:Object = null):Boolean {
                return nodes == null || nodes.length == 0;
            }
            public function removeChildAt(parent:Object, child:Object, index:int, model:Object = null):Boolean {
                return false;
            }
        }
    }

java:

    package com.mycompany.flexa5test.util;

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;
    import com.mycompany.beans.Rollup;

    public class RollupNode {
        String name;
        List nodes = new ArrayList();

        public RollupNode(String name, Set nodes) {
            super();
            this.name = name;
            if (nodes != null) {
                for (Rollup node : nodes) {
                    this.nodes.add(new RollupNode(node.getNode().getName(), node.getRollups()));
                }
            }
        }
    }

function for handling event:

    private function categoryHandler(event:ResultEvent):void {
        var value:RollupNode = RollupNode(event.result);
        tree.dataProvider = value;
        //tree.dataProvider = new RollupNode("TEST", new ArrayCollection());
    }

and the error: TypeError: Error #1034: Type Coercion failed: cannot convert mx.utils::ObjectProxy@15825f71 to com.mycompany.FlexA5Test.utils.RollupNode.

Hi Sujit, I am looking for a solution for getting images/binary objects thru BlazeDS, and the images are on a server which require basic authentication.
I understand HTTPService does not support binary data, so, I assume this rules out using the BlazeDS Http Proxy service (I’ve tried, but, it always returns a Java exception). I could use a RemoteObject call (the images/objects can be retrieved thru a database call or URL), but, how should the mapping be, as a byte array from Java, I assume, but how to load on the Flex side? Also, thanks for the very informative postings! David Hi Robert, You AS class has this [RemoteClass(alias=”com.mycompany.FlexA5Test.RollupNode”)] and the Java class is in a different package com.mycompany.flexa5test.util Try correcting this. Hope this helps. Hi David, byte Array byte[] in Java is converted to flash.utils.ByteArray on AS. Hope this helps. Hi Sujit, I just went through the new enhancements for FileReferance class in flash player 10. With load() function we can load the ByteArray to Flex application. I want to use this to develop a File Upload component. I want to save/get images to/from server(Java) using remote object. I am able to send the file ByteArray to server. However, when i try to get byte[] from java to flex (using remote object) i get null. i am getting exception below: TypeError: Error #1034: Type Coercion failed: cannot convert to flash.utils.ByteArray I am not sure if i can achieve this, should i use upload, download? or i can use load/save and data communication through remoteObject? Need your help on this. Regards, Sachin Patil. Hi Sachin, Please try returning java.lang.Byte[] Hope this helps. Hi Sujith, I am new to flex and trying some example using yours. When I am trying to display object variables, I am getting null. But I am able to display if a string is returned from remote Java class. I am having a problem only while displaying an object variables. Can you please let me know where I am going wrong. 
In the following function:

    private function nodeResultHandler(evt:ResultEvent):void {
        var node1:Node = Node(evt.result);
        Alert.show(evt.message.body.toString());
        Alert.show(node1.nodeId.toString());
    }

For the first Alert statement I am getting Object Node as my response. But for the second Alert, I am getting 0 instead of 11. Can you please help. Aruna.

Hi Sujit, I got it resolved. Silly mistake. I had my Java class variables as private instead of public.

Hi Sujit, I am getting a strange error. I am using Flex SDK 3.2 and BlazeDS 3. My two remote methods are not working. It goes into the fault handler and shows the below error message:

    faultCode = "Server.ResourceUnavailable"
    faultDetail = "The expected argument types are (com.common.transferobjects.SearchDataTO)
        but the supplied types were (flex.messaging.io.amf.ASObject) and converted to (null)."
    faultString = "Cannot invoke method 'searchData'."

The rest of the remote methods are working fine. When I compiled the source with Flex SDK 3.1, all remote methods were fine, but with SDK 3.2 or SDK 3.3 the two methods are not working. Please help. Thanks, Gagan Regards, Gagan

Hi Gagan, Looks like the mapping is not done properly, please make sure you have everything in place. Hope this helps.

Hello, For all you people who are having collections of ASObjects on the server side: I had the same issue. In my case it seemed to be caused by the fact that when I sent it from server to client, I did not actively use the items in the collection. This meant (I think) that the objects in there were never properly transformed to their Flex-side counterpart. When being sent back to the server, this caused them to become (or remain?) ASObjects.
When I iterated through the collection on the Flex side, casting the items to their Flex-side types, all worked fine later on when going back to the server:

    for (var i:Number = 0; i < objecta.objectbcollection.length; i++) {
        var objectb:Objectb = Objectb(objecta.objectbcollection.getItemAt(i));
    }

Interested to see if this works for you.

Hi Sujit, I am using RemoteObject calls in Flex. When I return a custom object (which contains two list properties) to Flex, I am getting null as the response on the Flex side. What could be done to resolve the above problem? Similar posts I have seen above, but I didn't see any answers for the same. Can you reply on the same? Thanks in advance.

Hi Sujit, I saw your blog very recently and am very much impressed with your posts. I saw your code for communication between Flex and Java with RemoteObject and BlazeDS. I tried exactly your code by copy-paste, but am unable to get the output. Currently I'm using Flex Builder 3.0 with WTP. Can you just help me out in this regard? My mail id is udayshankar.tummala@gmail.com

Hello, I've a similar problem mapping a Java class in Flex. I'd like to map a Java inner class in a Flex class as follows: And this is the Flex class: But I got this error message:

    2009-07-03 15:33:26,531 INFO [STDOUT] [Flex] 07/03/2009 15:33:26.531 [ERROR] [Endpoint.AMF]
    Unable to create a new instance of type 'myPack.MyClass$MySubClass'.
    flex.messaging.MessageException: Unable to create a new instance of type 'myPack.MyClass$MySubClass'.
    Types cannot be instantiated without a public, no arguments constructor.
        at flex.messaging.util.ClassUtil.createDefaultInstance(ClassUtil.java:143)
        at flex.messaging.io.AbstractProxy.createInstance(AbstractProxy.java:86)
        at flex.messaging.io.amf.Amf3Input.readScriptObject(Amf3Input.java:409)
        [etc]

Suggestion? Thanks in advance

I have used BlazeDS + Flex Builder in my project, where I have to collect more than 100000 records from the database.
Data is brought to the client as a list of Java bean objects through remoting, where there is an ActionScript object corresponding to each Java bean as explained in this blog. But the problem is that it takes more than a minute to bring the data from the server to the client. Is there any way to display data in a datagrid as the data is coming from the server? With thanks Manu

Hi, thank you so much for this blog posting. It has been a great help. I have got my Flex remoting up and running, but I am having a problem with collections. I have data that is more or less in a tree-like structure. The class that gets sent from the server to Flex has ArrayLists of other classes. These lists have data in them when the server method returns, but on the Flex side they have length = 0. The classes in the lists are also mapped to ActionScript classes. The List fields themselves are mapped in ActionScript as ArrayCollections. Others have posted similar issues, but I have not seen a comment that explains a resolution. Any help would be greatly appreciated… this blog post has helped me a lot already! Thanks.

I've a problem with nested remote objects also. I want to populate the object in a DataGrid, say:

    class Person {
        Address address;
        String name;
    }

The name can be retrieved by using datafield="name". But when I use datafield="address.street", the output is empty, please kindly help…

Hi Jesse, Please try sharing your Java and AS3 VO classes.

Hi Ivan, You should be writing a labelFunction to handle this case. Please find details at this URL Hope this helps.

Simple solution for the below exception when you use a List in your Java code (for OurDTOFlex on the Flex side): "ClassCastException : flex.messaging.io.amf.ASObject cannot be cast to ourpackage.OurDTO" Just type this anywhere on the Flex side:

    var dummy:OurDTOFlex = new OurDTOFlex();

Strange but it works 🙂

Hi Kalondar, This is because a class definition is not compiled into a SWF unless it is referenced at least once in the code.
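Several comments above hit some form of "flex.messaging.io.amf.ASObject cannot be cast to …". The mechanism behind it: when the Player has no [RemoteClass] mapping for an object (or the class was never compiled into the SWF), BlazeDS delivers an ASObject, which is essentially a HashMap of property names to values. As a hedged sketch only (the DTO and helper names are hypothetical, and a plain Map stands in here for ASObject, which extends HashMap), a server could rebuild the typed object by hand as a stop-gap:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical DTO standing in for whatever typed object the service expects.
class MyDTO {
    public String name;
    public int code;
}

class AmfFallback {
    // Rebuild the typed DTO from the property map that arrives when the
    // client-side [RemoteClass] mapping is missing. ASObject extends
    // HashMap, so in real BlazeDS code the parameter could be the ASObject
    // itself.
    static MyDTO fromAmfMap(Map<String, Object> props) {
        MyDTO dto = new MyDTO();
        dto.name = (String) props.get("name");
        // AMF numbers commonly arrive as Double; normalize defensively.
        Object code = props.get("code");
        if (code instanceof Number) {
            dto.code = ((Number) code).intValue();
        }
        return dto;
    }
}
```

The real fix is still the client-side mapping (and referencing the class so it is compiled into the SWF, as the comment above notes); this helper only papers over a missing mapping.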
Hi, Thanks for the blog. I have a query. I am working on the serialization of AMF to Java objects. I have written a method which takes AMF data in the form of a byte array and, after decoding the AMF data byte array, should return the Java object. I am not able to resolve this problem, as I am getting the below exception while running my class. I gather the solution is that I will need to pass the type of the class, but which type of class should I pass here? This is my class which takes the AMF data as a byte array:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    import flex.messaging.io.MessageDeserializer;
    import flex.messaging.io.SerializationContext;
    import flex.messaging.io.amf.ActionContext;
    import flex.messaging.io.amf.ActionMessage;
    import flex.messaging.io.amf.Amf3Output;
    import flex.messaging.io.amf.AmfMessageDeserializer;
    import flex.messaging.io.amf.AmfTrace;
    import flex.messaging.io.ClassAliasRegistry;

    public class AMFConverter {

        public Object fromAmf(final byte[] decodedDataBytesStr)
                throws IOException, ClassNotFoundException {
            SerializationContext context = getSerializationContext();
            ActionContext blazeDSActionContext = new ActionContext();
            InputStream bIn = new ByteArrayInputStream(decodedDataBytesStr);
            int contentLength = decodedDataBytesStr.length;
            blazeDSActionContext.setDeserializedBytes(contentLength);
            ActionMessage requestMessage = new ActionMessage();
            blazeDSActionContext.setRequestMessage(requestMessage);
            String alias = "TestAMF_RI";
            ClassAliasRegistry.getRegistry().getClassName(alias);
            MessageDeserializer deserializer = new AmfMessageDeserializer();
            deserializer.initialize(context, bIn, new AmfTrace());
            Object y = null;
            try {
                deserializer.readMessage(requestMessage, blazeDSActionContext);
                y = deserializer.readObject();
            } catch (Throwable t) {
                throw new IOException(t);
            }
            return y;
        }

        public byte[] toAmf(final Object source) throws IOException {
            SerializationContext context = getSerializationContext();
            // final StringBuffer buffer = new StringBuffer();
            final ByteArrayOutputStream bout = new ByteArrayOutputStream();
            final Amf3Output amf3Output = new Amf3Output(context); // creating the context instance
            amf3Output.setOutputStream(bout);
            amf3Output.writeObject(source);
            amf3Output.flush();
            amf3Output.close();
            return bout.toByteArray();
        }

        public static SerializationContext getSerializationContext() {
            SerializationContext serializationContext = SerializationContext.getSerializationContext();
            serializationContext.enableSmallMessages = true;
            serializationContext.instantiateTypes = true;
            // use _remoteClass field
            serializationContext.supportRemoteClass = true;
            // false: legacy Flex 1.5 behavior was to return a java.util.Collection for Array
            // true: new Flex 2+ behavior is to return Object[] for AS3 Array
            serializationContext.legacyCollection = false;
            serializationContext.legacyMap = false;
            // false: legacy flash.xml.XMLDocument type
            // true: new E4X XML type
            serializationContext.legacyXMLDocument = false;
            // determines whether the constructed Document is name-space aware
            serializationContext.legacyXMLNamespaces = false;
            serializationContext.legacyThrowable = false;
            serializationContext.legacyBigNumbers = false;
            serializationContext.restoreReferences = false;
            serializationContext.logPropertyErrors = false;
            serializationContext.ignorePropertyErrors = true;
            serializationContext.createASObjectForMissingType = true;
            return serializationContext;
        }
    }

Exception:

    java.io.IOException: flex.messaging.MessageException: Cannot create class of type 'DSK'. Type 'DSK' not found.
        at AMFConverter.fromAmf(AMFConverter.java:52)
        at TestAMF_RI.interpret(TestAMF_RI.java:40)
        at com.recordingtool.DebugWebScript.executeRequest(Unknown Source)
        at com.recordingtool.DebugWebScript.handleRequest(Unknown Source)
        at com.recordingtool.WebScript.actionOnAgendaView(Unknown Source)
        at com.recordingtool.DebugWebScript.execute(Unknown Source)
        at com.recordingtool.DebugScriptRunner.run(Unknown Source)
    Caused by: flex.messaging.MessageException: Cannot create class of type 'DSK'. Type 'DSK' not found.
        at flex.messaging.io.AbstractProxy.getClassFromClassName(AbstractProxy.java:85)
        at flex.messaging.io.AbstractProxy.createInstanceFromClassName(AbstractProxy.java:125)
        at flex.messaging.io.AbstractProxy.createInstance(AbstractProxy.java:148)
        at flex.messaging.io.amf.Amf3Input.readScriptObject(Amf3Input.java:437)
        at AMFConverter.fromAmf(AMFConverter.java:49)
        … 6 more

Please reply on the above query. I have tried so many things but it is not working.

Hi Sujit, I like your blog posts. I have a requirement for autogeneration of ActionScript classes (value objects) from Java classes (DAOs). I tried using the GraniteDS plugin for Eclipse, but it is generating double the classes we require. Do we have any tool or plugin like that? Please let me know.

Hi Sonu, To solve your problem (flex.messaging.MessageException: Cannot create class of type ‘DSK’.
Type ‘DSK’ not found.), you need to register the DSK alias to AcknowledgeMessageExt:

    ClassAliasRegistry.getRegistry().registerAlias("DSK", "flex.messaging.messages.AcknowledgeMessageExt");
    ClassAliasRegistry.getRegistry().registerAlias("DSC", "flex.messaging.messages.CommandMessageExt");
    ClassAliasRegistry.getRegistry().registerAlias("DSA", "flex.messaging.messages.AsyncMessageExt");

Check this default implementation:

Hi, I have the following classes.

Flex:

    package com.mycompany {
        import flash.utils.Dictionary;

        [Bindable]
        [RemoteClass(alias="com.mycompany.bean.MessageBundle")]
        public class MessageBundleVO {
            public var messages:Object;

            public function getMessage(key:String):String {
                return messages.key as String;
            }
        }
    }

Java:

    package com.mycompany.bean;

    import java.io.Serializable;
    import java.util.Map;

    public class MessageBundle implements Serializable {
        private static final long serialVersionUID = 1L;
        private Map messages;

        public Map getMessageBundle() { return messages; }
        public void setMessageBundle(Map messageBundle) { this.messages = messageBundle; }
        public String toString() { return messages.toString(); }
    }

Everything is properly linked and the instance of MessageBundleVO is not null, but its attribute messages is coming back null. Can you please tell me what the Flex equivalent of Map is if we use the RemoteClass tag as given in the above code? This is coming back null on the Flex side: MessageBundleVO.messages. Adobe suggests Array (sparse) → java.util.Map. Does anybody know how to solve this issue? Regards Jatin

Hi Sujit, First of all, thank you for this great resource. I was trying to test if I can access a nested object in your Person example. For this I made the following changes:

1. Added a variable in Person.java: public Person son;
2. The same in Person.as: public var son:Person;
3. Changed the getPerson() method in RemoteServiceHandler.java:

    public Person getPerson() {
        Person person = new Person();
        person.id = 1;
        person.dateOfBirth = new Date();
        person.name = "Sujit";
        person.company = "Adobe";
        Person son = new Person();
        son.id = 2;
        son.name = "Amarjeet";
        person.son = son;
        return person;
    }

4. Changed displayPersonDetails in RPC.mxml:

    private function displayPersonDetails(event:ResultEvent):void {
        var person:Person = Person(event.result);
        var son:Person = person.son;
        Alert.show(person.name + ", " + person.dateOfBirth.toDateString() + ", " + person.id);
        Alert.show("Son: " + son.name + ", " + person.id);
    }

With these changes, I still see only the first alert. The second one is silently "ignored". Regards, Anatoliy

Hi Anatoliy, This will definitely work. Please try moving the window, you should be able to see the other one right behind it. Hope this helps.

HI.. in my application I have a form to fill in person details, and I want to store the person details to a database on the server side. How can I typecast person.as into person.java in the server-side Java class?

I am using these two files; it throws me a typecast error at the client side. What's wrong with this?

    //EmployeeInfoObj.as
    package {
        [RemoteClass(alias="remoteObj.Employee")]
        /*[RemoteClass(alias="")]*/
        public class EmployeeInfoObj {
            public function EmployeeInfoObj() {}
            public var firstName:String;
            public var lastName:String;
            public var city:String;
            public var empCode:String;
        }
    }

    //Employee.java
    package remoteObj;

    public class Employee {
        public String firstName;
        public String lastName;
        public String city;
        public String empCode;

        public Employee() {}

        public Employee(String firstName, String lastName, String city, String empCode) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.city = city;
            this.empCode = empCode;
        }

        public String getFirstName() { return this.firstName; }
        public void setFirstName(String firstName) { this.firstName = firstName; }
        public String getLastName() { return this.lastName; }
        public void setLastName(String lastName) { this.lastName = lastName; }
        public String getCity() { return this.city; }
        public void setCity(String city) { this.city = city; }
        public String getEmpCode() { return this.empCode; }
        public void setEmpCode(String empCode) { this.empCode = empCode; }
    }

Hi Nirav, As explained in this article, adding the [RemoteClass(alias="com.adobe.objects.Person")] tag to Person.as will do. Once you add this metadata tag, Flash Player will take care of converting Person.as instances to Person.java instances and the other way round too.

Hi Nirav, Can you please make sure a reference to EmployeeInfoObj.as is compiled into your application. You can check this by intentionally adding a compiler error into EmployeeInfoObj.as and compiling your application. If you see a compiler error, then the class is getting compiled. Hope this helps.

Hi Sujit, I am Jeyabalan. First of all, many thanks to you. I am familiar with Flex & PHP but new to Flex with Java. I need a simple database program (add, delete, update) using Flex, Java, MySQL Connector, Spring or Hibernate. I humbly request you to please send me this program file to my email id (giribala14@gmail.com). Please. Thank you

Actually I am a beginner.. It was very useful to me..
https://sujitreddyg.wordpress.com/2008/01/16/mapping-action-script-objects-to-java-objects/
Motivation

PWM is used for things like controlling servos, motors and LEDs. Output from a micro-controller is easy, and the hardware usually handles it. Remote control receivers also output it, as they are used to control these things as well. RC transmitters often output the closely related PPM (aka CPPM). It's not unusual to want to read those values with an Arduino microcontroller, but this is not as easy, as common - meaning ATmega328 and similar - hardware doesn't do it directly. So let's look at some options to do that. Code can be found in the blog repository, in the Arduino/PWM_input folder.

A word of warning here. All the methods I'm discussing are fine for RC usage. They are not suitable for full range PWM signals! RC signals - even at their extremes - are always between 0.4 and 2.5 milliseconds of a roughly 20 millisecond frame. Full range signals can go from 0 milliseconds - no pulse - to the full frame length - all pulse. None of these methods deals with those two cases.

The options

If you look on the web, you'll find things like this, listing some of the options. I've yet to see all four I know of in the same place, in part because the fastest of them isn't supported by an Arduino library. So I'll discuss them all here, providing code examples for the harder three.

pulseIn

pulseIn is the easy option. It works fine if you don't mind having your CPU tied up to do the input, and can do everything else between pulses or don't mind missing a pulse. I'm not going to spend much more time on this.

Pin change interrupts

This is the most general solution. Just set up an interrupt on pin changes, and then when it changes record the current time on a rising edge, and subtract that value from the time on the falling edge to get the PWM pulse length.
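That edge bookkeeping is easy to model off-hardware. A quick Python sketch, where simulated microsecond timestamps stand in for micros() and real pin reads (an illustration only, not Arduino code):

```python
# Model of the interrupt handler's edge-timing logic: remember the time of
# the rising edge, and compute the pulse width on the falling edge.
class PulseTimer:
    def __init__(self):
        self.prev_time = 0
        self.pwm_value = 0

    def change(self, level, now_us):
        """Called on every edge: level is the new pin state, now_us is 'micros()'."""
        if level:                                   # rising edge
            self.prev_time = now_us
        else:                                       # falling edge
            self.pwm_value = now_us - self.prev_time

timer = PulseTimer()
# Simulate a 1500 µs servo pulse inside a 20 ms frame.
timer.change(1, 100_000)  # rising edge at t = 100,000 µs
timer.change(0, 101_500)  # falling edge 1500 µs later
print(timer.pwm_value)    # → 1500
```

The interrupt handlers in the sketches that follow do exactly this, with digitalRead() supplying the level and micros() supplying the timestamp.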
So, here's my code:

#include <EnableInterrupt.h>

#define MY_PIN 5 // we could choose any pin

uint16_t pwm_value = 0;

void change() {
  static unsigned long prev_time = 0;
  if (digitalRead(MY_PIN))
    prev_time = micros();
  else
    pwm_value = micros() - prev_time;
}

void setup() {
  pinMode(MY_PIN, INPUT_PULLUP);
  enableInterrupt(MY_PIN, &change, CHANGE);
  Serial.begin(115200);
}

void loop() {
  uint16_t pwmin;
  noInterrupts();
  pwmin = pwm_value;
  interrupts();
  Serial.println(pwmin);
  delay(500);
}

If you compare this to the version I referenced earlier, you'll notice I used the enableInterrupt library instead of the pinChangeInt library. The latter has recently been deprecated in favor of the former. I also used an if in an ISR that handles both cases instead of one ISR per case and changing the interrupt each time. The two versions handle the interrupts in the same amount of time, which suggests attachInterrupt is no faster than digitalRead. This is an artifact of the Arduino Hardware Abstraction Layer (HAL), which has to translate from an Arduino pin number to an AVR port and bit number to read its value. If you accessed the hardware directly, things would be different. Given the amount of time lost just by using the Arduino HAL, I decided to go with the more maintainable code.

External interrupts

External interrupts are essentially identical to pin change interrupts. You arrange to get an interrupt when a pin changes, save the time on a rising edge and calculate the interval on a falling edge. The difference is in how the interrupts are handled in the hardware, and that's all hidden by the HAL. Common Arduino boards only have two external interrupts, each of which can occur on a single pin. Pin change interrupts have three interrupts, each shared by 7 or 8 pins, which the HAL sorts out for you.
In any case, here's the code:

#define MY_PIN 3 // Must be pin 2 or 3

// Work around bug in Arduino 1.0.6
#define NOT_AN_INTERRUPT (-1)

uint16_t pwm_value = 0;

void change() {
  static unsigned long prev_time = 0;
  if (digitalRead(MY_PIN))
    prev_time = micros();
  else
    pwm_value = micros() - prev_time;
}

void setup() {
  pinMode(MY_PIN, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(MY_PIN), change, CHANGE);
  Serial.begin(115200);
}

void loop() {
  uint16_t pwmin;
  noInterrupts();
  pwmin = pwm_value;
  interrupts();
  Serial.println(pwmin);
  delay(500);
}

Input capture interrupts

There's another thing consuming time in both of these routines: micros()! While it doesn't seem like much, it has to disable interrupts to safely copy the current value into your variable - which is a waste, since they are already disabled by being in the interrupt handler. Turns out there's a way to get rid of that, though.

The input capture interrupt will snapshot the value of a timer when an interrupt happens. Unfortunately, this hardware capability isn't wrapped by the Arduino HAL, so you have to implement things by hand. The upside of that is that it'll be a lot faster than the HAL version, as most of what the HAL calls do winds up just needing a few instructions. The more important downside is that doing this uses the 328P's single 16 bit timer. So you have to use a specific pin and lose the two PWM outputs that use that timer.

Here's the code:

#define MY_PIN 8 // Must be pin 8 on 328P's.

static uint16_t pulse_length; // in ticks

#define ICESB _BV(ICES1)

ISR(TIMER1_CAPT_vect) {
  if (TCCR1B & ICESB)     // On rising edge, start of pulse & frame
    TCNT1 = 0;            // Reset the counter
  else                    // Falling edge
    pulse_length = ICR1;  // Save pulse length in ticks
  TCCR1B ^= ICESB;        // Detect other edge next time
  TIFR1 |= _BV(ICF1);
}

void setup() {
  TCCR1A = 0;                  // Not doing anything here.
  TCCR1B = _BV(CS11) | ICESB;  // Enable with rising edge capture, prescaler 8.
  TIMSK1 = _BV(ICIE1);         // And unmask this interrupt.
  pinMode(MY_PIN, INPUT_PULLUP);
  Serial.begin(115200);
}

void loop() {
  uint16_t pwm_value;
  TIMSK1 &= ~_BV(ICIE1); // Turn off my interrupt to grab the value
  pwm_value = pulse_length;
  TIMSK1 |= _BV(ICIE1);
  Serial.println(clockCyclesToMicroseconds(pwm_value * 8));
  delay(500);
}

Summary

pulseIn works, but only if you can do everything else that needs doing between pulses. In particular, other input devices that need prompt handling cause problems. And this method doesn't work if you want to handle C/PPM inputs, which use a much larger percentage of the frame, even for RC.

Pin change interrupts are the most flexible solution, but require coordination with other things that might be using them, since multiple pins go through the same interrupt vector. I may well use this. On the Uno, all the pins are available, but that isn't true on newer Arduinos. Check the docs.

External interrupts get a dedicated interrupt vector, so will be slightly faster than pin change interrupts. But that also means the number of pins that can use them is limited. This is probably my choice for applications that don't need every µ-second.

Finally, the input capture interrupt provides a very fast alternative, but the pin choices are even more limited, and you lose a couple of PWM output pins since it uses a timer. And the reason it's fast is that you don't have the convenience of the Arduino HAL. That could be done with the other two choices as well. And with pulseIn, for that matter, but that would just mean you're waiting very quickly.
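The clockCyclesToMicroseconds(pwm_value * 8) call in the capture example just scales timer ticks back into microseconds. The arithmetic is easy to sanity-check on its own; here it is in Python, assuming the stock 16 MHz clock of an Uno-class board:

```python
F_CPU = 16_000_000  # Hz; assumed stock Uno/328P clock
PRESCALER = 8       # Timer1 prescaler used in the capture sketch

def ticks_to_us(ticks):
    """Convert Timer1 ticks to microseconds: ticks * prescaler / cycles-per-µs."""
    return ticks * PRESCALER // (F_CPU // 1_000_000)

# With prescaler 8 at 16 MHz, each tick is 0.5 µs, so a 1.5 ms servo
# center pulse shows up as 3000 ticks.
print(ticks_to_us(3000))  # → 1500
print(ticks_to_us(800))   # → 400   (0.4 ms, low end of the RC range)
print(ticks_to_us(5000))  # → 2500  (2.5 ms, high end)
```

Note that the full RC range of 0.4-2.5 ms fits comfortably in the 16-bit counter at this prescale; the counter only wraps after 65536 ticks, about 32.8 ms.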
http://blog.mired.org/2015/10/a-close-look-at-pwm-input.html
Red Hat Bugzilla – Bug 1064895
[ patch ] - enable running of test suite through python-coverage
Last modified: 2014-06-18 00:46:29 EDT

Created attachment 862789 [details]
blivet + coverage patch

Patch tested on master and f20-branch. Please push to Rawhide as well.

I'm working with other test suite issues, so I think I should pick this one up, too.

acked.

Fixed In Version: python-blivet-0.18.26-1

# make coverage
...
======================================================================
ERROR: udev_test (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
ImportError: Failed to import test module: udev_test
Traceback (most recent call last):
  File "/usr/lib64/python2.7/unittest/loader.py", line 252, in _find_tests
    module = self._get_module_from_name(name)
  File "/usr/lib64/python2.7/unittest/loader.py", line 230, in _get_module_from_name
    __import__(name)
  File "/root/blivet-0.18.33/tests/udev_test.py", line 4, in <module>
    import mock
ImportError: No module named mock

# coverage report
Name                               Stmts   Miss  Branch  BrMiss  Cover
----------------------------------------------------------------------
blivet/__init__                     1757   1734     938     936     1%
tests/action_test                    450    448      14      14     1%
tests/formats_test/__init__            0      0       0       0   100%
tests/formats_test/labeling_test      95     92      20      20     3%
tests/formats_test/selinux_test       62     57       2       2     8%
tests/partitioning_test               55     53       2       2     4%
tests/sanity_check_test               21     19       2       2     9%
tests/size_test                       70     68      14      14     2%
tests/tsort_test                      37     35       6       6     5%
tests/udev_test                      101     99      12      12     2%
----------------------------------------------------------------------
TOTAL                               2648   2605    1010    1008     1%

The particular issue of having a make target to execute the test suite under python-coverage has been verified with python-blivet-0.18.33-1.el7. However, python-mock is not available in the RHEL 7 repos. I will open another bug for that.

This request was resolved in Red Hat Enterprise Linux 7.0.
Contact your manager or support representative in case you have further questions about the request.
https://bugzilla.redhat.com/show_bug.cgi?id=1064895
Back to: C#.NET Tutorials For Beginners and Professionals

Abstraction in C# with Real-Time Examples

In this article, I am going to discuss Abstraction in C# with examples. Please read our previous article, where we discussed Encapsulation in C# with examples, before proceeding to this article. Abstraction in C# is one of the fundamental OOP principles and acts as a supporting principle. That means the Abstraction principle in C# makes sure that the other three principles (Encapsulation, Polymorphism, and Inheritance) work together to give the final shape to the project.

What is Abstraction in C#?

The process of defining a class by providing the necessary and essential details of an object to the outside world while hiding the unnecessary details is called abstraction in C#. It means we need to expose what is necessary and compulsory and hide the unnecessary things from the outside world. In C#, we can hide the members of a class by using the private access modifier.

Let us understand Abstraction with a real-time car example. As we know, a car is made of many things, such as the name of the car, the color of the car, gears, brakes, steering, silencer, diesel engine, the battery of the car, the engine of the car, etc. Now you want to drive the car. So, to drive the car, what are the things you should know? The things a car driver should know are as follows:

- Name of the Car
- The color of the Car
- Gear
- Brakes
- Steering

So these are the things that should be exposed to and known by the car driver before driving the car. The things which should be hidden from the car driver are as follows:

- The engine of the car
- Diesel Engine
- Silencer

So these are the things which should be hidden from a car driver.
Now let’s implement what we discussed with a program using C#:

using System;

namespace AbstractionDemo
{
    public class Car
    {
        private string _CarName = "Honda City";
        private string _CarColur = "Black";

        public string CarName
        {
            set { _CarName = value; }
            get { return _CarName; }
        }

        public string CarColur
        {
            set { _CarColur = value; }
            get { return _CarColur; }
        }

        public void Steering()
        {
            Console.WriteLine("Steering of the Car");
        }

        public void Brakes()
        {
            Console.WriteLine("Brakes of the Car");
        }

        public void Gear()
        {
            Console.WriteLine("Gear of the Car");
        }

        private void CarEngine()
        {
            Console.WriteLine("Engine of the Car");
        }

        private void DiesalEngine()
        {
            Console.WriteLine("DiesalEngine of the Car");
        }

        private void Silencer()
        {
            Console.WriteLine("Silencer of the Car");
        }
    }
}

As shown in the above example, you can see that the necessary methods and properties are exposed by using the “public” access modifier, whereas the unnecessary methods and properties are hidden from the outside world by using the “private” access modifier, as shown in the below image.

As you can see in the above image, the methods and properties which we want to expose to the outside world are created using the public access modifier. Now, from outside the class, we can create an object of this Car class and access the above methods and properties, as we will see in a moment. Have a look at the following image.

As shown in the above image, the methods and variables which we don’t want to expose to the outside world are created using the private access modifier. Now, from outside the class, we can create an instance of the Car class, but we cannot access the above methods and variables.
Consuming the Car Class:

Let us create an instance of the Car class within the Main method of the Program class and then try to invoke the public and private members of the Car class.

public class Program
{
    public static void Main()
    {
        //Creating an instance of Car
        Car CarObject = new Car();

        //Accessing the public properties and methods
        string CarName = CarObject.CarName;
        string CarColur = CarObject.CarColur;
        CarObject.Brakes();
        CarObject.Gear();
        CarObject.Steering();

        //Trying to access the private variables and methods:
        //Compiler Error: 'Car._CarName' is inaccessible due to its protection level
        //CarObject._CarName;
        //Compiler Error: 'Car.CarEngine' is inaccessible due to its protection level
        //CarObject.CarEngine();
    }
}

As you can see, we can access the public members of the Car class using the Car instance. But when we try to access the private members of the Car class using the same Car instance, we get a compiler error. Hence, this proves that we have exposed the necessary methods and properties to the outside world while hiding the unnecessary members of the class by using Abstraction in C#.

What is the difference between Abstraction and Encapsulation in C#?

Encapsulation is the process of hiding irrelevant data from the user, or you can say Encapsulation is used to protect the data. For example, whenever we buy a mobile, we never see how the internal circuit board works. We are also not interested in knowing how the digital signal converts into an analog signal and vice versa. From a mobile user’s point of view, these are irrelevant pieces of information, and this is the reason why they are encapsulated inside a cabinet. In C# programming, we will do the same thing: we will create a cabinet and keep inside it all the irrelevant information that should not be exposed to the user of the class.

Coming to abstraction in C#, it is just the opposite of Encapsulation. What it means is, it is a mechanism that shows only the relevant information to the user. Consider the same mobile example.
Whenever we buy a mobile phone, we can see and use many different types of functionality, such as a camera, calling, recording, an mp3 player, multimedia, etc. This is nothing but an example of abstraction in C#. The reason is that we are only seeing the relevant information instead of its internal workings.

In the next article, I am going to discuss Inheritance in C# with examples. Here, in this article, I tried to explain Abstraction in C# with Examples. I hope this article will help you with your needs. I would like to have your feedback. Please post your feedback, questions, or comments about this Abstraction in C# with Examples article.

5 thoughts on “Abstraction in C#”

Whatever difference you have explained, it seems there is no difference between abstraction and encapsulation?

Abstraction (generalizing) lets you focus on what the object does instead of how it does it, while Encapsulation means hiding the internal details of how an object works.

Reference: Abstraction is encapsulation that provides a public interface but is independent of any particular implementation. The interface is given in the specifications and can be thought of as a kind of contract that says: if you use the ADT* according to the specified interface, it will perform the operations given in the specifications. Note that the code itself should contain the specification of this interface in terms of the preconditions and postconditions for each function. Also, an ADT* is like a class of objects. There can be many different instances of the type. Each instance shares some properties, such as types of data and operations on the data, but they each represent different data. Thus Abstraction is a special form of encapsulation which provides polymorphism together with encapsulation.

*ADT: Abstract Data Type

Relevant data is Abstraction - what the user wants to see. Irrelevant data is Encapsulation - what the user doesn’t care about.

hello
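For contrast with the tutorial's Car class, here is the same exposed-versus-hidden split sketched in Python (an illustration added here, not from the original article; note that Python hides members by convention and double-underscore name mangling rather than with a compiler error):

```python
class Car:
    def __init__(self):
        self._car_name = "Honda City"  # conventionally private backing field

    @property
    def car_name(self):                # exposed, like the C# CarName property
        return self._car_name

    def brakes(self):                  # exposed behavior
        return "Brakes of the Car"

    def __car_engine(self):            # hidden: mangled to _Car__car_engine
        return "Engine of the Car"

car = Car()
print(car.car_name)   # → Honda City
print(car.brakes())   # → Brakes of the Car
try:
    car.__car_engine()  # fails with AttributeError, analogous to the C# compiler error
except AttributeError as e:
    print("hidden:", e)
```

The difference is that Python raises the error at run time and a determined caller can still reach _Car__car_engine, whereas the C# private modifier is enforced at compile time.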
https://dotnettutorials.net/lesson/abstraction-csharp-realtime-example/