text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
what about method and data constructors?

data public: Foo a = private: Bob | public: Baz

class Foo a where
  private: foo :: a
  public: baz :: a

I really like haskell's current module system. A whole lot. other than the minor tweaks that have been mentioned. A really nice thing is that it is all about the namespace. unlike other languages, like java where your namespace is intrinsically linked to your class hierarchy, or C++ where bringing names into scope implies textual inclusion of arbitrary code, haskell has the wonderful property that the module system is purely about matching identifiers to their meaning. A side effect of this is that you can determine what names are in scope purely by looking at the export/import lists of your modules and there is never a need nor a reason to look anywhere else. it is all nice and self-contained right there at the top of each module. moving to individual annotations would be a large step backwards IMHO. John -- John Meacham - ⑆repetae.net⑆john⑈
http://www.haskell.org/pipermail/haskell-prime/2006-February/000778.html
CC-MAIN-2014-15
refinedweb
173
71.34
so autocomplete and navigation for std haxe classes do not work. how can i fix it?

Strange place to have haxe installed, what OS is that? I think the "haxe toolkit home path" should be the path where the haxe std folder is.

this path i got from `which haxe`. my OS is ArchLinux. how can i find the haxe std lib folder then?

If you installed it, then the path should be /usr/share/haxe/ or maybe /usr/share/haxe/std/, not sure which one is expected.

Just installed IDEA and got the same error. Adding /usr/share/haxe/std to SRC and class path fixes it.

Use the locate command to find the haxe sdk.

David has the right solution, though /usr/share/haxe/std may not be the place that the std library is installed on your computer. Once you have the right path, add it to both the Class Path and Source path tabs on the SDK configuration screen you show above. Also, it must be the only entry in both the class and source paths. You will also need to find the neko executable and fill out that field, because haxelib relies upon it, and the plugin relies upon haxelib.

it works, thanks. now i have an error for these imports (unresolved symbols): import php.Global; import php.SuperGlobal; import php.Syntax; but import php.NativeArray; import php.Lib; is ok.

You need the -D php7 define to use the new php target with haxe 3 (it is the default and only one on haxe 4). There should be some project parameter window where you can add this.

didn't find this parameter in IDEA settings

Maybe adding it to your hxml would be enough, not sure how IDEA gets its project parameters.

i have this file build.hxml: -main Main -php bin -D php7 --php-prefix ModuleName1

Putting the define into the hxml is the right way to do it. Be sure that your module settings (File->Project Structure->Module->Haxe) are set to use the correct hxml file. If IDEA isn't finding the symbols, browsing, or highlighting correctly, you will also have to add php7 to the 'macros' setting in the module settings page (they are project-wide, actually).
found it when i recreated the project and installed the EAP edition of IDEA

You don't have to add the -D in the project macros. When the compiler is called, -D is added already. So you are probably getting (possibly hidden) errors of the type: "-D -D" Invalid command line argument. That will mess up compiler completions and compilation requests. It won't affect the unresolved symbols, though. If imports for NativeArray and Lib are working and Global, SuperGlobal, and Syntax are not, then you have a pre-4.0 Haxe SDK (or std library) installed. You should see Global.hx, SuperGlobal.hx, and Syntax.hx in your directory listing under /usr/share/haxe/std/php. This is what the 3.4.2 directory listing looks like:

06/29/2017 10:25 AM <DIR> .
06/29/2017 10:25 AM <DIR> ..
06/29/2017 10:25 AM 25,005 Boot.hx
06/29/2017 10:25 AM 2,996 BytesData.hx
06/29/2017 10:25 AM <DIR> db
06/29/2017 10:25 AM 1,848 Exception.hx
06/29/2017 10:25 AM 1,421 HException.hx
06/29/2017 10:25 AM 1,394 IteratorAggregate.hx
06/29/2017 10:25 AM 6,462 Lib.hx
06/29/2017 10:25 AM 1,199 NativeArray.hx
06/29/2017 10:25 AM 1,199 NativeString.hx
06/29/2017 10:25 AM 10,719 NativeXml.hx
06/29/2017 10:25 AM <DIR> net
06/29/2017 10:25 AM 5,585 Session.hx
06/29/2017 10:25 AM 14,053 Web.hx
06/29/2017 10:25 AM <DIR> _std
11 File(s) 71,881 bytes
5 Dir(s) 1,024,650,338,304 bytes free

And, this is the 4.0 version:

07/16/2018 02:42 PM <DIR> .
07/16/2018 02:42 PM <DIR> ..
07/16/2018 02:42 PM 330 ArrayAccess.hx
07/16/2018 02:42 PM 24,555 Boot.hx
07/16/2018 02:42 PM 296 Closure.hx
07/16/2018 02:42 PM 11,083 Const.hx
07/16/2018 02:42 PM <DIR> db
07/16/2018 02:42 PM 1,956 Error.hx
07/16/2018 02:42 PM 2,075 ErrorException.hx
07/16/2018 02:42 PM 1,989 Exception.hx
07/16/2018 02:42 PM 38,863 Global.hx
07/16/2018 02:42 PM 1,424 IteratorAggregate.hx
07/16/2018 02:42 PM 5,740 Lib.hx
07/16/2018 02:42 PM 2,294 NativeArray.hx
07/16/2018 02:42 PM 1,575 NativeAssocArray.hx
07/16/2018 02:42 PM 1,719 NativeIndexedArray.hx
07/16/2018 02:42 PM 1,342 NativeString.hx
07/16/2018 02:42 PM 1,742 NativeStructArray.hx
07/16/2018 02:42 PM <DIR> net
07/16/2018 02:42 PM 153 Ref.hx
07/16/2018 02:42 PM <DIR> reflection
07/16/2018 02:42 PM 141 Resource.hx
07/16/2018 02:42 PM 93 RuntimeException.hx
07/16/2018 02:42 PM 224 Scalar.hx
07/16/2018 02:42 PM 5,049 Session.hx
07/16/2018 02:42 PM 456 SessionHandlerInterface.hx
07/16/2018 02:42 PM 122 StdClass.hx
07/16/2018 02:42 PM 2,051 SuperGlobal.hx
07/16/2018 02:42 PM 8,366 Syntax.hx
07/16/2018 02:42 PM 711 Throwable.hx
07/16/2018 02:42 PM 133 Traversable.hx
07/16/2018 02:42 PM 13,332 Web.hx
07/16/2018 02:42 PM 382 _polyfills.php
07/16/2018 02:42 PM <DIR> _std
28 File(s) 128,196 bytes
6 Dir(s) 1,024,650,338,304 bytes free

If the new classes aren't in your …/std/php directory, then IDEA can't find them.
https://community.haxe.org/t/idea-error-haxe-sdk-has-no-valid-root-set-up-or-change-sdk/969
CC-MAIN-2022-21
refinedweb
1,012
78.75
I am trying to perform certain operations on a single image while in a training loop. In the case of batch_size = 1, it could easily be done using torch.squeeze, but I am unable to think of a way to do it for other batch sizes. Below is the minimum code for representation:

def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    for epoch in range(1, n_epochs+1):
        for batch_idx, (data, target) in enumerate(final_train_loader):
            # Here the target shape would be B*H*W*N
            # B: batch size; H, W, N: height, width, no. of channels

I want it in the form H * W * N
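One way to do this, assuming PyTorch is available (this is a sketch, not an answer taken from the thread), is to split the batch along dimension 0 with torch.unbind, or simply index into it; either drops the batch dimension and leaves each image with shape H*W*N:

```python
import torch

# Made-up sizes for the sketch: B=4, H=W=32, N=3 channels.
batch = torch.zeros(4, 32, 32, 3)

# torch.unbind splits a B*H*W*N tensor into B tensors of shape H*W*N.
for image in torch.unbind(batch, dim=0):
    assert tuple(image.shape) == (32, 32, 3)

# Equivalent: plain indexing also drops the batch dimension.
single = batch[0]
assert tuple(single.shape) == (32, 32, 3)
```

Either form works for any batch size, so the torch.squeeze special case for batch_size = 1 is no longer needed.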
https://discuss.pytorch.org/t/how-to-remove-batch-size-in-training-loop-to-perform-certain-operations-on-a-single-image/69020
CC-MAIN-2022-21
refinedweb
109
60.45
#include <buffer.h>

For a type-safe buffer, template over the specific object Type you want to put in it. You can use this container for storing smart pointers (e.g. IceUtil smart pointers). In this case the container will only store the pointers and will not perform a deep copy.

Buffer depth, i.e. the maximum number of objects this buffer can hold.

Typically is called before the buffer is used, or if, for some reason, the configuration information was not available at the time when the constructor was called. Careful: all data currently in the buffer is lost, because purge() is called first. NOTE: could be smarter by truncating the queue only as much as needed.

Non-popping and non-blocking random-access read. Returns the n-th element from the buffer. Indexing starts at 0.

Non-popping and non-blocking read from the front of the buffer. A call to get() on an empty buffer raises a gbxutilacfr::Exception. You can catch it and call getWithTimeout(), which will block until new data arrives.

Same as get() but calls pop() afterwards.

Same as getWithTimeout() but calls pop() afterwards.

If there is an object in the buffer, getWithTimeout() sets the object and returns 0. If the buffer is empty, it blocks until a new object is pushed in and returns the new value. By default, there is an infinite timeout (negative value). Returns 0 if successful. If the timeout is set to a positive value and the wait times out, this function returns -1 and the object argument itself is not touched. In the rare event of spurious wakeup, the return value is 1.

Pops the front element off and discards it (usually after calling get()). If the buffer is empty this command is quietly ignored.
Adds an object to the end of the buffer. If there is no room left in a finite-depth circular buffer, the front element of the buffer is quietly deleted and the new data is added to the end. If there is no room left in a finite-depth queue buffer, the new data is quietly ignored.
http://gearbox.sourceforge.net/classgbxiceutilacfr_1_1Buffer.html
CC-MAIN-2017-30
refinedweb
384
59.5
Hi! The answer should be 'nondeterministic'. Servlets are created and initialized once, and repeated use employs the SAME instance. However, the servlet engine may choose to unload some of the servlets whenever she thinks appropriate (maybe the servlet has not been used for a long time and the memory is running out). Arion.

Manpreet wrote:
> HI all, I have a small servlet code like this:
>
> import java.io.*;
> import javax.servlet.*;
> import javax.servlet.http.*;
>
> public class Test extends HttpServlet {
>     int count=0;
>     public void doGet(HttpServletRequest req, HttpServletResponse res)
>             throws IOException, ServletException {
>         PrintWriter out = res.getWriter();
>         out.write(count++);
>     }
> }
>
> Is it possible that each request to this servlet gets the initial value of count, i.e. 0?
> thanks in advance, Manpreet Singh.
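Arion's point can be illustrated without the servlet API: the container calls the one servlet instance from many threads, so shared state like count needs explicit synchronization even while the instance is alive (and can still reset to 0 if the engine unloads the servlet). A plain-Java sketch, with class and field names made up for illustration:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simulates many concurrent "requests" hitting one shared counter,
// as they would hit a single servlet instance.
public class CounterDemo {
    static final AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[8];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) count.incrementAndGet();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        // With AtomicInteger every increment is counted exactly once.
        System.out.println(count.get()); // 8000
    }
}
```

With a plain `int count` and `count++` (as in the original servlet), increments from concurrent threads can be lost, which is another reason the observed values are not dependable.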
http://mail-archives.apache.org/mod_mbox/tomcat-users/200006.mbox/%3C393F5A5C.6856B99E@talentinfo.com.hk%3E
CC-MAIN-2017-30
refinedweb
125
63.7
16 August 2010 11:02 [Source: ICIS news]

SINGAPORE (ICIS)--Here is Monday's end-of-day snapshot:

CRUDE: Sep WTI $75.84/bbl, up 45 cents; Sep BRENT $75.52/bbl, up 41 cents
Crude futures strengthened on Monday, regaining ground lost in the previous session, ahead of the expiry of the September ICE Brent futures contract later in the day. However, a slowdown in economic growth in

NAPHTHA: $665.00-669.00/tonne CFR
The Asian first-half October contract ended higher on Monday at $665.00-669.00/tonne CFR Japan, up $6.50-7.50 from early trade as global crude futures bounced higher. The second-half October contract was valued at $664.00-669.00/tonne CFR

BENZENE: $840-850/tonne, up $5-10
Prices remained on an upward track, with a deal for any-October loading reported at $853/tonne FOB

TOLUENE: $755-765/tonne, up $5
Prices firmed in line with higher benzene values, with an offer for second-half September lifting heard at $767
http://www.icis.com/Articles/2010/08/16/9385251/evening-snapshot-asia-markets-summary.html
CC-MAIN-2014-49
refinedweb
172
65.93
Ok, you already know what a kernel is. The first part of writing an operating system is to write a bootloader in 16-bit assembly (real mode). A bootloader is a piece of program that runs before any operating system is running. It is used to boot other operating systems; usually each operating system has a set of bootloaders specific to it. Go to the following link to create your own bootloader in 16-bit assembly.

Bootloaders generally select a specific operating system, start its process, and then the operating system loads itself into memory. If you are writing your own bootloader for loading a kernel, you need to know the overall addressing/interrupts of memory as well as the BIOS. Mostly, each operating system has a specific bootloader for it. There are lots of bootloaders available out there; see the comparison. There are some proprietary ones, such as Windows Boot Manager for Windows operating systems or BootX for Apple's operating systems, but there are also lots of free and open source bootloaders. Among the most famous is GNU GRUB, the GNU Grand Unified Bootloader package from the GNU project for Unix-like systems. We will use GNU GRUB to load our kernel because it supports multiboot of many operating systems.

Xorriso: a package that creates, loads and manipulates ISO 9660 filesystem images (man xorriso).
grub-mkrescue: makes a GRUB rescue image; it internally calls the xorriso functionality to build an iso image.
QEMU: Quick EMUlator, used to boot our kernel in a virtual machine without rebooting the main system.

Alright, the first step in writing a kernel from scratch is to print something on screen. So we have the VGA (Video Graphics Array), a hardware system that controls the display. VGA has a fixed amount of memory and its addressing runs from 0xA0000 to 0xBFFFF.
0xA0000 for EGA/VGA graphics modes (64 KB)
0xB0000 for monochrome text mode (32 KB)
0xB8000 for color text mode and CGA-compatible graphics modes (32 KB)

First you need a multiboot header that instructs GRUB to load the file. The following fields must be defined.

Magic: a fixed hexadecimal number identified by the bootloader as the header (starting point) of the kernel to be loaded.
Flags: if bit 0 in the flags word is set, then all boot modules loaded along with the operating system must be aligned on page (4KB) boundaries.
Checksum: used by the bootloader for validation; its value must be such that, added to the magic number and flags, the total is zero, i.e. -(MAGIC + FLAGS).

We don't need the other information, but see the specification for more details. Ok, let's write GAS assembly code for the above; we don't need some fields, as shown in the above image.

boot.S

# set magic number to 0x1BADB002 to be identified by the bootloader
.set MAGIC, 0x1BADB002

# set flags to 0
.set FLAGS, 0

# set the checksum
.set CHECKSUM, -(MAGIC + FLAGS)

# declare the multiboot section
.section .multiboot

# emit each value defined above as a long
.long MAGIC
.long FLAGS
.long CHECKSUM

# set the stack bottom
stackBottom:

# define the maximum size of the stack as 512 bytes
.skip 512

# set the stack top, which grows from higher to lower addresses
stackTop:

.section .text
.global _start
.type _start, @function

_start:
    # assign current stack pointer location to stackTop
    mov $stackTop, %esp

    # call the kernel main source
    call KERNEL_MAIN

    cli

# put the system in an infinite loop
hltLoop:
    hlt
    jmp hltLoop

.size _start, . - _start

We have defined a stack of 512 bytes, delimited by the stackBottom and stackTop labels. Then in _start we store the current stack pointer and call the main function of the kernel. As you know, every process consists of different sections such as data, bss, rodata and text. You can see the sections by compiling the source code without assembling it, e.g. run gcc -S kernel.c and look at the resulting assembly file.
And these sections require memory to store them; that memory layout is provided by the linker script. Each section is aligned to the size of a block. The linker's job is to link all the object files together to form the final kernel image; the linker script specifies how much space should be allocated to each of the sections, and the information is stored in the final kernel image. If you open the final kernel image (.bin file) in a hex editor, you can see lots of 00 bytes. The linker script consists of an entry point (in our case _start, defined in boot.S) and the sections, each sized with the BLOCK keyword and aligned as specified.

linker.ld

/* entry point of our kernel */
ENTRY(_start)

SECTIONS
{
    /* we need at least 1MB of space */
    . = 1M;

    /* text section */
    .text BLOCK(4K) : ALIGN(4K)
    {
        *(.multiboot)
        *(.text)
    }

    /* read only data section */
    .rodata BLOCK(4K) : ALIGN(4K)
    {
        *(.rodata)
    }

    /* data section */
    .data BLOCK(4K) : ALIGN(4K)
    {
        *(.data)
    }

    /* bss section */
    .bss BLOCK(4K) : ALIGN(4K)
    {
        *(COMMON)
        *(.bss)
    }
}

Now you need a configuration file that instructs GRUB to show a menu entry associated with the image file.

grub.cfg

menuentry "MyOS" {
    multiboot /boot/MyOS.bin
}

Now let's write a simple HelloWorld kernel.

kernel_1: kernel.h

#ifndef _KERNEL_H_
#define _KERNEL_H_

#define VGA_ADDRESS 0xB8000
#define WHITE_COLOR 15

typedef unsigned short UINT16;

UINT16* TERMINAL_BUFFER;

#endif

Here we are using 16-bit entries; on my machine the VGA text buffer starts at 0xB8000 and the 32-bit graphics buffer starts at 0xA0000. An unsigned 16-bit terminal buffer pointer points to the VGA address. It has an 8x16 pixel font size; see the above image.
kernel.c

#include "kernel.h"

static UINT16 VGA_DefaultEntry(unsigned char to_print)
{
    return (UINT16) to_print | (UINT16)WHITE_COLOR << 8;
}

void KERNEL_MAIN()
{
    TERMINAL_BUFFER = (UINT16*) VGA_ADDRESS;
    TERMINAL_BUFFER[0] = VGA_DefaultEntry('H');
    TERMINAL_BUFFER[1] = VGA_DefaultEntry('e');
    TERMINAL_BUFFER[2] = VGA_DefaultEntry('l');
    TERMINAL_BUFFER[3] = VGA_DefaultEntry('l');
    TERMINAL_BUFFER[4] = VGA_DefaultEntry('o');
    TERMINAL_BUFFER[5] = VGA_DefaultEntry(' ');
    TERMINAL_BUFFER[6] = VGA_DefaultEntry('W');
    TERMINAL_BUFFER[7] = VGA_DefaultEntry('o');
    TERMINAL_BUFFER[8] = VGA_DefaultEntry('r');
    TERMINAL_BUFFER[9] = VGA_DefaultEntry('l');
    TERMINAL_BUFFER[10] = VGA_DefaultEntry('d');
}

The value returned by VGA_DefaultEntry() is a UINT16 combining the character to print with the white colour attribute. The value is stored in the buffer to display the characters on the screen. First we point our pointer TERMINAL_BUFFER to the VGA address 0xB8000. Now you have a VGA array; you just need to assign a specific value to each index according to what to print on the screen, as we usually do when assigning values to an array. See the above code, which prints each character of HelloWorld on the screen. Ok, let's compile the source: type sh run.sh on the terminal.
run.sh

# assemble the boot.S file
as boot.S -o boot.o

# compile the kernel.c file
gcc -c kernel.c -o kernel.o -std=gnu99 -ffreestanding -O2 -Wall -Wextra

# link the kernel from kernel.o and boot.o
gcc -T linker.ld -o MyOS.bin -ffreestanding -O2 -nostdlib kernel.o boot.o -lgcc

# check whether MyOS.bin is an x86 multiboot file
grub-file --is-x86-multiboot MyOS.bin

# build the iso file
mkdir -p isodir/boot/grub
cp MyOS.bin isodir/boot/MyOS.bin
cp grub.cfg isodir/boot/grub/grub.cfg
grub-mkrescue -o MyOS.iso isodir

# run it in qemu
qemu-system-x86_64 -cdrom MyOS.iso

The output is shown above. As you can see, it is an overhead to assign each and every value to the VGA buffer individually, so we can write a function which prints a string on the screen (i.e. assigns each character value from a string to the VGA buffer).

kernel_2: kernel.h

#ifndef _KERNEL_H_
#define _KERNEL_H_

#define VGA_ADDRESS 0xB8000
#define WHITE_COLOR 15

typedef unsigned short UINT16;

int DIGIT_ASCII_CODES[10] = {0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39};

unsigned int VGA_INDEX;

#define BUFSIZE 2200

UINT16* TERMINAL_BUFFER;

#endif

DIGIT_ASCII_CODES holds the hexadecimal ASCII values of the characters 0 to 9; we need them when we want to print digits on the screen. VGA_INDEX is our VGA array index; it is increased whenever a value is assigned at that index. BUFSIZE is the limit of our VGA buffer. The following function prints a string on the screen by assigning each character to the VGA buffer:

void printString(char *str)
{
    int index = 0;
    while(str[index]){
        TERMINAL_BUFFER[VGA_INDEX] = VGA_DefaultEntry(str[index]);
        index++;
        VGA_INDEX++;
    }
}

To print a 32-bit integer, first you need to convert it into a string.
int digitCount(int num)
{
    int count = 0;
    if(num == 0)
        return 1;
    while(num > 0){
        count++;
        num = num/10;
    }
    return count;
}

void itoa(int num, char *number)
{
    int digit_count = digitCount(num);
    int index = digit_count - 1;
    char x;
    if(num == 0 && digit_count == 1){
        number[0] = '0';
        number[1] = '\0';
    }else{
        while(num != 0){
            x = num % 10;
            number[index] = x + '0';
            index--;
            num = num / 10;
        }
        number[digit_count] = '\0';
    }
}

void printInt(int num)
{
    char str_num[digitCount(num)+1];
    itoa(num, str_num);
    printString(str_num);
}

To print a new line, you have to skip some bytes in the VGA pointer (TERMINAL_BUFFER) according to the pixel font size. For this we need another variable that stores the current Y index.

static int Y_INDEX = 1;

void printNewLine()
{
    if(Y_INDEX >= 55){
        Y_INDEX = 0;
        Clear_VGA_Buffer(&TERMINAL_BUFFER);
    }
    VGA_INDEX = 80*Y_INDEX;
    Y_INDEX++;
}

And in KERNEL_MAIN(), just call the functions:

void KERNEL_MAIN()
{
    TERMINAL_BUFFER = (UINT16*) VGA_ADDRESS;
    printString("Hello World!");
    printNewLine();
    printInt(1234567890);
    printNewLine();
    printString("GoodBye World!");
}

As you can see, it is an overhead to call a separate function for each kind of value; that's why C provides printf(), whose format specifiers print specific values to the standard output device, together with literals such as \n, \t, \r etc.

kernel_3:

VGA provides 16 colors:

BLACK = 0, BLUE = 1, GREEN = 2, CYAN = 3, RED = 4, MAGENTA = 5, BROWN = 6, LIGHT_GREY = 7, DARK_GREY = 8, LIGHT_BLUE = 9, LIGHT_GREEN = 10, LIGHT_CYAN = 11, LIGHT_RED = 12, LIGHT_MAGENTA = 13, YELLOW = 14, WHITE = 15

Just rename VGA_DefaultEntry() and give it a UINT8 color parameter, replacing WHITE_COLOR with it. For keyboard input, GAS provides the inX instructions, where X can be a byte, word, dword or long variant. The keyboard controller's data port is 0x60; it is read one byte at a time, so the port value is passed as the parameter to the inb instruction.
UINT8 IN_B(UINT16 port)
{
    UINT8 ret;
    asm volatile("inb %1, %0"
                 : "=a"(ret)
                 : "Nd"(port));
    return ret;
}

We can also create a simple linked list data structure as a starting point for a file system. Let's say we have the following record:

typedef struct list_node{
    int data;
    struct list_node *next;
}LIST_NODE;

But we need memory to allocate this block, because no malloc() function exists. Instead, we assign a fixed memory address to a pointer to the structure for storing this data block. You can use any memory address except those used for special purposes. In the above address ranges, 0x00000500 - 0x00007BFF or 0x00007E00 - 0x0009FFFF can be used to store our linked list data. You can access the whole memory (RAM) if you know its limit. So here's a function that returns an allocated LIST_NODE memory block, starting at address 0x00000500:

LIST_NODE *getNewListNode(int data)
{
    LIST_NODE *newnode = (LIST_NODE*)(0x00000500 + MEM_SIZE);
    newnode->data = data;
    newnode->next = NULL;
    MEM_SIZE += sizeof(LIST_NODE);
    return newnode;
}

(Note the cast is applied to the whole sum: casting first and then adding MEM_SIZE would scale the offset by sizeof(LIST_NODE) a second time.)

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

-Wl,--build-id=none

Quote: GCC: GNU Compiler Collection, a cross compiler. Use a newer version of GCC; I am using GCC 7.2.0. This is the most important thing: if you use an old version you may face the "multiboot header not found" error.

[root@archserver kernel_2]# sh run.sh
/usr/bin/ld: boot.o: relocation R_X86_64_32 against `.multiboot' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: error: ld returned 1 exit status
grub-file: error: cannot open `MyOS.bin': No such file or directory.
cp: cannot stat 'MyOS.bin': No such file or directory
grub-mkrescue: warning: Your xorriso doesn't support `--grub2-boot-info'. Some features are disabled. Please use xorriso 1.2.9 or later.
grub-mkrescue: error: `mformat` invocation failed.
qemu-system-x86_64: -cdrom MyOS.iso: Could not open 'MyOS.iso': No such file or directory

as --32 boot.s -o boot.o
gcc -m32 -c kernel.c -o kernel.o -std=gnu99 -ffreestanding -O2 -Wall -Wextra
gcc -m32 -T linker.ld -o MyOS.bin -ffreestanding -O2 -nostdlib kernel.o boot.o -lgcc

sudo apt-get install gcc-multilib

mov ah, 0x03 ; load third stage to memory
https://www.codeproject.com/Articles/1225196/Create-Your-Own-Kernel-In-C
CC-MAIN-2018-47
refinedweb
2,100
66.03
Many programmers are unaware that there is a perfectly good PWM driver that will let you do just about anything you want, without having to resort to any clever programming. One way around the problem of getting a fast response from a microcontroller is to move the problem away from the processor. In the case of the Pi's processor there are some built-in devices that can use GPIO lines to implement protocols without the CPU being involved. In this chapter we take a close look at pulse width modulation (PWM), including generating sound and driving LEDs.

When performing their most basic function, i.e. output, the GPIO lines can be set high or low by the processor. How quickly they can be set high or low depends on the speed of the processor. Using a GPIO line in its Pulse Width Modulation (PWM) mode you can generate pulse trains up to 4.8MHz, i.e. pulses just a little more than 0.08µs wide. The reason for the increase in speed, a factor of at least 100, is that the pulse train is generated by dedicated hardware; the processor is only involved when the parameters change. The flip side is that the hardware can only change the pulses it produces each time the processor modifies them. For example, you can't use PWM to produce a single 0.1µs pulse because you can't disable the PWM generator in just 0.1µs. This said, hardware-generated PWM is available on the Pi and there is a good PWM driver that makes it very easy to use.

All Pi models have two PWM channels implemented in hardware, but on models earlier than the Pi 4 these are also used to generate audio. What this means is that if you want to use hardware PWM on a Pi Zero or earlier you have to disable, or at least not use, audio at the same time. You can use both on a Pi 4, but notice that the PWM channels and the audio share the same clock signal and this can still cause problems. The two PWM hardware modules can be configured to drive different GPIO lines. For the Pi the standard configuration is to have PWM0 drive either GPIO18 or GPIO12 and PWM1 drive either GPIO13 or GPIO19. There are two PWM drivers available.
One activates a single PWM channel and one activates both the available channels. The documentation for both is:

Name: pwm
Info: Configures a single PWM channel. N.B.: 1) Pin 18 is the only one available on all platforms, and it is the one used by the I2S audio interface; pins 12 & 13 might be better choices on an A+/B+/Pi2. 2) The onboard analogue audio output uses both PWM channels. 3) So be careful mixing audio and PWM. 4) Currently the clock must have been enabled and configured by other means.
Load: dtoverlay=pwm,<param>=<val>
Params:
  pin    Output pin (default 18) - see table
  func   Pin function (default 2 = Alt5) - see above
  clock  PWM clock frequency (informational)

Name: pwm-2chan
Info: Configures both PWM channels. N.B.: 1) Pin 18 is the only one available on all platforms, and it is the one used by the I2S audio interface; pins 12 and 13 might be better choices on an A+/B+/Pi2. 2) The onboard analogue audio output uses both PWM channels. 3) So be careful mixing audio and PWM. 4) Currently the clock must have been enabled and configured by other means.
Load: dtoverlay=pwm-2chan,<param>=<val>
Params:
  pin    Output pin (default 18) - see table
  pin2   Output pin for other channel (default 19)
  func   Pin function (default 2 = Alt5) - see above
  func2  Function for pin2 (default 2 = Alt5)
  clock  PWM clock frequency (informational)

Note: There is a relatively recent (late 2019) patch to the driver that fixes a problem at high PWM frequencies. Use sudo apt update followed by sudo apt full-upgrade to make sure you have the up-to-date driver.

In simple terms, you can use one or two channels of PWM, and you would be well advised to use GPIO18 for PWM0 and GPIO19 for PWM1 on all modern Pis. Notice that you cannot currently use the driver to set the frequency of the PWM clock; it is automatically enabled at a default frequency. You can find out what the frequency is using:

vcgencmd measure_clock pwm

at the command prompt. It only reports an accurate value if the PWM driver is loaded and enabled.
On a Pi Zero and a Pi 4 it reports 99995000, i.e. approximately 100MHz, and empirical measurements give the effective clock rate, after division by 5, as 20MHz for both the Pi 4 and Pi Zero. The clock rate is important because it determines the resolution of the duty cycle - see later.

If you load either driver by adding:

dtoverlay=pwm

or

dtoverlay=pwm-2chan

to boot/config.txt, you will discover that on reboot you have a new pwmchip0 folder in the /sys/class/pwm folder. The Pi driver is configured to see the PWM hardware as a single PWM "chip". To work with either PWM channel you have to export it. In this context, exporting means that you claim sole use of the channel. To do this you have to write a "0" or a "1" to the export file in the pwmchip0 folder. To unexport, you do the same to the unexport file in the pwmchip0 folder. After you have exported a channel you will see new folders, pwm0 and pwm1, in the pwmchip0 folder. Of course, you only see the folders for the channels you have exported, and you can only export pwm0 if you have used the single-channel pwm driver. Within the pwmX folder you will find the following important files:

period      period in nanoseconds
duty_cycle  duty cycle in nanoseconds
enable      write 1 to enable, 0 to disable

So all you have to do is:

export the channel
write to period
write to duty_cycle
write "1" to enable

Notice that as this is hardware PWM, once you have set and enabled the channel, the PWM generation continues after the program ends.
A simple program to use PWM0 is:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>

FILE *doCommand(char *cmd)
{
    FILE *fp = popen(cmd, "r");
    if (fp == NULL) {
        printf("Failed to run command %s \n\r", cmd);
        exit(1);
    }
    return fp;
}

void checkPWM()
{
    FILE *fd = doCommand("sudo dtparam -l");
    char output[1024];
    int txfound = 0;
    char indicator[] = "pwm-2chan";
    char command[] = "sudo dtoverlay pwm-2chan";
    while (fgets(output, sizeof(output), fd) != NULL) {
        printf("%s\n\r", output);
        fflush(stdout);
        if (strstr(output, indicator) != NULL) {
            txfound = 1;
        }
    }
    if (txfound == 0) {
        fd = doCommand(command);
        sleep(2);
    }
    pclose(fd);
}

int main(int argc, char **argv)
{
    checkPWM();
    int fd = open("/sys/class/pwm/pwmchip0/export", O_WRONLY);
    write(fd, "0", 1);
    close(fd);
    sleep(2);
    fd = open("/sys/class/pwm/pwmchip0/pwm0/period", O_WRONLY);
    write(fd, "10000000", 8);
    close(fd);
    fd = open("/sys/class/pwm/pwmchip0/pwm0/duty_cycle", O_WRONLY);
    write(fd, "8000000", 7);
    close(fd);
    fd = open("/sys/class/pwm/pwmchip0/pwm0/enable", O_WRONLY);
    write(fd, "1", 1);
    close(fd);
}

The checkPWM function dynamically loads the pwm-2chan driver - you can change it to pwm if you only need one channel. The program exports the channel and then sets a 10ms period (100Hz) with an 80% duty cycle. A delay of two seconds is included after the export to allow the system to create the folders and files. A better solution is to test for an error on the first open and keep looping until it works. This program doesn't need root permissions to run, only to dynamically install the driver. You can use the other channel in the same way.
https://i-programmer.info/programming/hardware/14565-pi-iot-in-c-using-linux-drivers-the-pwm-driver.html
CC-MAIN-2022-40
refinedweb
1,283
61.46
#include <nanomsg/nn.h>
#include <nanomsg/bus.h>

Broadcasts messages from any node to all other nodes in the topology. The socket should never receive messages that it sent itself.

This pattern scales only to the local level (within a single machine or within a single LAN). Trying to scale it further can result in overloading individual nodes with messages.

For a bus topology to function correctly, the user is responsible for ensuring that a path from each node to every other node exists within the topology.

A raw (AF_SP_RAW) BUS socket never sends the message to the peer it was received from.

NN_BUS

There are no options defined at the moment.

SEE ALSO: nn_pubsub(7) nn_reqrep(7) nn_pipeline(7) nn_survey(7) nn_pair(7) nanomsg(7)
https://man.linuxreviews.org/man7/nn_bus.7.html
View demo Download source

Today we’d like to share some tiny hover effect ideas with you.

Stack Motion Effect

Markup & Styles

The markup for the items is as follows:

<div class="grid grid--effect-vega">
	<a href="#" class="grid__item grid__item--c1">
		<div class="stack">
			<div class="stack__deco"></div>
			<div class="stack__deco"></div>
			<div class="stack__deco"></div>
			<div class="stack__deco"></div>
			<div class="stack__figure">
				<img class="stack__img" src="img/1.png" alt="Image"/>
			</div>
		</div>
		<div class="grid__item-caption">
			<h3 class="grid__item-title">anaerobic</h3>
			<div class="column column--left">
				<span class="column__text">Period</span>
				<span class="column__text">Subjects</span>
				<span class="column__text">Result</span>
			</div>
			<div class="column column--right">
				<span class="column__text">2045</span>
				<span class="column__text">133456</span>
				<span class="column__text">Positive</span>
			</div>
		</div>
	</a>
	<a href="#" class="grid__item grid__item--c2"><!-- ... --></a>
	<a href="#" class="grid__item grid__item--c2"><!-- ... --></a>
</div><!-- /grid -->

We use a specific class for the grid to create individual effects. The four stack__deco divisions are the decorative elements that we animate along with the stack__figure and the image. The grid itself is the container that has perspective. The grid caption has a title and two columns that optionally get animated, too. For the grid we use a flexbox layout (see demo.css).
For the decorative cards of the stack and the figure with its image will have the following styles: .stack { position: relative; width: 100%; height: 200px; transform-style: preserve-3d; } .stack__deco { position: absolute; top: 0; left: 0; width: 100%; height: 100%; background-color: currentColor; transform-origin: 50% 100%; } .stack__deco:first-child { opacity: 0.2; } .stack__deco:nth-child(2) { opacity: 0.4; } .stack__deco:nth-child(3) { opacity: 0.6; } .stack__deco:nth-child(4) { opacity: 0.8; } .stack__figure { position: relative; display: flex; justify-content: center; align-items: center; overflow: hidden; width: 100%; height: 100%; cursor: pointer; transform-origin: 50% 100%; } .stack__img { position: relative; display: block; flex: none; } For some effects we want some special styles: /* Individual effects */ /* Vega */ .grid--effect-vega .column { opacity: 1; } /* deneb */ .grid--effect-deneb { perspective: none; } .grid--effect-deneb .stack__figure, .grid--effect-deneb .stack__deco { transform-origin: 50% 50%; } .grid--effect-deneb .column { opacity: 1; } /* ... */ And here is an example for an effect animation (hovering in and out): HamalFx.prototype._in = function() { var self = this; this.DOM.stackItems.map(function(e, i) { e.style.opacity = i !== self.totalItems - 1 ? 0.2*i+0.2 : 1 }); anime({ targets: this.DOM.stackItems, duration: 1000, easing: 'easeOutExpo', translateY: function(target, index) { return -1*index*5; }, rotate: function(target, index, cnt) { if( index === cnt - 1 ) { return 0; } else { return index%2 ? 
(cnt-index)*1 : -1*(cnt-index)*1; } }, scale: function(target, index, cnt) { if( index === cnt - 1 ) { return 1; } else { return 1.05; } }, delay: function(target, index, cnt) { return (cnt-index-1)*30 } }); anime({ targets: this.DOM.img, duration: 1000, easing: 'easeOutExpo', scale: 0.7 }); anime({ targets: [this.DOM.columns.left, this.DOM.columns.right], duration: 1000, easing: 'easeOutExpo', translateX: function(target, index) { return index === 0 ? -30 : 30; } }); }; HamalFx.prototype._out = function() { var self = this; anime({ targets: this.DOM.stackItems, duration: 500, easing: 'easeOutExpo', translateY: 0, rotate: 0, scale: 1, opacity: function(target, index, cnt) { return index !== cnt - 1 ? 0 : 1 } }); anime({ targets: this.DOM.img, duration: 1000, easing: 'easeOutElastic', scale: 1 }); anime({ targets: [this.DOM.columns.left, this.DOM.columns.right], duration: 500, easing: 'easeOutExpo', translateX: 0 }); }; We hope you like these hover effects and find them inspirational! References and Credits - Idea based on the effect seen on the projects page of Merci-Michel - Anime.js by Julian Garnier - Images made with a design by Freepik.com - Typeface Overpass Mono by Delve Withrington View demo Download source Really nice! I want to use some in my projects. :D Wow! I don’t have any uses for something like that just now but I will definitely keep those in mind, awesome. I wonder if similar animation can be tried with multiple box-shadows. Worth trying… I just can say WOW It is great….I kind of made it work. But in my case after the hover animation, the images disappears. If I hover on the empty space I can see the animation again (firefox and chrome). Oh well. Too advanced for me. Those are fantastic hover effect! I will use one of this in my personal landing page! Congrats! very interesting, I love to use it. Amazing hover effects for slow motion. I can use all of these effects in my project. Thank you. Can you help me to implement it? 
i try to use it but im just seeing the items without hover effects wow! i have no words ! unfortunately i cant fix it on my projectt , it was good if you make a short clip for install that or teach a lit better or more … Happy new spirit \m/
https://tympanus.net/codrops/2017/03/15/stack-motion-hover-effects/
Below is the error received. Could the SPI be damaged?

root@loraradio2:/home/pi# python rfm9x_spi.py
Traceback (most recent call last):
File "rfm9x_spi.py", line 10, in <module>
import adafruit_rfm9x
File "/usr/local/lib/python3.7/dist-packages/adafruit_rfm9x.py", line 142, in <module>
class RFM9x:
File "/usr/local/lib/python3.7/dist-packages/adafruit_rfm9x.py", line 265, in RFM9x
crc: bool = True
NameError: name 'SPI' is not defined

below is the script used:

# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries
# SPDX-License-Identifier: MIT

# Simple demo of sending and recieving data with the RFM95 LoRa radio.
# Author: Tony DiCola
import board
import busio
import digitalio

import adafruit_rfm9x

# Define radio parameters.
RADIO_FREQ_MHZ = 915.0  # Frequency of the radio in Mhz. Must match your
# module! Can be a value like 915.0, 433.0, etc.

# Define pins connected to the chip, use these if wiring up the breakout according to the guide:
CS = digitalio.DigitalInOut(board.D5)
RESET = digitalio.DigitalInOut(board.D6)
# Or uncomment and instead use these if using a Feather M0 RFM9x board and the appropriate
# CircuitPython build:
# CS = digitalio.DigitalInOut(board.RFM9X_CS)
# RESET = digitalio.DigitalInOut(board.RFM9X_RST)

# Define the onboard LED
LED = digitalio.DigitalInOut(board.D13)
LED.direction = digitalio.Direction.OUTPUT

# Initialize SPI bus.
spi = busio.SPI(board.SCK, MOSI=board.MOSI, MISO=board.MISO)

# Initialze RFM radio
rfm9x = adafruit_rfm9x.RFM9x(spi, CS, RESET, RADIO_FREQ_MHZ)

# Note that the radio is configured in LoRa mode so you can't control sync
# word, encryption, frequency deviation, or other settings!

# You can however adjust the transmit power (in dB). The default is 13 dB but
# high power radios like the RFM95 can go up to 23 dB:
rfm9x.tx_power = 23

# Send a packet. Note you can only send a packet up to 252 bytes in length.
# This is a limitation of the radio packet size, so if you need to send larger
# amounts of data you will need to break it into smaller send calls. Each send
# call will wait for the previous one to finish before continuing.
rfm9x.send(bytes("Hello world!\r\n", "utf-8"))
print("Sent Hello World message!")
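The traceback is not a hardware symptom: it is raised while Python is creating the RFM9x class, before any SPI traffic happens, which usually points to mismatched library versions rather than a damaged radio. Annotations such as `spi: SPI` in a class body are evaluated at class-definition time, so an unimported name raises NameError immediately. A minimal sketch of the mechanism (the class and names here are hypothetical, not the Adafruit code):

```python
# Why "NameError: name 'SPI' is not defined" can appear while merely
# importing a module: annotations in a class body are evaluated as the
# class is created, so any name they mention must already be defined.
try:
    class Radio:
        spi: SPI          # 'SPI' was never imported in this module
        crc: bool = True  # execution never even reaches this line
except NameError as err:
    message = str(err)

print(message)  # name 'SPI' is not defined
```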
https://forums.adafruit.com/viewtopic.php?f=57&p=925657&sid=c231c026147ed47bce192ca2a425fe32
Performance of QQuickWidget

Hi, I've created a small application using a C++ class that derives from QQuickWidget, and this launches a qml file. The qml file has a Repeater and creates 20 VU meters. The VU meter is contained inside its own qml file and has a timer which changes the height of a clip rectangle using a random number... so the meter is being constantly repainted. However, if I have 20 meters visible, but only one of these is getting its height changed, the CPU usage is the same as when all 20 are getting their height changed. Why is this? I would have thought that the CPU usage would be much lower because the other 19 meters are not getting their height changed, and are therefore not being repainted? Any advice / tips is appreciated! Thanks.

Hi @zepfan, I don't have the exact reason as I don't have enough experience with using QQuickWidget, but the documentation mentions a few points related to performance hits that may occur. Check out the 3 notes under that topic. Can you try using QQuickView and createWindowContainer() as according to this blog too:

Well, unless there is a very good reason for using it and you know what you are doing. If the stacking and other limitations do not apply and the application is targeting desktop platforms only and the platform in question is able to use the Qt Quick Scenegraph’s threaded render loop, then having a QQuickView embedded via createWindowContainer() will always lead to better performance when compared to QQuickWidget
Is this VU meter a C++ class derived from QQuickPaintedItem?

No, I'm not sure that they're getting repainted, but if I have just one active meter visible and nothing else, the CPU % goes down. But if I have 20 visible and only one active then the CPU is much higher. Since the other 19 shouldn't be getting repainted, why would the CPU still be higher? The 19 meters that are inactive just have a black opaque background. The VU meters are built just using qml. Only the QQuickView derived class is C++ and it has basically nothing in it. This is just a small test app I've built with no frills. Any ideas? Thanks.

@zepfan I guess there could be some other problem then. Maybe some connections or properties that are getting updated. Even if it gets repainted I think CPU usage won't go that high, as QML uses the scenegraph which in turn uses OpenGL, so most of the rendering is done by the hardware. Can you post a complete minimal runnable example to test?

Here is the qml file that defines the meter. I use a Repeater to create multiple instances of these and give each one a unique index (meterIndex). In the code you can see I check to see if it is the first meter (meterIndex = 0) and if so then only that one's clip rectangle is active. I've since realised that setting the rest to invisible actually brings down the CPU %. But if I leave them visible then the CPU increases, even though there's no activity. Is it normal to set items to invisible if you don't want anything rendered for them?
import QtQuick 2.0

Rectangle {
    id: myMeter
    width: 20
    height: 100
    property int meterIndex: 0
    color: "black"

    Rectangle {
        id: clipRect
        width: parent.width
        height: 0
        clip: true
        color: "blue"

        Timer {
            id: myTimer
            interval: 20
            repeat: true
            running: true
            triggeredOnStart: true
            onTriggered: {
                if (myMeter.meterIndex == 0) {
                    clipRect.height = myMeter.height * Math.random();
                } else {
                    clipRect.visible = false;
                    myTimer.stop();
                }
            }
        }
    }
}

- p3c0 Moderators

@zepfan I see you are starting a Timer of interval 20 and it seems you are starting it for all the other remaining meters. A Timer of such a low interval, and that too for 20 items, is of course going to be CPU intensive. I would suggest here to not start the Timer for the rest in the first place. You can add a condition like:

Timer {
    running: myMeter.meterIndex == 0
}

So the Timer will start only for the Item with meterIndex = 0

Is it normal to set items to invisible if you don't want anything rendered for them?

Yes, it reduces the cost of drawing again. See over-drawing-and-invisible-elements.

So if I have other elements on screen but they have not changed, does that mean they will be re-rendered every time also? I can't change them to (visible: false) as obviously I need to see them! Is there a flag/setting that you can set that will make sure only items that are 'dirty' will be rendered? Thanks

@zepfan No change, no rendering again.

Is there a flag/setting that you can set that will make sure only items that are 'dirty' will be rendered?

I'm not aware of any such in QML. But as I explained earlier your timer is eating the CPU. Even if an Item is invisible its timer will be active. A 20 ms timeout for 20 items is CPU intensive.

But as I explained earlier your timer is eating the CPU. Even if Item is invisible timer will be active. 20 ms timeout for 20 items is CPU intensive.

True, but if I only have the timer running for one meter there is still quite a difference in CPU % between having the idle meters' clipRect visible or not visible.
If they're not being repainted (since they're not active) then there shouldn't be a difference in CPU %?

@zepfan How much CPU is being utilized on your system? Can you post a complete example to test it out?

You can use the code above for the actual Meter (MyMeter), and here is its parent which uses a Repeater to construct 20 of them. Try having the timer just running in the first meter, compared with having the timer running in all of them. I don't notice much change in CPU when having one of them getting its clipRect.height changed or having all of them get their clipRect.height changed. I should state that the reason I've made this test application is because I'm doing something similar in work (which I obviously can't post) and am getting similar results. My suspicion is that the whole scene is getting re-rendered rather than just the item that needs it.

import QtQuick 2.0

Rectangle {
    visible: true
    width: 900
    height: 700
    color: "green"

    Column {
        id: myColumn
        spacing: 10

        Repeater {
            model: 2

            Row {
                id: myRow
                x: 50
                y: 50
                spacing: 40
                property int rowIndex: index

                Repeater {
                    id: innerRepeater
                    model: 10

                    MyMeter {
                        objectName: "Meter " + (index + (myRow.rowIndex * innerRepeater.model))
                        meterIndex: (index + (myRow.rowIndex * innerRepeater.model))
                        width: 15
                        height: 150
                    }
                }
            }
        }
    }
}

@zepfan Just tested. I do notice a change in CPU %. For a single Meter running it is around 3-3.5% while with all meters running it is around 7.5-8.3%. How much on your system?

I don't have the exact figure on me at the moment, but the figure itself isn't relevant. For my test app the figure wasn't too high, but my concern is that the CPU % doesn't change much whether I have one meter active or 20 meters active. Once they are all visible it's pretty much the same. And obviously this affects the real project I'm working on even more. I've actually also just posted in the QtQuick forum just to see if anyone has any thoughts on rendering performance/improvements.
Thanks

- p3c0 Moderators

@zepfan As told earlier, the rendering is all done by OpenGL, which in turn is handled by the GPU, so that won't matter. The Timer is what is affecting the CPU. The QtQuick renderer is capable of handling thousands of items at a time. Please check it here. Also here are some benchmark examples. The Extreme table example loads 100000 items as and when required. Anyway, here are some more links that you may find useful:

Try profiling your example.
https://forum.qt.io/topic/55762/performance-of-qquickwidget/8
Subject: Re: [boost] Formal review request: static size matrix/vector linear algebra library (Boost) LA From: Agustín K-ballo Bergé (kaballo86_at_[hidden]) Date: 2010-02-04 19:49:45 El 02/02/2010 06:35 p.m., Emil Dotchevski escribió: > On Tue, Feb 2, 2010 at 12:42 PM, Thomas Klimpel > Nothing has changed because nobody pointed out anything that needed to > be changed in the preliminary review. There are a few concerns, like > the use of operator| ("pipe") to implement type-safe casting but I am > not aware of a better solution. I have been happily using this library for a while, and I have a couple of suggestions/observations. · I've often found the need to forward the result of some Boost.LA operation. To do so, I'm using types from the detail namespace to specify the return type. Until auto and decltype, and for C++03 support, a result_of namespace ala Boost.Fusion would be nice. · Support for subscript operator may be introduced via vref/mref. So vref( any_conforming_vertex_type )[ I ] could be used for both direct indexing and swizzling (as mentioned at your blog). · Support for "complex swizzling expressions" would be nice. I've just made that name up, but I'm referring to things like 'v1[Y,-X]' or 'v1|(Y,-X)' (again, as mentioned at your blog). · More algorithms to operate on vectors and matrices are needed, otherwise people would be reinventing the same generic algorithms. Ideally, I would like to see everything that is available at GLSL, including component-wise boolean operations. Finally, I'm not keen on the library name, but I don't have a better one to suggest. Let me say I have in the past unsuccessfully tried to implement a library like yours myself. I find the abstraction incredibly useful, and I am glad that you managed to write it. K-ballo.- Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2010/02/161575.php
2013 - 4 - 8 / 12:21 pm / tutorials

FlashDevelop with HaXe NME and HaXePunk

I spent most of yesterday diving into a whole new world of magic and fun: Haxe. I was hesitant at first because I was dreading the whole ordeal of setting up a new development environment, but it turned out to be way more straightforward than most setups I've experienced. I ended up getting a quick demo of thousands of entities rotating, scaling, and alpha blending at a steady 50-60fps with the Windows build of HaxePunk, and that has me pretty excited! Follow along as I take you on a journey of code and game development! (For reference, I'm using Windows 7.)

Before You Start

I'm assuming that you are a flash developer that's using FlashDevelop and you want to migrate to Haxe NME. More specifically, this will be about getting HaxePunk up and running as quickly and as easily as possible. Oh, and I'm also assuming you're using Windows.

Download NME Installer

Go on over to the NME downloads page and download the installer for your OS. For Windows, you want to download this installer. The NME installer will take care of a lot of stuff for you, and actually we're already almost done after installing just this.

Command Line Setup

After the installer is finished running you'll have to do one more thing to actually finish the set up completely. Open up a command prompt window (if you go to the start menu and just type 'cmd' you'll find it) and type 'nme setup windows' into it. You'll be presented with a prompt to install Visual Studio C++ Express. Go ahead and do that, as this is necessary for FlashDevelop to build your project to a native windows application.

FlashDevelop Template

Now that NME is installed and the Windows stuff is set up, you'll want to install some FlashDevelop project templates. This enables you to start a new NME project in FlashDevelop when you launch it.

New NME Project

This is where things get a little weird since we're not following the typical HaxePunk install route.
We're going to start a new NME project, and then dump HaxePunk into the source folder of that project to use it. There are other ways to do it, but after struggling with some stuff yesterday, this is the best way I could get working for now. Start a new project in FlashDevelop and choose an NME Project. Click OK to create the project. It generates some default files and folders for you, and we're going to mess with those a little bit now.

HaxePunk Setup

Almost done! Now go download the latest version of HaxePunk from the Git page. Just click on the "ZIP" button to download a zip file of the entire repository. Inside that zip file that you just downloaded there is a "src" folder. Inside that folder is a "com" folder. Take the "com" folder and add it to the "src" folder of the NME project we just created. You also need to take the contents of the "assets" folder in the HaxePunk zip file and place them in the "assets" folder of your new project.

Hooking up Assets

There's one more important thing that you'll need to change, and that's the "application.nmml" file. For those of you that have worked in AIR, this is somewhat similar to the application.xml file for AIR projects. In the assets section (line 19) you'll need to add some lines for the HaxePunk assets we just added. I used this file as a guide to figure out what to modify in here. My assets section of my application.nmml ends up looking like this.

A New Main.hx

The final thing that needs to be changed is the Main.hx file in the project. Since HaxePunk works differently than the native Flash style code, we'll need a new Main.hx. Here's the complete code for Main.hx that I based off of this file.
package;

import com.haxepunk.Engine;
import com.haxepunk.HXP;

class Main extends Engine
{
    public function new()
    {
        super(640, 480, 60, true);
    }

    override public function init():Dynamic
    {
        super.init();
        HXP.console.enable();
        trace("HaxePunk is running!");
    }

    public static function main()
    {
        new Main();
    }
}

Now with any luck you should be able to build this project into anything you want! Flash, Windows, HTML5, and more. I haven't done much with Android or iOS building yet, so I'm not really sure what's involved for those... but this should be enough to get you started with HaxePunk with NME.

Pro Tips

There's an option in FlashDevelop that makes autocomplete kinda clunky with Haxe for some reason. To fix this issue, go to Tools -> Project Settings. Go to the "HaxeContext" section, and set "Disable Completion on Demand" to False.

If you have issues compiling to a Windows build, try setting up Visual Studio again with the command line "nme setup windows"

Make sure you're using the nme library when importing classes. For example, you want to use "nme.geom.Point" for the Point class and NOT "browser.geom.Point" or any other. FlashDevelop will autocomplete for you alphabetically, so just double check that you're getting the nme versions of things when importing.

Downloads Recap

Download the latest HaxePunk zip.
Download the NME 3.5.5 Windows installer.
Download version 4.3.0 of FlashDevelop.
Download the FlashDevelop project templates for HaXe NME.

- 8 2:00 PM Feyfausto
and a tutorial how can we conver a FlashPunk project for Haxepunk? :3

2013 - 4 - 11 12:47 PM ratking
I guess the struggles you experienced are related to my own? See here:

2013 - 4 - 12 1:39 AM Kyle
Actually strangely enough I didn't encounter that specific problem. I actually just wanted direct access to the HaxePunk source inside my project rather than having it live in the Haxelib folder in a faraway place.
I'm actually putting Haxepunk down for now because it has a lot of shortcomings that I didn't notice when I first started experimenting with it. It's a bummer because I'm really excited about the prospect of better performing games on Win/Mac/Linux, but it's just not polished enough for a big game project. 2013 - 4 - 29 3:35 PM Makai Are these shortcomings related to HaxePunk's base in FlashPunk or are they issues with HaxePunk itself? Care to elaborate a bit more on this? 2013 - 5 - 3 3:45 PM Kyle The short comings are with HaxePunk itself. I love FlashPunk and if I could use it forever, I would. HaxePunk is missing some of the core functionality of drawing (right now at least) that I use a lot in FlashPunk. Stuff like drawing lines, circles, rectangles, manipulating bitmapdata and using it for image rendering, and certain image features like sprite.flipped are broken or not yet implemented for native targets. I believe it's also not ported with the latest version of FlashPunk in mind (which is maintained by Draknek and not Chevy Ray anymore) If there was a complete copy of FlashPunk as it stands right now with way better performance and compilation to native targets then I would be all over it. HaxePunk is close, but it's not there yet for my needs unfortunately. 2013 - 9 - 30 9:05 AM salaniojr Hi Kyle, I've been researching the avaiable options in Haxe and, as you pointed out, it seems that they lack some maturity. The one ahead is HaxeFlixel in community activity, commits and docs. It can be a good start. Out of curiosity... are you still using Haxe? If you do, what are you going for? Creating your own framework? Thanks 2013 - 9 - 30 3:35 PM Kyle Hey! I am not working in Haxe anymore. I called it quits on Haxe after I realized that a lot of basic functionality that I was used to was not present yet in HaxePunk. I did look into HaxeFlixel a little bit, but after some more experimentation I landed on C# and SFML which I'm currently using to build a framework. 
2014 - 2 - 9 3:31 PM Andre Are you able to provide the source code for the test app in the post? The one with a lot of rectangles 2014 - 2 - 10 12:02 AM Kyle Unfortunately it's been a long time since I've used Haxe, and I don't have the source for that test application anymore. 2014 - 2 - 10 8:16 AM Andre Can you remember how you drew the squares and rotated them? Was it an HaxePunk API or AS3 directly? 2014 - 2 - 10 12:00 PM Kyle I'm pretty sure I just made a square graphic and used HaxePunk to create thousands of them. When they were created they would get a random color and alpha and scale, and during their update they would rotate.
http://kpulv.com/107/FlashDevelop_with_HaXe_NME_and_HaXePunk/
FitPara-INI

The Levenberg-Marquardt iterative algorithm requires initial values to start the fitting procedure. Good parameter initialization results in fast and reliable model/data convergence. When defining a fitting function in the Function Organizer, you can assign the initial values in the Parameter Settings box, or enter an Origin C routine in the Parameter Initialization box with which the initial values can be estimated. The NLFit in Origin provides automatic parameter initialization code for all built-in functions. For user-defined functions, you must add your own parameter initialization code. If no parameter initialization code is provided, all parameter values will be missing values when NLFit starts. In this case, you must enter "guesstimated" parameter values to start the iterative fitting process. Note that initial parameter values estimated by parameter initialization routines will be used even if different initial values are specified in Parameter Settings.

Click the button beside the Parameter Settings box to bring up the Parameter Settings dialog. Then you can enter proper initial values for the parameters in the Value column of the Parameters tab.

To initialize parameters by an initial formula (column statistics values, label rows, etc.), you can check the Initial Formula column check box and select the desired initial formula or metadata from the fly-out menu.

The text box in Parameter Initialization contains the parameter initialization code. For built-in functions, these routines can effectively estimate parameter values prior to fitting by generating dataset-specific parameter estimates. When defining a new Origin C fitting function, you can edit the initialization code in the Code Builder by clicking the button. Although there are many methods to estimate the parameter initial values, in general we will transform the function and deduce the values based on the raw data.
For example, we can define a fitting model, named MyFunc, as:

y = a * x^b

(This is the same function as the built-in Allometric1 function in Origin.) Taking the logarithm of both sides transforms the equation into:

ln(y) = ln(a) + b * ln(x)

After the transformation, we have a linear relationship between ln(y) and ln(x), and the intercept and slope are ln(a) and b respectively. Then we just need to do a simple linear fit to get the estimated parameter values. The initial code can be:

#include <origin.h>

void _nlsfParamMyFunc(
    // Fit Parameter(s):
    double& a, double& b,
    // Independent Dataset(s):
    vector& x_data,
    // Dependent Dataset(s):
    vector& y_data,
    // Curve(s):
    Curve x_y_curve,
    // Auxilary error code:
    int& nErr)
{
    // Beginning of editable part
    sort( x_y_curve );                 // Sort the curve
    Dataset dx;
    x_y_curve.AttachX(dx);             // Attach a Dataset object to the X data
    dx = ln(dx);                       // Set x = ln(x)
    x_y_curve = ln( x_y_curve );       // Set y = ln(y)
    vector coeff(2);
    fitpoly(x_data, y_data, 1, coeff); // One order (simple linear) polynomial fit
    a = exp( coeff[0] );               // Estimate parameter a
    b = coeff[1];                      // Estimate parameter b
    // End of editable part
}

In the Code Builder, you just need to edit the function body. The parameters, independent variables and dependent variables are declared in the function definition. In addition, a few Origin objects are also declared: a dataset object is declared for each of the independent and dependent variables and a curve object is declared for each xy data pair:
Initialization is accomplished by calling built-in functions that take a vector or a curve object as an argument. Once the initialization function is defined, you should verify that the syntax is correct. To do this, click the Compile button at the top of the workspace. This compiles the function code using the Origin C compiler. Any errors generated in the compile process are reported in the Code Builder Output window at the bottom of the workspace. Once the initialization code has been defined and compiled, you can return to the Function Organizer interface by clicking on the Return to Dialog button at the top of the workspace. area Get the area under a curve. Curve_MinMax Get X and Y range of the Curve. Curve_x Get X value of Curve at specified index. Curve_xfromY Get interpolated/extrapolated X value of Curve at specified Y value. Curve_y Get Y value of Curve at specified index. Curve_yfromX For given Curve returns interpolated/extrapolated value of Y at specified value of X. fitpoly Fit a polynomial equation to a curve (or XY vector) and return the coefficients and statistical results. fit_polyline Fit the curve to a polyline, where n is the number of sections, and get the average value of X-coordinates of each section. fitpoly_range Fit a polynomial equation to a range of a curve. find_roots Find the points with specified height. fwhm Get the peak width of a curve at half the maximum Y value. get_exponent This functions is used to estimate y0, R0 and A in y = y0 + A*exp(R0*x). get_interpolated_xz_yz_curves_from_3D_data Interperate 3D data and returns smoothed xz, yz curves. ocmath_xatasymt Get the value of X at a vertical asymptote. ocmath_yatasymt Get the value of Y at a horizontal asymptote. peak_pos This function is used to estimate the peak's XY coordinate, peak's width, area.etc. sort Use a Curve object to sort a Y data set according to an X data set. 
Vectorbase::GetMinMax: Get the min and max values and their indices from the vector.
xatasymt: Get the value of X at a vertical asymptote.
xaty50: Get the interpolated value of X at the average of the minimum and maximum Y values of a curve.
xatymax: Get the value of X at the maximum Y value of a curve.
xatymin: Get the value of X at the minimum Y value of a curve.
yatasymt: Get the value of Y at a horizontal asymptote.
yatxmax: Get the value of Y at the maximum X value of a curve.
yatxmin: Get the value of Y at the minimum X value of a curve.

Sample Function: ExpDec2

Equation:

y = y0 + A1*exp(-x/t1) + A2*exp(-x/t2)

Initialization code:

int sign;
t1 = get_exponent(x_data, y_data, &y0, &A1, &sign);
t1 = t2 = -1 / t1;
A1 = A2 = sign * exp(A1) / 2;

Description: Because most exponential curves are similar, we can use a simple exponential function to approach more complex equations. This EXPDEC2 function can be treated as the combination of two basic exponential functions, whose parameters come from get_exponent.

Sample Function: Lorentz

Equation:

y = y0 + (2*A/PI) * w / (4*(x - xc)^2 + w^2)

Initialization code:

xc = peak_pos(x_y_curve, &w, &y0, NULL, &A);
A *= 1.57*w;

Description: In this initialization code, we first evaluate the peak width w, the baseline value y0, the peak center xc and the peak height (initially assigned to the variable A) by the peak_pos function. We then compute the peak area A by the following deduction: let H be the peak height, reached at x = xc; substituting x = xc into the equation gives H = 2*A/(PI*w), and therefore A = (PI/2)*H*w, which is approximately 1.57*H*w.

Sample Function: DoseResp

Equation:

y = A1 + (A2 - A1) / (1 + 10^((LOGx0 - x)*p))

Initialization code:

sort(x_y_curve);
A1 = min( y_data );
A2 = max( y_data );
LOGx0 = xaty50( x_y_curve );
double xmin, xmax;
x_data.GetMinMax(xmin, xmax);
double range = xmax - xmin;
if ( yatxmax(x_y_curve) - yatxmin(x_y_curve) > 0)
    p = 5.0 / range;
else
    p = -5.0 / range;

Description: Knowing the parameter meanings is very helpful and important for parameter initialization. In the dose-response reaction, A1 and A2 are the bottom and top asymptotes respectively, so we can initialize them with the minimum and maximum Y values. The parameter LOGx0 is the typical value at which 50% of the reaction happens, which is why we use the xaty50 function here.
As for the slope p, it does not matter exactly how you compute this value; however, the sign of the slope is important.

Initialization code:

Curve x_curve, y_curve;
bool bRes = get_interpolated_xz_yz_curves_from_3D_data(x_curve, y_curve, x_data, y_data, z_data, true);
if(!bRes)
    return;
xc = peak_pos(x_curve, &w1, &z0, NULL, &A);
yc = peak_pos(y_curve, &w2);

Description: One idea for evaluating surface function initial values is to solve the problem in a plane. Take this Gauss2D function as an example: it is obvious that the maximum Z value is at the point (xc, yc), so we can use the get_interpolated_xz_yz_curves_from_3D_data function to get the characteristic curves on the XZ and YZ planes, and then use the peak_pos function to evaluate the other peak attributes, such as peak width and peak height.
http://cloud.originlab.com/doc/en/Origin-Help/FitPara-INI
Create a fresh SD card

To avoid conflicts with other software you may have installed, I would recommend starting off with a fresh SD card by writing the latest Raspbian image to it. I use Etcher to write Raspberry Pi images, and for my initial experiments with Flask I used the Jessie Lite image from the official download page.

Enable SSH

By default SSH is disabled. If you want to configure the Pi over the network from another computer, it can be enabled by either:

- Creating a blank file named "ssh" in the boot partition (in Windows this is the only partition you can access)
- Using the raspi-config utility to enable SSH with a monitor and keyboard attached to the Pi

Take a look at the Enabling SSH on the Pi guide for more information.

Find IP Address

Find out the IP address of your Pi. If you are using a monitor and keyboard you can run:

ifconfig

It will most likely be of the form 192.168.#.#. When writing this tutorial my Pi was using 192.168.1.19. If you are connecting remotely via SSH then you can use an IP scanner to find it, or it will be listed somewhere in your router settings.

Update & Change Password

When enabling SSH I would strongly recommend changing the default password from "raspberry"! Use:

passwd

Set the new password and then run:

sudo raspi-config

Select "Advanced" followed by "Expand Filesystem". To ensure we will be installing the latest packages, run the following two commands:

sudo apt-get update
sudo apt-get -y upgrade

This process may take 5-10 mins.

Install pip

Before we can install Flask we need to install pip, the Python package manager:

sudo apt-get -y install python3-pip

Install Flask

Now it's time to install Flask:

sudo pip3 install flask

I received some errors in the output but at the end it reported "Successfully installed flask".

Create test Flask app

Now that Flask is installed we need to create a small test site to check everything is working. In this tutorial I will assume the test site is called "testSite".
You can use whatever name you like but you will need to swap all references to "testSite" with your name. Create a new folder:

cd ~
mkdir testSite

Navigate to the new folder and use the following command to create a new Python script:

cd testSite
sudo nano testSite.py

Then paste in the following code:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def index():
    return "<html><body><h1>Test site running under Flask</h1></body></html>"

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)

Press "CTRL-X", "Y" and "Enter" to save and return to the command prompt. This script defines a simple one-page website.

Testing the Python web server

You can now run the script using:

python3 testSite.py

If you visit the IP address of your Pi in a browser the test site should be visible. Note that Flask uses port 5000 by default and you need to replace 192.168.1.19 with your Pi's actual IP address.

Adding additional pages

The script can be modified to add additional "pages". Take a look at the example below:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def index():
    return "<html><body><h1>Test site running under Flask</h1></body></html>"

@app.route("/hello")
def hello():
    return "<html><body><h1>This is the hello page</h1></body></html>"

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)

It adds an additional "route" called "hello". This page will be displayed when you visit the hello sub-directory.

Even more routes

You can also pull information from the URL into your script to create more elaborate page combinations. In this example we've added /user/<username> and /post/<post_id> routes.
from flask import Flask, render_template
app = Flask(__name__)

@app.route("/")
def index():
    data = ['Index Page', 'My Header', 'red']
    return render_template('template1.html', data=data)

@app.route("/hello")
def hello():
    data = ['Hello Page', 'My Header', 'orange']
    return render_template('template1.html', data=data)

@app.route('/user/<username>')
def show_user(username):
    # show the user profile for that user
    return 'User %s' % username

@app.route('/post/<int:post_id>')
def show_post(post_id):
    # show the post with the given id, the id is an integer
    return 'Post %d' % post_id

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)

Enter <ip_addr>:5000/user/john or <ip_addr>:5000/post/42 and a page is displayed with either the name or the post ID as part of the content.

Using template pages

Rather than define your HTML page within the script, you can use template files to hold the bulk of the HTML. This makes the script much easier to handle when your pages are a bit more complicated. Flask looks for templates in the "templates" directory. Create a new directory for templates:

mkdir /home/pi/testSite/templates
cd /home/pi/testSite/templates
nano template1.html

Then paste in this example template:

<!DOCTYPE html>
<html>
<head>
  <title>{{ data[0] }}</title>
  <link rel="stylesheet" href='/static/style.css' />
</head>
<body>
  <h1>{{ data[1] }}</h1>
  Favourite Colour : {{ data[2] }}
</body>
</html>

The testSite.py can then be updated with:

nano testSite.py

and the content replaced with:

from flask import Flask, render_template
app = Flask(__name__)

@app.route("/")
def index():
    data = ['Index Page', 'My Header', 'red']
    return render_template('template1.html', data=data)

@app.route("/hello")
def hello():
    data = ['Hello Page', 'My Header', 'orange']
    return render_template('template1.html', data=data)

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)

When the two "routes" are activated the same template is used but the values passed to it are different.
So the visitor sees a slightly different page. You can enhance the templates with HTML and CSS. The great thing with templates is that they keep the main Python script focused on functionality and leave the layout and aesthetics to the template file.

Debug Mode

In the examples the "debug" flag is set to True. This runs Flask in debug mode, which automatically reloads Flask when you update the script. It also provides error messages if the page fails to load. If you expose the site to the internet the debug flag should be set to False.

Auto-running Script on Boot

If you want the Python script to automatically run when the Pi boots you can use this technique:

crontab -e

If prompted, select an editor to use. I tend to use "nano". Insert the following line at the bottom of the comments block:

@reboot /usr/bin/python3 /home/pi/testSite/testSite.py &

Press "CTRL-X", "Y" and "Enter" to save and return to the command prompt. When you reboot, this will run "testSite.py". The "&" ensures it runs in the background. Then run:

sudo raspi-config

Select "Boot options" and "Desktop/CLI", then select "Console Autologin". This means when the Pi boots it will automatically log in as the Pi user. Reboot using:

sudo reboot

and your webpages should be available at the Pi's IP address on your network.

Download Scripts

The example scripts and templates in this tutorial are available in my BitBucket repository. They can be downloaded using these links:

testSite1.py
testSite2.py
testSite3.py
testSite4.py
template1.py

You can download directly to your Pi using:

wget <url>

where <url> is one of the script URLs above. Remember to download the files to the correct directory. Templates should go in the "templates" directory.

Official Documentation & Other Resources

There is a lot more information on the official Flask documentation page. It's also worth taking a look at the Raspberry Pi Foundation – Build a Python Web Server with Flask tutorial.

There's a small typo.
When you're creating the template, you have:

nano index.html

but you really mean:

nano template1.html

Other than that, a good introduction to a lot of neat stuff. — Graeme

Thanks for spotting that! I've updated the article.

Not to mention that the first example's HTML creates a header with the text "Test site running under Flask" while the display output shows "This is a test site." Ooops!

I've updated the screenshot to make it more consistent. I did the original and then tidied up the HTML 🙂

Do you need to set the document type with Flask? I tried the same thing with a NodeJS-based approach and the HTML rendered fine on some browsers but gave me raw text on a phone browser.

I'm not entirely sure, but the example templates in the Flask documentation set it.
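If you want to check the routes without opening a browser at all, Flask ships with a test client that calls the app in-process. Here is a minimal sketch (it assumes Flask is installed as described in the tutorial; the app below is a trimmed copy of the tutorial's testSite.py):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "<html><body><h1>Test site running under Flask</h1></body></html>"

@app.route("/user/<username>")
def show_user(username):
    return 'User %s' % username

# Exercise the routes in-process -- no server, port, or browser needed.
client = app.test_client()
assert client.get("/").status_code == 200
assert b"Flask" in client.get("/").data
assert client.get("/user/john").data == b"User john"
print("all routes respond as expected")
```

The same approach works against the template-based version of the script, as long as the templates directory is present.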
https://www.raspberrypi-spy.co.uk/2017/07/create-a-basic-python-web-server-with-flask/
As part of my homework (actually this part is for extra credit), I am supposed to write a program that prompts the user to input a decimal number and outputs the number rounded to the nearest integer, using only things I have learned to this point. I couldn't figure it out. I read ahead in the book about if-else statements and came up with the following, but is there a way to do it without if-else statements? (I am only 1 week into this class, so I can't use anything like the Math.round method. I am supposed to do this programmatically):

// Chapter 2 programming exercises 6.)
import java.util.*;

public class Ch2_PrExercises6 {
    public static void main(String[] args) {
        Scanner tv = new Scanner(System.in);
        int numResult = 0;
        double numEntered;

        System.out.println("Enter in a decimal number to be rounded to the nearest whole number, then press Enter: ");
        System.out.println();
        numEntered = tv.nextDouble();

        if (numEntered % 1 >= (1/2)) {
            numResult = (int)(numEntered) + 1;
        } else {
            numResult = (int)(numEntered);
        }

        System.out.println("Your number rounded to the nearest whole number = " + numResult);
    }
}

thank you in advance!
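One thing to note first: (1/2) in the posted code is integer division and evaluates to 0, so the comparison never actually tests against one half. And to answer the question, the rounding itself needs no if-else at all: add 0.5 and let the cast truncate. A minimal sketch (class and method names are my own, and it assumes non-negative input; a negative number would need 0.5 subtracted instead):

```java
// Round a double to the nearest integer without if/else:
// shift by 0.5, then let the (int) cast truncate.
public class RoundDemo {
    static int roundToNearest(double x) {
        // assumes x >= 0; for negative x you would subtract 0.5 instead
        return (int) (x + 0.5);
    }

    public static void main(String[] args) {
        System.out.println(roundToNearest(2.4)); // prints 2
        System.out.println(roundToNearest(2.5)); // prints 3
        System.out.println(roundToNearest(7.0)); // prints 7
    }
}
```

This works because truncation always rounds toward zero, so shifting the value up by half a unit first turns truncation into round-to-nearest for non-negative input.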
http://www.javaprogrammingforums.com/whats-wrong-my-code/10578-brand-new-java-programming.html
CSS3 Interview Questions And Answers For Experienced

Here Coding Compiler shares a very good list of 75 CSS3 interview questions asked in various UI development interviews by MNC companies. We are sure that these CSS interview questions will help you to crack your next CSS job interview. All the best for your future, and happy CSS learning.

CSS3 Interview Questions

- What is the difference between CSS2 and CSS3?
- What are the new features of CSS3?
- What are the CSS3 modules?
- What are CSS3 media queries?
- What are CSS3 media types?
- What are CSS3 selectors?
- How can you create rounded corners in CSS3?
- What are the associated border-radius properties?
- How can you create a CSS3 property for each corner?
- Is it possible to use an image as a border in CSS3?

CSS3 Interview Questions And Answers

Let's start learning about various CSS3 interview questions and answers for experienced developers.

Difference Between CSS2 and CSS3

- Modules
- Media Queries
- Namespaces
- Selectors
- Color

1) CSS3 Modules

1) The main difference between CSS2 and CSS3 is that CSS3 is divided into different sections, called modules.
2) In CSS2 everything is submitted as a single document with all the Cascading Style Sheet information within it.
3) These modules make it much easier for different browsers to accept and implement various aspects of CSS3.
4) There is a wider range of browser support for CSS3 modules than for CSS and CSS2.

CSS3 Modules List

- Selectors
- Box Model
- Backgrounds
- Image Values and Replaced Content
- Text Effects
- 2D Transformations
- 3D Transformations
- Animations
- Multiple Column Layout
- User Interface

2) CSS3 Media Queries

1) In CSS2 we have media types: users can define different style rules for different media types like computer screens, printers, and handheld devices.
2) In CSS3, instead of using media types, the CSS2 media types idea is extended with media queries.
3) Unlike CSS2 media types, which look for a type of device, CSS3 media queries look at the capability of the device.
4) CSS3 media queries look at the width and height of the viewport, the width and height of the device, the orientation, and the resolution of the screen.

CSS3 Media Types

all – used for all media type devices
print – used for printers
screen – used for computer screens, tablets, smart-phones etc.
speech – used for screen readers that "read" the page out loud

CSS3 Media Query Example

If the viewport is 480 pixels wide or wider, the body background color will be changed to blue.

@media screen and (min-width: 480px) {
    body {
        background-color: blue;
    }
}

3) CSS3 Namespaces

The CSS Namespaces module defines syntax for using namespaces in CSS. It defines the @namespace rule for declaring a default namespace and for binding namespaces to namespace prefixes.

@namespace "";
@namespace svg "";

The first rule declares a default namespace to be applied to names that have no explicit namespace component. The second rule declares a namespace prefix svg that is used to apply the namespace where the svg namespace prefix is used.

4) CSS3 Selectors

In CSS3 a few new selectors and pseudo-classes are introduced; let's discuss them.

1) attribute beginning matches exactly: element[foo^="bar"] – the element has an attribute called foo whose value begins with "bar"
2) attribute ending matches exactly: element[foo$="bar"] – the element has an attribute called foo whose value ends with "bar"
3) attribute contains match: element[foo*="bar"] – the element has an attribute called foo whose value contains the string "bar"

CSS3 new pseudo-classes:

:root – the root element of the document.
In HTML this is always the html element.
:last-child – matches the last child element of the parent
:first-of-type – matches the first sibling element of that type
:last-of-type – matches the last sibling element of that type
:only-child – matches the element that is the only child of its parent
:only-of-type – matches the element that is the only one of its type
:empty – matches the element that has no children (including text nodes)
:target – matches an element that is the target of the referring URI
:enabled – matches the element when it is enabled
:disabled – matches the element when it is disabled
:checked – matches the element when it is checked (radio button or checkbox)
:not(s) – matches when the element does not match the simple selectors

New CSS3 Style Properties

Many graphics-related properties are introduced in CSS3.

1) border-radius, box-shadow, flexbox and even CSS Grid are newer styles introduced in CSS3.
2) In CSS3 the box model is not changed, but using new style properties users can change the background, border and styles of a box.
3) In CSS3, using properties like background-image, background-position, and background-repeat, users can specify multiple background images to be placed on top of one another.
4) The CSS3 background-clip property defines how the background image should be clipped.
5) The CSS3 background-origin property determines whether the background should be placed in the padding box, the border box, or the content box.
6) The CSS3 background-size property allows you to indicate the size of the background image. This property allows users to stretch smaller images to fit the page.
7) CSS borders can have the styles solid, double, dashed, and image. In addition to the existing border properties, CSS3 brings in the ability to create rounded corners.
8) Some new border-radius properties are introduced in CSS3.
9) border-top-right-radius, border-bottom-right-radius, border-bottom-left-radius, border-top-left-radius – these properties allow you to create rounded corners on your borders.
10) border-image-source – specifies the image source file to be used instead of the border styles already defined
11) border-image-slice – represents the inward offsets from the border image edges
12) border-image-width – defines the value of the width for your border image
13) border-image-outset – specifies the amount by which the border image area extends beyond the border box
14) border-image-repeat – defines how the sides and middle parts of the border image should be tiled, stretched or scaled
15) border-image – the shorthand property for all the border image properties
16) column-width – defines what the width of your columns should be
17) column-count – defines the number of columns on the page
18) columns – shorthand property where you can define either the width or the number
19) column-gap – defines the width of the gaps between the columns
20) column-rule-color – defines the color of the rule
21) column-rule-style – defines the style of the rule (solid, dotted, double, etc.)
22) column-rule-width – defines the width of the rule
23) column-rule – a shorthand property defining all three column rule properties at once
24) CSS Template Layout module and CSS3 Grid Positioning module – creating grids with CSS
25) CSS3 Text module – outline text and even create drop-shadows with CSS
26) CSS3 Color module – with opacity
27) Changes to the box model – including a marquee property that acts like the IE marquee tag
28) CSS3 User Interface module – giving you new cursors, responses to actions, required fields, and even resizing elements
29) CSS3 Ruby module – provides support for languages that use textual ruby to annotate documents
30) CSS3 Paged Media module – for even more support for paged media (paper, transparencies, etc.)
31) Generated content – running headers and footers, footnotes, and other content that is generated programmatically, especially for paged media
32) CSS3 Speech module – changes to aural CSS
33) CSS3 supports additional color properties: RGBA colors, HSL colors, HSLA colors, and opacity.

CSS3 Interview Questions And Answers For Experienced

1) How can you create rounded corners in CSS3?

A) By using the CSS3 border-radius property, we can give rounded corners to boxes or text.

Sample CSS3 code to create rounded corners:

#roundcorners {
    border-radius: 60px/15px;
    background: #FF0001;
    padding: 10px;
    width: 200px;
    height: 150px;
}

2) What are the associated border-radius properties?

A) Alongside the border-radius shorthand, there are four corner properties:

- border-radius – sets all four border radius properties at once
- border-top-left-radius – sets the border radius of the top left corner
- border-top-right-radius – sets the border radius of the top right corner
- border-bottom-right-radius – sets the border radius of the bottom right corner
- border-bottom-left-radius – sets the border radius of the bottom left corner

3) How can you create a CSS3 property for each corner?
A) We can create a property for each corner by defining a style for each corner; see the example below:

<style>
#roundcorners1 {
    border-radius: 15px 50px 30px 5px;
    background: #a44170;
    padding: 20px;
    width: 100px;
    height: 100px;
}
#roundcorners2 {
    border-radius: 15px 50px 30px;
    background: #a44170;
    padding: 20px;
    width: 100px;
    height: 100px;
}
#roundcorners3 {
    border-radius: 15px 50px;
    background: #a44170;
    padding: 20px;
    width: 100px;
    height: 100px;
}
</style>

4) Is it possible to use an image as a border in CSS3?

A) Yes, it is possible: by using the CSS3 border-image property we can use an image as a border.

5) What are the associated border image properties in CSS3?

A) There are four major border image properties:

- border-image-source – used to set the image path
- border-image-slice – used to slice the border image
- border-image-width – used to set the border image width
- border-image-repeat – used to set the border image as rounded, repeated or stretched

6) Can you write CSS3 code for creating a border image?

A) Here is the CSS3 code for creating a border from an image:

#borderimg {
    border: 10px solid transparent;
    padding: 15px;
    border-image-source: url(/css/images/border-bg.png);
    border-image-repeat: round;
    border-image-slice: 30;
    border-image-width: 10px;
}

7) What is the multi background property in CSS3?

A) The multi background property is used to add one or more images to the background in CSS3.

8) What are the most commonly used multi background properties in CSS3?

A) There are four most commonly used multi background properties:

- background-clip – used to declare the painting area of the background
- background-image – used to specify the background image
- background-origin – used to specify the position of the background images
- background-size – used to specify the size of the background images

9) Can you write CSS3 code for creating multi background images?

A) Here is the CSS3 code for creating multi background images.
<style>
#multibackgroundimg {
    background-image: url(/css/images/logo1.png), url(/css/images/border1.png);
    background-position: left top, left top;
    background-repeat: no-repeat, repeat;
    padding: 75px;
}
</style>

10) What are the new color properties introduced in CSS3?

A) In CSS3 a few new color properties are introduced:

- RGBA colors
- HSL colors
- HSLA colors
- Opacity

Advanced CSS3 Interview Questions And Answers

11) What does RGBA stand for in CSS3?

A) RGBA stands for Red Green Blue Alpha.

12) What does HSL stand for in CSS3?

A) HSL stands for hue, saturation, lightness.

13) What does HSLA stand for in CSS3?

A) HSLA stands for hue, saturation, lightness and alpha.

14) What is a gradient in CSS3?

A) A gradient displays a smooth transition between two or more colors in one element.

15) What are the types of gradients in CSS3?

A) In CSS3 there are mainly two types of gradients:

- Linear gradients (down/up/left/right/diagonally)
- Radial gradients

16) How can you add gradients to your project?

A) All gradients are read from a gradients.json file which is available in this project's repo. Simply add your gradient details to it and submit a pull request.

17) How can you create shadow effects in CSS3?

A) We can create shadow effects for text using the text-shadow property and for boxes using the box-shadow property.

18) Can you write CSS3 code to create a shadow effect?

A) Here is sample code for shadow effects:

Text shadow for a text element:

h1 {
    text-shadow: 2px 2px;
}

Box shadow for a box element:

<style>
div {
    width: 300px;
    height: 100px;
    padding: 15px;
    background-color: red;
    box-shadow: 10px 10px;
}
</style>

19) What are the newly introduced text-related features in CSS3?

A) The main new text-related features are:

- text-overflow
- text-emphasis
- text-align-last
- word-wrap
- word-break

20) What is the text-overflow property used for in CSS3?

A) The text-overflow property determines how overflowed content that is not displayed is signaled to users.
Example 1:

p.text1 {
    white-space: nowrap;
    width: 400px;
    border: 2px solid #000000;
    overflow: hidden;
    text-overflow: clip; /* clips the overflow text without any marker */
}

Example 2:

p.text2 {
    white-space: nowrap;
    width: 300px;
    border: 2px solid #000000;
    overflow: hidden;
    text-overflow: ellipsis; /* indicates overflow text with dots ... */
}

Real-Time CSS3 Interview Questions And Answers

21) What is the word-break property used for in CSS3?

A) In CSS3 word-break is used to control how words break at the end of a line.

Example 1:

<style>
p.text1 {
    width: 150px;
    border: 2px solid #000000;
    word-break: keep-all; /* line breaks are not allowed within words */
}

Example 2:

p.text2 {
    width: 150px;
    border: 2px solid #000000;
    word-break: break-all; /* the word may be broken at any character */
}
</style>

22) What is the CSS3 word-wrap property?

A) In CSS3 word-wrap is used to break a long word and wrap it onto the next line.

23) What are the different web font formats in CSS3?

A) Web fonts allow users to use fonts in CSS3 that are not installed on the local system. There are five web font formats:

1) TTF – TrueType Fonts
2) OTF – OpenType Fonts
3) WOFF – The Web Open Font Format
4) SVG Fonts
5) EOT – Embedded OpenType Fonts

24) What are 2D transforms in CSS3?

A) In CSS3, using 2D transforms we can reshape and reposition elements: translate, rotate, scale, and skew.

25) What are the common values used in 2D transforms?
A) Here are some commonly used values in 2D transforms:

matrix(n,n,n,n,n,n) – defines a matrix transform with six values
translate(x,y) – transforms the element along the x-axis and y-axis
translateX(n) – transforms the element along the x-axis
translateY(n) – transforms the element along the y-axis
scale(x,y) – changes the width and height of the element
scaleX(n) – changes the width of the element
scaleY(n) – changes the height of the element
rotate(angle) – rotates the element by the given angle
skewX(angle) – defines a skew transform along the x-axis
skewY(angle) – defines a skew transform along the y-axis

26) What are 3D transforms in CSS3?

A) Using 3D transforms, we can move an element along the x-axis, y-axis and z-axis.

27) What are the common values used in 3D transforms?

A) Here are some commonly used values in 3D transforms:

matrix3d(n,n,n,n,n,n,n,n,n,n,n,n,n,n,n,n) – transforms the element using a matrix of 16 values
translate3d(x,y,z) – transforms the element using the x-axis, y-axis and z-axis
translateX(x) – transforms the element using the x-axis
translateY(y) – transforms the element using the y-axis
translateZ(z) – transforms the element using the z-axis
scaleX(x) – scale transforms the element using the x-axis
scaleY(y) – scale transforms the element using the y-axis
scaleZ(z) – scale transforms the element using the z-axis
rotateX(angle) – rotate transforms the element using the x-axis
rotateY(angle) – rotate transforms the element using the y-axis
rotateZ(angle) – rotate transforms the element using the z-axis

28) What are CSS3 animations?

A) In CSS3, animation is the process of making shape changes and creating motion with elements.

@keyframes – keyframes control the intermediate animation steps in CSS3.

29) How can you create multi columns in CSS3?
A) In CSS3 the multi-column feature allows users to lay text out in multiple columns, like a newspaper.

30) What are the values associated with multi columns?

A) Here is the list of the most commonly used multi-column properties:

- column-width
- column-count
- columns
- column-gap
- column-rule-color
- column-rule-style
- column-rule-width
- column-rule

CSS3 Technical Interview Questions And Answers
This property allows a transition effect to change speed over its duration. Example: div { transition-timing-function: linear; } 37) What is CSS text-indent Property? A) The text-indent property specifies the indentation of the first line in a text-block. Example: p { text-indent: 50px; } 38) What is CSS transform-origin Property? A) The transform-origin property allows you to change the position of transformed elements. 2D transformations can change the x- and y-axis of an element. 3D transformations can also change the z-axis of an element. Example: div { transform: rotate(45deg); transform-origin: 20% 40%; } 39) What is CSS hanging-punctuation Property? A) The hanging-punctuation property specifies whether a punctuation mark may be placed outside the line box at the start or at the end of a full line of text. Example: p { hanging-punctuation: first; } 40) What is CSS counter-increment Property? A) The counter-increment property increases or decreases the value of one or more CSS counters. The counter-increment property is usually used together with the counter-reset property and the content property. Example: body { /* Set “my-sec-counter” to 0 */ counter-reset: my-sec-counter; } h2:before { /* Increment “my-sec-counter” by 1 */ counter-increment: my-sec-counter; content: “Section ” counter(my-sec-counter) “. “; } CSS Interview Questions For Experienced 41) What is CSS background-attachment Property? A) The background-attachment property sets whether a background image scrolls with the rest of the page, or is fixed. Example: body{ background-image: url(“img_tree.gif”); background-repeat: no-repeat; background-attachment: fixed; } 42) What is CSS backface-visibility Property? A) The backface-visibility property defines whether or not the back face of an element should be visible when facing the user. The back face of an element is a mirror image of the front face being displayed. This property is useful when an element is rotated. 
Example: #div1 { backface-visibility: hidden; } #div2 { backface-visibility: visible; } 43) What are CSS functions? A) CSS functions are used as a value for various CSS properties. attr() calc() cubic-bezier() hsl() hsla() linear-gradient() radial-gradient() repeating-linear-gradient() repeating-radial-gradient() rgb() rgba() var() 44) What is CSS attr() funtion? A) The attr() function returns the value of an attribute of the selected elements. Example: a:after { content: ” (” attr(href) “)”; } 45) What is CSS calc() function? A) The calc() function performs a calculation to be used as the property value. Example: #div1 { position: absolute; left: 50px; width: calc(100% – 100px); border: 1px solid black; background-color: yellow; padding: 5px; text-align: center; } 46) What is the cubic-bezier() function? A) The cubic-bezier() function defines a Cubic Bezier curve. Example: div { width: 100px; height: 100px; background: red; transition: width 2s; transition-timing-function: cubic-bezier(0.1, 0.7, 1.0, 0.1); } 47) What is the CSS3 hsl() function? A) The hsl() function define colors using the Hue-saturation-lightness model (HSL). HSL stands for hue, saturation, and lightness – and represents a cylindrical-coordinate representation of colors. Example: #p1 {background-color:hsl(120,100%,50%);} /* green */ #p2 {background-color:hsl(120,100%,75%);} /* light green */ #p3 {background-color:hsl(120,100%,25%);} /* dark green */ #p4 {background-color:hsl(120,60%,70%);} /* pastel green */ 48) What is CSS3 hsla() Function? A) The hsla() function define colors using the Hue-saturation-lightness-alpha model (HSLA). HSLA color values are an extension of HSL color values with an alpha channel – which specifies the opacity of the color. 
Example: #p1 {background-color:hsla(120,100%,50%,0.3);} /* green */ #p2 {background-color:hsla(120,100%,75%,0.3);} /* light green */ #p3 {background-color:hsla(120,100%,25%,0.3);} /* dark green */ #p4 {background-color:hsla(120,60%,70%,0.3);} /* pastel green */ 49) What is CSS linear-gradient() Function? A) The linear-gradient() function sets a linear gradient as the background image. To create a linear gradient you must define at least two color stops. Example: #grad { background: linear-gradient(red, yellow, blue); } 50) What is CSS radial-gradient() Function? A) The radial-gradient() function sets a radial gradient as the background image. A radial gradient is defined by its center. To create a radial gradient you must define at least two color stops. Example: #grad { background: radial-gradient(red, green, blue); } References: Thoughtco | Tutorialspoint | W3Schools OTHER - DB2 Interview Questions 2 thoughts on “CSS3 Interview Questions And Answers For Experienced” That is not boarder. It is ” border”. check this page : Great post.
#include <deal.II/base/patterns.h>

Patterns::Selection tests for a string being one of a sequence of values given like a regular expression. For example, if the string given to the constructor is "red|blue|black", then the match function returns true exactly if the string is either "red" or "blue" or "black". Spaces around the pipe signs do not matter and are eliminated. (Definition at line 382 of file patterns.h.)

Constructor: takes the given parameter as the specification of valid strings. (Definition at line 534 of file patterns.cc.)

match(): returns true if the string is an element of the description list passed to the constructor. Implements Patterns::PatternBase. (Definition at line 546 of file patterns.cc.)

description(): returns a description of the pattern that valid strings are expected to match. Here, this is the list of valid strings passed to the constructor. Implements Patterns::PatternBase; reimplemented in Patterns::Bool. (Definition at line 578 of file patterns.cc.)

clone(): returns a copy of the present object, newly allocated on the heap. Ownership of that object is transferred to the caller of this function. Implements Patterns::PatternBase; reimplemented in Patterns::Bool. (Definition at line 613 of file patterns.cc.)

memory_consumption(): determines an estimate for the memory consumption (in bytes) of this object. Reimplemented from Patterns::PatternBase. (Definition at line 620 of file patterns.cc.)

create(): creates a new object if the start of description matches description_init. Ownership of that object is transferred to the caller of this function. (Definition at line 629 of file patterns.cc.)

sequence: list of valid strings as passed to the constructor. This string is not made constant, as it is processed somewhat in the constructor. (Definition at line 434 of file patterns.h.)

description_init: initial part of the description. (Definition at line 439 of file patterns.h.)
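The documented matching semantics ("red|blue|black", with spaces around pipes ignored) can be sketched as a small standalone function. This is an illustration of the behavior only, not deal.II's implementation:

```cpp
#include <sstream>
#include <string>

// Trim leading/trailing spaces from an alternative.
static std::string trim(const std::string &s) {
    const auto b = s.find_first_not_of(' ');
    if (b == std::string::npos)
        return "";
    const auto e = s.find_last_not_of(' ');
    return s.substr(b, e - b + 1);
}

// Selection-style matching: split the pattern on '|', trim each
// alternative, and test for exact membership of the value.
bool selection_match(const std::string &pattern, const std::string &value) {
    std::stringstream ss(pattern);
    std::string alternative;
    while (std::getline(ss, alternative, '|'))
        if (trim(alternative) == value)
            return true;
    return false;
}
```

With this sketch, `selection_match("red|blue|black", "blue")` is true, while a near-miss such as `"reddish"` is rejected.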
Pycom, the people behind the WiPy and LoPy boards, are very generously providing financial support to get multi-threading implemented in MicroPython. This will be a really fantastic feature to have! Right now this is work in progress. The development is happening on the "threading" branch of the main repository, found here: ... /threading . Once it is working smoothly it will be merged into the master branch. [UPDATE: threading branch was merged into master] The plan is to implement the _thread module, which provides the fundamental functionality for multi-threading: starting new threads and creating mutex objects. For example you will be able to do the following:

Code:

import _thread

def thread_entry(arg):
    print('thread start', arg)

for i in range(4):
    _thread.start_new_thread(thread_entry, (i,))

- a new configuration option: MICROPY_PY_THREAD
- generic _thread module in py/ core (see py/modthread.c, py/mpthread.h)
- a thread safe memory manager and garbage collector (see py/gc.c, especially the GC_ENTER and GC_EXIT macros)
- thread safe NLR handlers (exception handling) for x86 and x86-64
- unix implementation of necessary thread hook functions using pthreads (see unix/mpthreadport.c)
- a test suite (see tests/thread/)

Code:

git checkout threading
cd unix
make
cd ../tests
./run-tests -d thread

That said, in its current form the unix implementation is not safe to use: there are many operations that you can do that will crash the interpreter. For example, modifying a list that is shared across threads will crash it (they can all read a list without problem). You can protect against such crashes by using a mutex/lock object (which you probably want to do anyway). To see some examples that do work look at the thread tests. Right now it's not clear whether the VM can remain GIL free. It can in principle, but it will require a lot of work to make everything safe (eg list, dict, set modifications).
Certainly though it would be very interesting if MicroPython can have threading without a GIL. The medium term goal is to apply a simple GIL to make everything safe, and then get threading working on the WiPy. Also pyboard will get threading soon enough. A GIL free VM/runtime may follow in the future. Note that having a GIL on bare-metal ports like WiPy and pyboard doesn't really make a difference (compared with no GIL) because there is only 1 CPU core to make use of. Regards, Damien.
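CPython ships the same low-level _thread module, so the locking pattern the post recommends for shared mutable state can be tried there directly. The worker/sleep structure below is illustrative, not from the post:

```python
import _thread
import time

# A lock protects the shared list, as the post recommends for
# objects mutated from multiple threads.
lock = _thread.allocate_lock()
results = []

def worker(arg):
    with lock:  # serialize mutation of the shared list
        results.append(arg)

for i in range(4):
    _thread.start_new_thread(worker, (i,))

time.sleep(0.5)  # crude wait; _thread has no join() primitive
with lock:
    print(sorted(results))
```

On a bare-metal port the same code should work once the _thread module lands, though the crude sleep-based wait is a stand-in for proper synchronization.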
I'm trying to create a program that generates all permutations of [1,2,3,4]. I recursively generate all the permutations of [1,2,3]: [1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1], and then put 4 back into the sequence of numbers. For example, with [1,2,3], we get [4,1,2,3], [1,4,2,3], [1,2,4,3], and [1,2,3,4]. We "interleave" 4 with the sequence [1,2,3]. The following code shows the interleave and permute functions. I'm currently stuck at the permute function, which should return a vector containing all permutations of the first n positive integers. It is recursive: it generates all permutations of 1 up to n-1, applies interleave to each of those permutations, and puts all the resulting permutations into a vector. This vector then contains all permutations of 1 up to n. Is there a way to permute the numbers recursively without using the next_permutation STL algorithm, since I'm not allowed to use it? Any help would be appreciated.

#include <iostream>
#include <utility>
#include <vector>
using namespace std;

vector<vector<int> > interleave(int x, const vector<int>& v){
    vector<int> p(v.size()+1);
    vector<vector<int> > q(v.size()+1);
    for(vector<int>::size_type i = 0; i < p.size(); i++)
    {
        p = v;
        p.insert(p.begin()+i,x);
        q.push_back(p);
    }
    return q;
}

vector<vector<int> > permute(size_t n){
    vector<int> p;
    vector<vector<int> > q;
    if(n == 0){
        return q;
    }
    for(size_t i = 1; i < n; i++){
        p.push_back(i);
    }
    return q;
}

int main(){
    permute(4);
    return 0;
}
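For reference, here is one possible way to complete the recursive scheme described above. This is a sketch, not the thread's accepted answer. Two details differ from the question's code: the result vectors start empty (constructing them with a size pre-fills them with empty vectors), and the base case returns a single empty permutation so the recursion has something to interleave into:

```cpp
#include <cstddef>
#include <vector>

// Interleave x into every position of v, producing v.size()+1 vectors.
std::vector<std::vector<int>> interleave(int x, const std::vector<int>& v) {
    std::vector<std::vector<int>> out;  // start empty, unlike the question's q
    for (std::size_t i = 0; i <= v.size(); ++i) {
        std::vector<int> p = v;
        p.insert(p.begin() + i, x);
        out.push_back(p);
    }
    return out;
}

// All permutations of 1..n: take each permutation of 1..n-1 and
// interleave n into it at every position.
std::vector<std::vector<int>> permute(std::size_t n) {
    if (n == 0)
        return { {} };  // one empty permutation, the recursion's seed
    std::vector<std::vector<int>> result;
    for (const auto& smaller : permute(n - 1))
        for (const auto& p : interleave(static_cast<int>(n), smaller))
            result.push_back(p);
    return result;
}
```

With this version, permute(3) yields 6 permutations and permute(4) yields 24, as expected for n!.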
- Type: Bug
- Status: Closed
- Priority: P1: Critical
- Resolution: Done
- Affects Version/s: 5.12.0, 5.13.0
- Fix Version/s: 5.12.5, 5.13.1, 5.14.0 Alpha
- Component/s: QML: Declarative and Javascript Engine
- Labels:
- Environment: Windows 10, MSVC 2017, x64
- Commits: cca5d1ec2f2c1f038c7c933b6c57d89888fc683b (qt/qtdeclarative/5.12)

Beginning with Qt 5.12, a binding like typeof (name) will not update when the property name was undefined and then became defined as a context property. It worked in Qt 5.11 and before. Consider the following QML code:

import QtQuick 2.11
import QtQuick.Window 2.11
Window {
    visible: true
    Text {
        text: "model is " + typeof (model)
    }
}

Then I define the model context property through C++ code, like:

engine.rootContext()->setContextProperty("model", QVariant::fromValue(obj));

I expect the binding to be reevaluated; however, it isn't. It is now updated only if it binds directly to model (without the typeof check). The text remains model is null instead of changing to model is object. See the attachment for the full project demonstrating the problem. This breaks quite a lot of my code (preventing an upgrade from Qt 5.11), as I was using typeof frequently to avoid ReferenceError. Please let me know if any further information is needed.
I'm new to FitNesse. I would like to know certain things. Firstly, I have implemented a Web Service with certain methods using Eclipse. Then, I exported it to a WAR file, which is to be to be used with Tomcat. Then I used wsimport to create 'stubs' for my web service. The 'stubs' are just the interfaces. Now I want to know how to call the web service through my FitNesse fixture I'll be writing. I'm coding in JAVA. Is there any method through which I can call the web service method from my FitNesse fixture, keeping in mind the 'stubs' generated for the web service? I'm totally new to this. Help will be appreciated. Thanks! There are many ways to do what you describe. You could, for instance, create your own fixture (i.e. class containing test code) in Java that uses the stubs you generated to call your service. Or (what I prefer) is to call the services directly using HTTP posts, configured in the wiki, and execute XPath queries, configuring the XPaths either by writing Java code or on the wiki, on the responses you receive to check you service implementation. The latter approach is supported by fixtures (and ready to run FitNesse installation) I put on GitHub (). For specific information on how to call a web service see and depending on whether you want to use Slim or Fit. Sample for Slim: !2 Body via scenario Using a scenario allows us to generate multiple request, only changing certain values. !*> Scenario definition !define POST_BODY_2 { {{{ <s11:Envelope xmlns: <s11:Body> <ns1:GetCityWeatherByZIP xmlns: <ns1:ZIP>@{zip}</ns1:ZIP> </ns1:GetCityWeatherByZIP> </s11:Body> </s11:Envelope> }}} } |script|xml http test| |table template |send request | |post |${POST_BODY_2} |to |${URL} | |check |response status|200 | |show |response | |register prefix|weather |for namespace | |check |xPath |//weather:City/text()|@{City} | *! |send request | |zip |City | |10007|New York | |94102|San Francisco|
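As a sketch of the XPath-checking approach the answer mentions, the helper below runs an XPath query against a captured response body using only JDK classes. The class and method names (XmlResponseChecker, valueAt) are made up for illustration, they are not part of FitNesse or the GitHub fixtures referenced above, and namespace registration (like the weather prefix in the Slim sample) is omitted for brevity:

```java
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;
import java.io.StringReader;

// Hypothetical fixture-style helper: hold a service response body and
// evaluate XPath expressions against it to check the result.
class XmlResponseChecker {
    private final String response;

    XmlResponseChecker(String response) {
        this.response = response;
    }

    // Evaluate an XPath expression and return the matched string value.
    String valueAt(String xpath) {
        try {
            XPath xp = XPathFactory.newInstance().newXPath();
            return xp.evaluate(xpath, new InputSource(new StringReader(response)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

A fixture method like this could be wired into a Slim script table, with the response obtained from an HTTP post to the service under test.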
In this iOS Swift programming tutorial, I have covered how to create a Swift project with Xcode, and Swift language basics. Below are the topics covered in this tutorial.
1) How to create a Swift Project
2) Variables and Constants
3) Printing Variables
4) Conditional Statements
5) Control Statements (Loops)
6) Functions

1) How to create a Swift Project
Swift was introduced in iOS 8, so you need to download Xcode 6 to create a Swift project. Follow these steps:
1) In the Xcode menu, go to "File" -> "Project" and select the type of your application.
2) Select the language type as "Swift". Xcode 6 supports both Swift and Objective-C languages.
3) After creating the project, you can see the ViewController and AppDelegate files with the ".swift" extension.
4) If you want to create a new Swift file, right click on the project and go to "New File" -> "Swift File".

You can import any iOS API using "import":
import Foundation
import UIKit

2) Variables and Constants
Swift is a type safe language. It performs type checks when compiling your code and flags any mismatched types as errors. This enables you to catch and fix errors as early as possible in the development process.
Swift provides its own versions of all fundamental C and Objective-C types:
Int
Double
Float
Bool
String
Character
Swift also provides collection types, Array and Dictionary.
In Swift, you need not tell the compiler what data type you are going to use. To create a variable, use 'var'. var is mutable. To create a constant, use 'let'. 'let' is immutable.

var var2=20 //20 is an integer, so var2 is declared as Integer.
var2 = 20.0 //ERROR: assigning a float value to an integer.
var2 = 30 //OK
let const1=10
var var3 = "Ravi" //"Ravi" is a string, so var3's data type becomes String
var3 = 20 //ERROR: Cannot convert the expression's type '()' to type 'String'
//You can use almost any character you like for constant and variable names, including Unicode characters:
let π = 3.14159
var रवि="Ravi"

Note: A semicolon is not required at the end of each statement.
But if you want to write two statements in a single line, you need to use a semicolon.
var x=10; println(x)

Explicit type annotation
You can provide a type annotation when you declare a constant or variable, to be clear about the kind of values the constant or variable can store. Write a type annotation by placing a colon after the constant or variable name, followed by a space, followed by the name of the type to use.
var var6:(Int) = 30
var var7:(String) = "Ravi"
var var8:(Bool) = true
Conversion between integer and floating point numbers is made this way:
var var10:(Int) = Int(10.01) //Double is converted to Int
var var11:(Double) = Double(21) //Int is converted to Double

2.1) Swift Strings
Below are different examples on Swift strings.
//1.Create an empty string
var str = "" //empty string literal
var str1 = String() //empty string
//2.Check if a string is empty
if str.isEmpty { }
//3.Check the length of a string
countElements(str)
//4.String concatenation
var str2 = "Hello"
var str3 = "World"
var str4 = str2+" "+str3
//Iterate through a string
var str2 = "Hello"
for char in str2 {
    println(char) //each character
}
//5.Iterate through UTF-8
var name = "रवि"; //UTF8-Code 224 164 176 224 164 181 224 164
for code in name.utf8 {
    println(code); // each UTF8 code
}
//6.Formatting a string
var int1 = 33
var str5 = "Ravi age \(int1)" //Ravi age 33
//7.Comparing strings
if str2 == str4 {
    println("Equal")
}

2.2) Swift Arrays
Below are different examples on Swift Arrays.
//1.Creating an empty array
var empty = [];
//2.Create an initialized array
var names=["Ravi","Haya","Rama"]
var names2:String[] = ["Ravi","Haya","Rama"] //with explicit type annotation
var numbers = [1,2,4,5,6,7,8]
//3.Get the count of an array
println(numbers.count)
//4.
Iterate through each element in an array
//Method 1
for (var i=0;i<numbers.count;i++) {
    println(numbers[i])
}
//Method 2
for num in numbers {
    println(num)
}
//Method 3
for (index,value) in enumerate(numbers) {
    println("Index \(index): value \(value)")
}
//5.Create a sub array with an index range
var subnumbers = numbers[2..4] //4,5
subnumbers = numbers[2...4] //4,5,6
//6.To add an element
numbers.append(10);
//7.To insert at a specific index
numbers.insert(12,atIndex: 5)
//8.Remove the element at an index
var removed = numbers.removeAtIndex(0)
//9.Remove the last element
var removedLast = numbers.removeLast()

2.3) Swift Dictionary
Below are different examples on the Swift Dictionary.
//1.Create an empty dictionary
var emptyDict = Dictionary<Int,String>();
//2.Create a dictionary with key value pairs
var dict1:Dictionary<Int,String> = [ 1:"One",2:"Two",3:"Three"];
var dict2 = ["1":"One","2":"Two","3":"Three"];
//3.Iterate through a dictionary
for(key,value) in dict1 {
    println("key: \(key) value:\(value)")
}
//4.Dictionary elements are accessed using subscripts.
print(dict2["1"]);
dict2["1"]="Modified";
//5.Update an element
dict2.updateValue("Modified",forKey:"2");
//6.Remove an element
dict2["1"] = nil;
//or
dict2.removeValueForKey("1");

2.4) Swift Tuples
Swift provides a Python-like tuple data type. Tuples group multiple values into a single compound value.
//1.Creating a tuple
let digits = (1,"One") //digits is of type (Int,String)
//2.Read tuple data
let(num,name) = digits
println("Name \(num) Name:\(name)")
//3.Create a tuple with custom keys
let digits2 = (num:2,name:"Two")
println("Name \(digits2.num) Name:\(digits2.name)")

3) Printing Variables and Constants
You can use the println() function to print variables. To print a variable you need to wrap the variable name in parentheses: \(VARIABLE)
//1. Print a string
println("Ravi")
//2. Print variables
var x=10
var y=10
println("x: \(x) y:\(y)")
//3.
using NSLog
NSLog("Ravi")
NSLog("x:%d",x)

4) Conditional Statements
Swift provides all the conditional statements Objective-C provides. Swift conditional statements are similar to Objective-C's, but parentheses are not mandatory. The switch statement doesn't need a break statement; it does not fall through the bottom of each case.
//1.If else statement
var status = true
if status {
    println("YES")
} else {
    println("NO")
}
//2.Switch statement
let char = "a"
switch char {
case "a","b","c": println("First Three")
case "x","y","z": println("Last Three")
case "o": println("OOO")
default: println("NONE");
}
//3.Switch with range matching
let num = 1
switch num {
case 0...99: println("Below 100");
case 100...999: println("Between 100 & 999")
default: println("Out of range");
}

5) Control Statements (Loops)
Swift supports for-in, for, while, and do-while control statements. Below are the examples.
//1.for-in example
//read each character in a string
var str="Ravi";
for char in str {
    println(char);
}
//read each element in an array
var arr = [1,2,4,5]
for value in arr {
    println(value);
}
//.. defines a range from 1 to 10, does not include 10
for val in 1..10 {
    println(val);
}
//... defines a range from 1 to 10, includes 10
for val in 1...10 {
    println(val);
}
//2. For loop
for var i = 0;i<10;i++ {
    println(i);
}
//3. While loop
var i=0
while i < 10 {
    println(i)
    i++
}
//4. do-while loop
var i=0
do {
    i++
    println(i)
} while i < 10

6) Functions
You can read about Swift functions here:
Reference: Apple Documentation
Remember I told you that Groovy is really good for scripting? Time for some proof. Let's say you want to download and parse some HTML page and get the content of the third list on the page. How do you do that? Using URL and regular expressions? Here I'll tell you how it can be done in Groovy. I will use a Gumtree search result as an example:

// Grab the HTTPBuilder component from the Maven repository
@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.5.2')

// import of HttpBuilder related stuff
import groovyx.net.http.*

def http = new HTTPBuilder("" + "list_postings.pl?search_terms=car&search_location=London&ubercat=1")
def html = http.get([:])

You might think that the *html* var is just a string with the page content? No, it is actually an XML document read by Groovy's XmlSlurper, and HTTPBuilder is using the NekoHTML parser, which allows malformed HTML content. That is very common in web pages (for example, tags not being closed). Now, when we have an XML tree of the page, we can do really neat stuff with it. For example, we can find all XML elements with some class and do something with them:

html."**".findAll { it.@class.toString().contains("hlisting")}.each {
    // doSomething with each entry
}

This magic string traverses through all tags and uses a closure to filter them. The result is the collection of XML nodes matched by the closure, so you can iterate through it using the standard each. The iteration element will be an XML node, so you will still be able to traverse it deeper and extract information. Here is some code to get all ads from that result page, with a link to the full ad, the title of the ad, and the image URL:

def ads = html."**".findAll { it.@class.toString().contains("hlisting")}
    .collect { [
        link : it.A[0].@href.text(),
        title : it.A[0].@title.text(),
        imgUrl : it.A[0].IMG[0].@src.text()
    ] }

After that, ads will contain an array of maps with "link", "title" and "imgUrl" keys (check out the HTML returned by the server to get an idea of what is being parsed).
As you can see, HTML parsing of pages in Groovy using HTTPBuilder is really easy and fun. But that's not everything HTTPBuilder can do! It also provides tools to handle JSON/XML responses easily. I will discuss that later.
The ASP.NET MVC framework (which I will refer to as "MVC" in this article) encourages greater separation of concerns than the older ASP.NET web forms framework. There are several key differences between web forms and MVC.

A controller is a class that inherits from System.Web.Mvc.ControllerBase. MVC uses several conventions to find this class. First, it expects controller classes to be in the Controllers folder. Also, it expects a controller class name to end with "Controller". So if we tell MVC to look for a Product controller, it will look for the file Controllers\ProductController.cs or Controllers\ProductController.vb. An Action is a method within a Controller class that returns a System.Web.Mvc.ActionResult. The ActionResult represents the View data that is available when MVC renders output to the client. One way we can tell MVC to look for a controller is by typing a URL into the browser's address bar. Doing this causes MVC to use the routing engine. I described the routing engine in a previous article. The default routing assigned to a new MVC project looks for a URL with the following format

Controller/action/id

When the routing engine encounters a URL formatted like the one above, it looks for a controller named after the first part of the URL; an action method within that controller named after the second part of the URL; and a parameter to pass to that method in the third part of the URL. For example, if the user typed the following URL into the address bar: Customer/Details/1 , MVC would look for a class named CustomerController in Controllers\CustomerController.cs (assuming we are coding in C#). If it found this class, it would look in it for a method named "Details" that returns an ActionResult and that accepts a parameter. It would then call the Details method and pass it the parameter "1". The ActionResult returned by this method is used by the MVC View engine to render output to the client. I described MVC views in a previous article.
The code below is for a Controller Action method. It assumes the existence of the GetCustomer method that returns a single customer object. public ActionResult Details(Int32 id) { Customer cust = MVCDemoRepository.GetCustomer(id); return View(cust); } The View method called in the code above returns a ViewResult – a class that inherits from ActionResult. By passing the Customer object to this method, the ActionResult’s Model property is populated with the Customer object. Properties of that object can then be used within the view. Another way to pass data from the controller to the view is to populate the ViewData property. ViewData is a list of name-value pairs. You can populate this list in the controller and retrieve elements from it within the view. We can modify the Action method above to add to the ViewData list. public ActionResult Details(Int32 id) { ViewData["HelloMessage"] = "Good morning to you"; Customer cust = MVCDemoRepository.GetCustomer(id); ViewData["GoodbyeMessage"] = "Good night. See you later"; return View(cust); } By default, this controller routes the user to a view with the same name as the Action in a folder named after the controller. In this case, MVC will look in the \Views\Customer folder for a file named either Details.aspx or Details.ascx. If it cannot find either of these files in that folder, it will search in the Views\Shared folder. Here MVC is using configuration again to determine where to look for files. You can change the view MVC looks for by using an overload of the View method as in the following example return View("CustomerDetails", cust); The above line tells MVC to look for a view page named CustomerDetails.aspx or CustomerDetails.ascx in either \Views\Customer or \Views\Shared. 
Here is a complete listing of a Controller class using System; using System.Collections.Generic; using System.Web.Mvc; using MVCDemoController.Models; namespace MVCDemoController.Controllers { public class CustomerController : Controller { // GET: /Customer/ public ActionResult Index() { List<Customer> customers = MVCDemoRepository.GetAllCustomers(); return View(customers); } // GET: /Customer/Details/1 public ActionResult Details(Int32 id) { ViewData["HelloMessage"] = "Good morning to you"; Customer cust = MVCDemoRepository.GetCustomer(id); ViewData["GoodbyeMessage"] = "Good night. See you later"; return View(cust); } } } Below is a sample view to render the output from the Details controller action above <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MVCDemoController.Models.Customer>" %> <asp:Content Details </asp:Content> <asp:Content <h2>Details</h2> <div> <%=Html.Encode(ViewData["HelloMessage"]) %> </div> <fieldset> <legend>Fields</legend> <p> ID: <%= Html.Encode(Model.ID) %> </p> <p> <%= Html.Encode(Model.FirstName) %> </p> <p> <%= Html.Encode(Model.LastName) %> </p> </fieldset> <p> <%=Html.ActionLink("Back to List", "Index") %> </p> <div> <%=Html.Encode(ViewData["GoodbyeMessage"]) %> </div> </asp:Content> ID: 1 FirstName: David LastName: Giard Back to List The MVC Controller is used to retrieve data from the Model, to populate any extra data needed by the view and to determine which view to render. Understanding it is key to understaning MVC. Download demo code for this article at MVCDemoController.zip (281.08 KB) Episode 57 In this interview, Dr. David Truxall discusses the art of debugging and dives into WinDbg and other tools to debug production issues. The ASP.NET MVC framework (which I will refer to as "MVC" in this article) encourages developers to work closer to actual rendered HTML than does the more traditional web forms ASP.NET framework. 
The web form framework abstracted away much of the HTML, allowing developers to create sophisticated web pages simply by dragging controls onto a design surface. In fact, with the web forms framework, it is sometimes possible for someone with no knowledge of HTML to build an entire web application. But MVC’s view engine removes that abstraction, encouraging users to write more of their own HTML. By doing so, developers also get more control over what is rendered to the client. Some web developers may be surprised to learn that most of the server controls they are used to dragging onto a design surface do not work in MVC. This is because ASP.NET server controls are self-contained objects that encapsulate all their functionality, including the C# or VB code they run. This is contrary to the way that MVC works. In the MVC framework, all business logic code is in either the model (if it applies to the data) or in the controller if it applies to routing or output. An MVC view consists only of an ASPX page. By default it doesn't even contain a code-behind file. Let’s analyze a typical MVC view. Like a web form, an MVC view contains a page directive at the top. <%@ Page %> The Title, Language and MasterPageFile attributes should be familiar to anyone who has developed in ASP.NET. The meanings of these attributes have not changed in MVC. The Inherits attribute is also used in web forms development, but in this case, we are inheriting from a ViewPage that contains our model. The model represents the data we wish to render (I will write more about the MVC model in a future article.) By inheriting from a ViewPage, we provide the model data directly to the page and we can strongly type the keyword Model to whatever data type our model is. Mixed in with the HTML of our page, we can output some server side data by enclosing it in <% %> tags. If you wrote any web sites with classic ASP (the predecessor to ASP.NET), you probably used these tags. 
(I recently heard someone refer to them as "bee stingers"). If you place "=" in front of a value or variable, the view engine will output the value or contents of that variable. For example <%=System.DateTime.Now %> outputs the current date and time, like the following 10/2/2009 6:08:46 PM We mentioned earlier that the Model keyword is strongly-typed. For example, If we inherit our ASPX from System.Web.Mvc.ViewPage<TestMvc.Models.Customer>, then our model represents a Customer object and we can output a property of that model class. Assuming that our Model is uses the following Customer class: public class Customer { public string FirstName { get; set; } public string LastName { get; set; } } , we can output the FirstName property with the following markup: <%= Model.FirstName %> You probably already know that it is almost always a bad idea to output text directly for the screen without encoding it, as I did above. Failure to do so may leave your site open to scripting attacks. Fortunately MVC includes an Encode helper method to encode strings for safer output. That helper method (along with other helpers) is available via the Html property of the ViewPage from which our view inherits. We can call the helper method and encode the first name with the following markup. <%= Html.Encode(Model.FirstName) %> This outputs the FirstName property, but encodes it to prevent problems if the first name property somehow gets infected with some client-side script. Other helper methods of the ViewPage give you the ability to begin and end an HTML form, to render an HTML textbox, to render a hidden field, to render an HTML listbox, and perform many other useful functions. The code below will output a hyperlink with the text "Home" that links to the Index method in the Home controller. <%= Html.ActionLink("Home", "Index", "Home")%> Another way to use data in the view is to store it in ViewData. ViewData is a property of the view and contains a dictionary of name/value pairs. 
You can add a value to this dictionary in the controller with code like the following ViewData["Greetings"] = "Hello dummy"; and display a ViewData element in the view with markup like the following <%= Html.Encode(ViewData["Greetings"]) %> Below is the full markup for a sample MVC view page <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MVCDemoView.Models.Customer>" %> <asp:Content Index </asp:Content> <asp:Content <div> <%= Html.Encode(ViewData["Greetings"]) %> </div> <div> <%= Html.Encode(Model.FirstName) %> <%= Html.Encode(Model.LastName) %> </div> <div> <%= Html.ActionLink("Home", "Index", "Home")%> </div> </asp:Content> Here is the output of the view page above If you don’t like the view engine that ships with MVC, it is possible to replace it with your own. Some open source projects, such as Spark have already given users this ability. Using the MVC view engine encourages developers to have more control over the output sent to the client and provides a greater separation of concerns for the application as a whole. Download the code for this sample at MVCDemoView.zip (281.64 KB) I am months late producing this video. But now that it's finished, I want to show it off. Earlier this year, my son's 8th grade basketball team tied for the city championship. Here are highlights from the season..
needed such big changes. Since the router is such a big part of an application's architecture, this would potentially change some patterns I've grown to love. The idea of these changes gave me anxiety. Considering community cohesiveness and being that React Router plays a huge role in so many React applications, I didn't know how the community would accept the changes. A few months later, React Router 4 was released, and I could tell just from the Twitter buzz there was mixed feelings on the drastic re-write. It reminded me of the push-back the first version of React Router had for its progressive concepts. In some ways, earlier versions of React Router resembled our traditional mental model of what an application router "should be" by placing all the routes rules in one place. However, the use of nested JSX routes wasn't accepted by everyone. But just as JSX itself overcame its critics (at least most of them), many came around to believe that a nested JSX router was a pretty cool idea. So, I learned React Router 4. Admittedly, it was a struggle the first day. The struggle was not with the API, but more so the patterns and strategy for using it. My mental model for using React Router 3 wasn't migrating well to v4. I would have to change how I thought about the relationship between the router and the layout components if I was going to be successful. Eventually, new patterns emerged that made sense to me and I became very happy with the router's new direction. React Router 4 allowed me to do everything I could do with v3, and more. Also, at first, I was over-complicating the use of v4. Once I gained a new mental model for it, I realized that this new direction is amazing! My intentions for this article aren't to rehash the already well-written documentation for React Router 4. I will cover the most common API concepts, but the real focus is on patterns and strategies that I've found to be successful. 
Here are some JavaScript concepts you need to be familiar with for this article:

- React (Stateless) Functional Components
- ES2015 Arrow Functions and their "implicit returns"
- ES2015 Destructuring
- ES2015 Template Literals

If you're the type that prefers jumping right to a working demo, here you go:

A New API and A New Mental Model

Earlier versions of React Router centralized the routing rules into one place, keeping them separate from layout components. Sure, the router could be partitioned and organized into several files, but conceptually the router was a unit, and basically a glorified configuration file. Perhaps the best way to see how v4 is different is to write a simple two-page app in each version and compare. The example app has just two routes for a home page and a user's page. Here it is in v3:

import { render } from 'react-dom'
import { Router, Route, IndexRoute, browserHistory } from 'react-router'

const PrimaryLayout = props => (
  <div className="primary-layout">
    <header>
      Our React Router 3 App
    </header>
    <main>
      {props.children}
    </main>
  </div>
)

const HomePage = () => <div>Home Page</div>
const UsersPage = () => <div>Users Page</div>

const App = () => (
  <Router history={browserHistory}>
    <Route path="/" component={PrimaryLayout}>
      <IndexRoute component={HomePage} />
      <Route path="/users" component={UsersPage} />
    </Route>
  </Router>
)

render(<App />, document.getElementById('root'))

Here are some key concepts in v3 that are not true in v4 anymore:

- The router is centralized to one place.
- Layout and page nesting is derived by the nesting of <Route> components.
- Layout and page components are completely naive that they are a part of a router.

React Router 4 does not advocate for a centralized router anymore. Instead, routing rules live within the layout and amongst the UI itself.
As an example, here's the same application in v4:

import { render } from 'react-dom'
import { BrowserRouter, Route } from 'react-router-dom'

const PrimaryLayout = () => (
  <div className="primary-layout">
    <header>
      Our React Router 4 App
    </header>
    <main>
      <Route path="/" exact component={HomePage} />
      <Route path="/users" component={UsersPage} />
    </main>
  </div>
)

const HomePage = () => <div>Home Page</div>
const UsersPage = () => <div>Users Page</div>

const App = () => (
  <BrowserRouter>
    <PrimaryLayout />
  </BrowserRouter>
)

render(<App />, document.getElementById('root'))

New API Concept: Since our app is meant for the browser, we need to wrap it in <BrowserRouter> which comes from v4. Also notice we import from react-router-dom now (which means we npm install react-router-dom, not react-router).

Hint! It's called react-router-dom now because there's also a native version.

The first thing that stands out when looking at an app built with React Router v4 is that the "router" seems to be missing. In v3 the router was this giant thing we rendered directly to the DOM which orchestrated our application. Now, besides <BrowserRouter>, the first thing we throw into the DOM is our application itself.

Another v3-staple missing from the v4 example is the use of {props.children} to nest components. This is because in v4, wherever the <Route> component is written is where the sub-component will render to if the route matches.

Inclusive Routing

In the previous example, you may have noticed the exact prop. So what's that all about? V3 routing rules were "exclusive" which meant that only one route would win. V4 routes are "inclusive" by default which means more than one <Route> can match and render at the same time.

In the previous example, we're trying to render either the HomePage or the UsersPage depending on the path. If the exact prop were removed from the example, both the HomePage and UsersPage components would have rendered at the same time when visiting `/users` in the browser.
To understand the matching logic better, review path-to-regexp which is what v4 now uses to determine whether routes match the URL.

To demonstrate how inclusive routing is helpful, let's include a UsersMenu in the header, but only if we're in the user's part of our application:

const PrimaryLayout = () => (
  <div className="primary-layout">
    <header>
      Our React Router 4 App
      <Route path="/users" component={UsersMenu} />
    </header>
    <main>
      <Route path="/" exact component={HomePage} />
      <Route path="/users" component={UsersPage} />
    </main>
  </div>
)

Now, when the user visits `/users`, both components will render. Something like this was doable in v3 with certain patterns, but it was more difficult. Thanks to v4's inclusive routes, it's now a breeze.

Exclusive Routing

If you need just one route to match in a group, use <Switch> to enable exclusive routing:

const PrimaryLayout = () => (
  <div className="primary-layout">
    <PrimaryHeader />
    <main>
      <Switch>
        <Route path="/" exact component={HomePage} />
        <Route path="/users/add" component={UserAddPage} />
        <Route path="/users" component={UsersPage} />
        <Redirect to="/" />
      </Switch>
    </main>
  </div>
)

Only one of the routes in a given <Switch> will render. We still need exact on the HomePage route though if we're going to list it first. Otherwise the home page route would match when visiting paths like `/users` or `/users/add`. In fact, strategic placement is the name-of-the-game when using an exclusive routing strategy (as it always has been with traditional routers). Notice that we strategically place the route for /users/add before /users to ensure correct matching. Since the path /users would match for both `/users` and `/users/add`, putting /users/add first is best. Sure, we could put them in any order if we use exact in certain ways, but at least we have options.
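To see the inclusive-vs-exact behavior outside of React, here is a deliberately simplified matcher in plain JavaScript. This is not React Router's implementation (which compiles paths with path-to-regexp); it is just a sketch of the prefix-matching idea:

```javascript
// Simplified sketch of v4-style matching (NOT the real path-to-regexp):
// a non-exact route matches any URL it is a path-prefix of; an exact
// route must match the whole URL.
function matchPath(url, path, exact = false) {
  if (exact) return url === path;
  if (path === '/') return true; // the root path is a prefix of everything
  return url === path || url.startsWith(path + '/');
}

// Inclusive routing: both "/" and "/users" match the URL "/users"...
console.log(matchPath('/users', '/'));       // true  -> HomePage would render too
console.log(matchPath('/users', '/users'));  // true  -> UsersPage renders

// ...which is exactly why the HomePage route needs `exact`:
console.log(matchPath('/users', '/', true)); // false -> only UsersPage renders
```

Run with Node to experiment with different URL/path pairs; real route matching adds params, trailing-slash handling, and more, but the prefix rule above is the core of why `exact` is needed.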
The <Redirect> component will always do a browser-redirect if encountered, but when it's in a <Switch> statement, the redirect component only gets rendered if no other routes match first. To see how <Redirect> might be used in a non-switch circumstance, see Authorized Route below.

"Index Routes" and "Not Found"

While there is no more <IndexRoute> in v4, using <Route exact> achieves the same thing. Or if no routes resolved, then use <Switch> with <Redirect> to redirect to a default page with a valid path (as I did with HomePage in the example), or even a not-found page.

Nested Layouts

You're probably starting to anticipate nested sub-layouts and how you might achieve them. I didn't think I would struggle with this concept, but I did. React Router v4 gives us a lot of options, which makes it powerful. Options, though, mean the freedom to choose strategies that are not ideal. On the surface, nested layouts are trivial, but depending on your choices you may experience friction because of the way you organized the router.

To demonstrate, let's imagine that we want to expand our users section so we have a "browse users" page and a "user profile" page. We also want similar pages for products. Users and products both need sub-layouts that are special and unique to each respective section. For example, each might have different navigation tabs. There are a few approaches to solve this, some good and some bad. The first approach is not very good but I want to show you so you don't fall into this trap. The second approach is much better.
For the first, let's modify our PrimaryLayout to accommodate the browsing and profile pages for users and products:

const PrimaryLayout = props => {
  return (
    <div className="primary-layout">
      <PrimaryHeader />
      <main>
        <Switch>
          <Route path="/" exact component={HomePage} />
          <Route path="/users" exact component={BrowseUsersPage} />
          <Route path="/users/:userId" component={UserProfilePage} />
          <Route path="/products" exact component={BrowseProductsPage} />
          <Route path="/products/:productId" component={ProductProfilePage} />
          <Redirect to="/" />
        </Switch>
      </main>
    </div>
  )
}

While this does technically work, taking a closer look at the two user pages starts to reveal the problem:

const BrowseUsersPage = () => (
  <div className="user-sub-layout">
    <aside>
      <UserNav />
    </aside>
    <div className="primary-content">
      <BrowseUserTable />
    </div>
  </div>
)

const UserProfilePage = props => (
  <div className="user-sub-layout">
    <aside>
      <UserNav />
    </aside>
    <div className="primary-content">
      <UserProfile userId={props.match.params.userId} />
    </div>
  </div>
)

New API Concept: props.match is given to any component rendered by <Route>. As you can see, the userId is provided by props.match.params. See more in v4 documentation. Alternatively, if any component needs access to props.match but the component wasn't rendered by a <Route> directly, we can use the withRouter() Higher Order Component.

Each user page not only renders its respective content but also has to be concerned with the sub layout itself (and the sub layout is repeated for each). While this example is small and might seem trivial, repeated code can be a problem in a real application. Not to mention, each time a BrowseUsersPage or UserProfilePage is rendered, it will create a new instance of UserNav which means all of its lifecycle methods start over. Had the navigation tabs required initial network traffic, this would cause unnecessary requests — all because of how we decided to use the router.
Here's a different approach which is better:

const PrimaryLayout = props => {
  return (
    <div className="primary-layout">
      <PrimaryHeader />
      <main>
        <Switch>
          <Route path="/" exact component={HomePage} />
          <Route path="/users" component={UserSubLayout} />
          <Route path="/products" component={ProductSubLayout} />
          <Redirect to="/" />
        </Switch>
      </main>
    </div>
  )
}

Instead of four routes corresponding to each of the user's and product's pages, we have two routes for each section's layout. Notice the above routes do not use the exact prop anymore because we want /users to match any route that starts with /users and similarly for products.

With this strategy, it becomes the task of the sub layouts to render additional routes. Here's what the UserSubLayout could look like:

const UserSubLayout = () => (
  <div className="user-sub-layout">
    <aside>
      <UserNav />
    </aside>
    <div className="primary-content">
      <Switch>
        <Route path="/users" exact component={BrowseUsersPage} />
        <Route path="/users/:userId" component={UserProfilePage} />
      </Switch>
    </div>
  </div>
)

The most obvious win in the new strategy is that the layout isn't repeated among all the user pages. It's a double win too because it won't have the same lifecycle problems as with the first example.

One thing to notice is that even though we're deeply nested in our layout structure, the routes still need to identify their full path in order to match. To save yourself the repetitive typing (and in case you decide to change the word "users" to something else), use props.match.path instead:

const UserSubLayout = props => (
  <div className="user-sub-layout">
    <aside>
      <UserNav />
    </aside>
    <div className="primary-content">
      <Switch>
        <Route path={props.match.path} exact component={BrowseUsersPage} />
        <Route path={`${props.match.path}/:userId`} component={UserProfilePage} />
      </Switch>
    </div>
  </div>
)

Match

As we've seen so far, props.match is useful for knowing what userId the profile is rendering and also for writing our routes.
The match object gives us several properties including match.params, match.path, match.url and several more.

match.path vs match.url

The differences between these two can seem unclear at first. Console logging them can sometimes reveal the same output, making their differences even more unclear. For example, both these console logs will output the same value when the browser path is `/users`:

const UserSubLayout = ({ match }) => {
  console.log(match.url)   // output: "/users"
  console.log(match.path)  // output: "/users"
  return (
    <div className="user-sub-layout">
      <aside>
        <UserNav />
      </aside>
      <div className="primary-content">
        <Switch>
          <Route path={match.path} exact component={BrowseUsersPage} />
          <Route path={`${match.path}/:userId`} component={UserProfilePage} />
        </Switch>
      </div>
    </div>
  )
}

ES2015 Concept: match is being destructured at the parameter level of the component function. This means we can type match.path instead of props.match.path.

While we can't see the difference yet, match.url is the actual path in the browser URL and match.path is the path written for the router. This is why they are the same, at least so far. However, if we did the same console logs one level deeper in UserProfilePage and visit `/users/5` in the browser, match.url would be "/users/5" and match.path would be "/users/:userId".

Which to choose?

If you're going to use one of these to help build your route paths, I urge you to choose match.path. Using match.url to build route paths will eventually lead to a scenario that you don't want. Here's a scenario which happened to me.
Inside a component like UserProfilePage (which is rendered when the user visits `/users/5`), I rendered sub components like these:

const UserComments = ({ match }) => (
  <div>UserId: {match.params.userId}</div>
)

const UserSettings = ({ match }) => (
  <div>UserId: {match.params.userId}</div>
)

const UserProfilePage = ({ match }) => (
  <div>
    User Profile:
    <Route path={`${match.url}/comments`} component={UserComments} />
    <Route path={`${match.path}/settings`} component={UserSettings} />
  </div>
)

To illustrate the problem, I'm rendering two sub components with one route path being made from match.url and one from match.path. Here's what happens when visiting these pages in the browser:

- Visiting `/users/5/comments` renders "UserId: undefined".
- Visiting `/users/5/settings` renders "UserId: 5".

So why does match.path work for helping to build our paths and match.url doesn't? The answer lies in the fact that `${match.url}/comments` is basically the same thing as if I had hard-coded `/users/5/comments`. Doing this means the subsequent component won't be able to fill match.params correctly because there were no params in the path, only a hardcoded 5.

It wasn't until later that I saw this part of the documentation and realized how important it was:

match:
- path - (string) The path pattern used to match. Useful for building nested <Route>s
- url - (string) The matched portion of the URL. Useful for building nested <Link>s

Avoiding Match Collisions

Let's assume the app we're making is a dashboard so we want to be able to add and edit users by visiting `/users/add` and `/users/5/edit`. But with the previous examples, users/:userId already points to a UserProfilePage. So does that mean that the route with users/:userId now needs to point to yet another sub-sub-layout to accommodate editing and the profile? I don't think so.
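To make the collision concrete outside of React: roughly speaking, a param segment like :userId matches any non-slash text, so "add" is a perfectly valid :userId. Below is a plain-JavaScript approximation; the regexes are my own sketch, not path-to-regexp's exact output:

```javascript
// Sketch (plain JavaScript, not path-to-regexp's exact output) of the
// collision: "/users/:userId" happily matches "/users/add", because a
// param segment matches any non-slash text.
const profileRoute = /^\/users\/([^/]+)$/; // roughly "/users/:userId"

console.log('/users/5'.match(profileRoute)[1]);   // "5"   -> userId is "5"
console.log('/users/add'.match(profileRoute)[1]); // "add" -> oops, userId is "add"

// Constraining the param, as in "/users/:userId(\\d+)", removes the clash:
const numericProfileRoute = /^\/users\/(\d+)$/;
console.log(numericProfileRoute.test('/users/add')); // false
console.log(numericProfileRoute.test('/users/5'));   // true
```

This is why either careful route ordering or an inline param regex resolves the ambiguity between `/users/add` and `/users/:userId`.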
Since both the edit and profile pages share the same user-sub-layout, this strategy works out fine:

const UserSubLayout = ({ match }) => (
  <div className="user-sub-layout">
    <aside>
      <UserNav />
    </aside>
    <div className="primary-content">
      <Switch>
        <Route exact path={match.path} component={BrowseUsersPage} />
        <Route path={`${match.path}/add`} component={AddUserPage} />
        <Route path={`${match.path}/:userId/edit`} component={EditUserPage} />
        <Route path={`${match.path}/:userId`} component={UserProfilePage} />
      </Switch>
    </div>
  </div>
)

Notice that the add and edit routes strategically come before the profile route to ensure the proper matching. Had the profile path been first, visiting `/users/add` would have matched the profile (because "add" would have matched the :userId).

Alternatively, we can put the profile route first if we make the path ${match.path}/:userId(\\d+) which ensures that :userId must be a number. Then visiting `/users/add` wouldn't create a conflict. I learned this trick in the docs for path-to-regexp.

Authorized Route

It's very common in applications to restrict the user's ability to visit certain routes depending on their login status. Also common is to have a "look-and-feel" for the unauthorized pages (like "log in" and "forgot password") vs the "look-and-feel" for the authorized ones (the main part of the application). To solve each of these needs, consider this main entry point to an application:

class App extends React.Component {
  render() {
    return (
      <Provider store={store}>
        <BrowserRouter>
          <Switch>
            <Route path="/auth" component={UnauthorizedLayout} />
            <AuthorizedRoute path="/app" component={PrimaryLayout} />
          </Switch>
        </BrowserRouter>
      </Provider>
    )
  }
}

Using react-redux works very similarly with React Router v4 as it did before, simply wrap <BrowserRouter> in <Provider> and it's all set.

There are a few takeaways with this approach.
The first being that I'm choosing between two top-level layouts depending on which section of the application we're in. Visiting paths like `/auth/login` or `/auth/forgot-password` will utilize the UnauthorizedLayout — one that looks appropriate for those contexts. When the user is logged in, we'll ensure all paths have an `/app` prefix which uses AuthorizedRoute to determine if the user is logged in or not. If the user tries to visit a page starting with `/app` and they aren't logged in, they will be redirected to the login page.

AuthorizedRoute isn't a part of v4 though. I made it myself with the help of v4 docs. One amazing new feature in v4 is the ability to create your own routes for specialized purposes. Instead of passing a component prop into <Route>, pass a render callback instead:

class AuthorizedRoute extends React.Component {
  componentWillMount() {
    getLoggedUser()
  }

  render() {
    const { component: Component, pending, logged, ...rest } = this.props
    return (
      <Route {...rest} render={props => {
        if (pending) return <div>Loading...</div>
        return logged
          ? <Component {...this.props} />
          : <Redirect to="/auth/login" />
      }} />
    )
  }
}

const stateToProps = ({ loggedUserState }) => ({
  pending: loggedUserState.pending,
  logged: loggedUserState.logged
})

export default connect(stateToProps)(AuthorizedRoute)

While your login strategy might differ from mine, I use a network request to getLoggedUser() and plug pending and logged into Redux state. pending just means the request is still in route. Click here to see a fully working Authentication Example at CodePen.

Other mentions

There are a lot of other cool aspects to React Router v4. To wrap up though, let's be sure to mention a few small things so they don't catch you off guard.

<Link> vs <NavLink>

In v4, there are two ways to integrate an anchor tag with the router: <Link> and <NavLink>.

<NavLink> works the same as <Link> but gives you some extra styling abilities depending on if the <NavLink> matches the browser's URL.
For instance, in the example application, there is a <PrimaryHeader> component that looks like this:

const PrimaryHeader = () => (
  <header className="primary-header">
    <h1>Welcome to our app!</h1>
    <nav>
      <NavLink to="/app" exact activeClassName="active">Home</NavLink>
      <NavLink to="/app/users" activeClassName="active">Users</NavLink>
      <NavLink to="/app/products" activeClassName="active">Products</NavLink>
    </nav>
  </header>
)

The use of <NavLink> allows me to set a class of active to whichever link is active. But also, notice that I can use exact on these as well. Without exact the home page link would be active when visiting `/app/users` because of the inclusive matching strategies of v4. In my personal experiences, <NavLink> with the option of exact is a lot more stable than the v3 <Link> equivalent.

URL Query Strings

There is no longer a way to get the query-string of a URL from React Router v4. It seems to me that the decision was made because there is no standard for how to deal with complex query strings. So instead of v4 baking an opinion into the module, they decided to just let the developer choose how to deal with query strings. This is a good thing. Personally, I use query-string which is made by the always awesome sindresorhus.

Dynamic Routes

One of the best parts about v4 is that almost everything (including <Route>) is just a React component. Routes aren't magical things anymore. We can render them conditionally whenever we want. Imagine an entire section of your application is available to route to when certain conditions are met. When those conditions aren't met, we can remove routes. We can even do some crazy cool recursive route stuff.

React Router 4 is easier because it's Just Components™
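A small footnote to the query-string point above: the query-string package is one option, but for simple cases the standard URLSearchParams API (built into modern browsers and recent Node versions) may be all you need. A quick sketch:

```javascript
// Parsing a query string with the built-in URLSearchParams API.
// (The article recommends the query-string package; this is just the
// zero-dependency alternative for simple cases.)
const search = '?page=2&sort=name';
const params = new URLSearchParams(search);

console.log(params.get('page'));    // "2"
console.log(params.get('sort'));    // "name"
console.log(params.get('missing')); // null
```

In a component rendered by a route, the raw string would typically come from props.location.search.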
https://css-tricks.com/react-router-4/
Documentation

Overview

Package buildmerge implements the build.proto tracking and merging logic for luciexe host applications. You probably want to use `go.chromium.org/luci/luciexe/host` instead.

This package is separate from luciexe/host to avoid unnecessary entanglement with butler/logdog; All the logic here is implemented to avoid:

* interacting with the environment
* interacting with butler/logdog (except by implementing callbacks for those, but only acting on simple datastructures/proto messages)
* handling errors in any 'brutal' ways (all errors in this package are handled by reporting them directly in the data structures that this package manipulates).

This is done to simplify testing (as much as it can be) by concentrating all the environment stuff into luciexe/host, and all the 'pure' functional stuff here (search "imperative shell, functional core").

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Agent

type Agent struct {
	// MergedBuildC is the channel of all the merged builds generated by this
	// Agent.
	//
	// The rate at which Agent merges Builds is governed by the consumption of
	// this channel; Consuming it slowly will have Agent merge less frequently,
	// and consuming it rapidly will have Agent merge more frequently.
	//
	// The last build before the channel closes will always be the final state of
	// all builds at the time this Agent was Close()'d.
	MergedBuildC <-chan *bbpb.Build

	// Wait on this channel for the Agent to drain. Will only drain after calling
	// Close() at least once.
	DrainC <-chan struct{}
	// contains filtered or unexported fields
}

Agent holds all the logic around merging build.proto streams.

func New

func New(ctx context.Context, userNamespace types.StreamName, base *bbpb.Build, calculateURLs CalcURLFn) *Agent

New returns a new Agent.

Args:

* ctx - used for logging, clock and cancelation.
When canceled, the Agent will cease sending updates on MergedBuildC, but you must still invoke Agent.Close() in order to clean up all resources associated with the Agent.

* userNamespace - The logdog namespace (with a trailing slash) under which we should monitor streams.

* base - The "model" Build message that all generated builds should start with. All build proto streams will be merged onto a copy of this message. Any Output.Log's which have non-absolute URLs will have their Url and ViewUrl absolutized relative to userNamespace using calculateURLs.

* calculateURLs - A function to calculate Log.Url and Log.ViewUrl values. Should be a pure function.

The following fields will be merged into `base` from the user controlled build.proto stream(s):

	Steps
	SummaryMarkdown
	Status
	StatusDetails
	UpdateTime
	Tags
	EndTime
	Output

The frequency of updates from this Agent is governed by how quickly the caller consumes from Agent.MergedBuildC.

func (*Agent) Attach

Attach should be called once to attach this to a Butler. This must be done before the butler receives any build.proto streams.

type CalcURLFn

type CalcURLFn func(namespaceSlash, streamName types.StreamName) (url, viewUrl string)

CalcURLFn is a stateless function which can calculate the absolute url and viewUrl from a given logdog namespace (with trailing slash) and streamName.
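The "merge rate is governed by consumption of MergedBuildC" behavior the docs describe is a coalescing-channel pattern. Below is a stdlib-only sketch of that pattern — it is not the luci-go Agent itself, and mergeLoop is a made-up name; it just shows how slow consumers see fewer, coalesced snapshots while the final state is always delivered:

```go
package main

import "fmt"

// mergeLoop coalesces a stream of updates: the consumer only ever receives
// the newest state, and receives the final state after the input closes.
// This mirrors how Agent.MergedBuildC's rate is governed by its consumer.
func mergeLoop(updates <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		latest, ok := <-updates
		if !ok {
			return
		}
		for {
			select {
			case u, open := <-updates:
				if !open {
					out <- latest // final state, like the last build before close
					return
				}
				latest = u // coalesce: only remember the newest state
			case out <- latest:
				// consumer took a snapshot; keep waiting for more updates
			}
		}
	}()
	return out
}

func main() {
	updates := make(chan int)
	go func() {
		for i := 1; i <= 5; i++ {
			updates <- i
		}
		close(updates)
	}()
	for v := range mergeLoop(updates) {
		fmt.Println("merged snapshot:", v)
	}
}
```

Depending on scheduling, a slow consumer may skip intermediate values entirely; the last snapshot is always the final state.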
https://pkg.go.dev/github.com/tetrafolium/luci-go/luciexe/host/buildmerge
There are still some rough edges to Visual Studio 2015 CTP 6 and one of the biggest is problems with NuGet packages that don’t conform to newly revised requirements. This isn’t really surprising given that the developers are trying to move to an open source and multi-platform environment. But it’s a little painful. One of the elements that I was having problems with is logging. Every application of any size needs logging. I could rant for hours on proper logging and how it helps support diagnose problems plus provide meaningful usage statistics and allow you to concentrate development in places that matter. That’s a post for another time. In order to really implement logging I turn to libraries. Either Log4Net or NLog as the fancy strikes me. Both are open source libraries that are distributed via NuGet. However in both cases there is something in there that Visual Studio 2015 CTP 6 doesn’t like. When you install the library into your project – by editing project.json, using the NuGet Packaging Manager or the Package Manager Console, you will see a little yellow triangle of doom in the references: If you go to your error list you will see the following reason: I know that once Visual Studio 2015 goes GA this problem will be resolved. It’s just too big to be ignored, even if it’s painful for me now. But I want to continue developing. There are two options: - Write Debug.WriteLine everywhere and integrate the library by editing every file again later - Write a wrapper class for the logging library and use that, then refactor one class later I like option 2. In my Services directory, I’m going to add a class that implements logging. 
The below isn’t the entire file – I’ve left only the important bits in: using System; using System.Text; namespace AspNetIdentity.Services { public class Logger { private string _className; /// <summary> /// Create a new Logger instance /// </summary> /// <param name="className"></param> public Logger(string className) { this._className = className; } #region Logging Methods public void Trace(string fmt, params string[] list) { Log("TRACE", fmt, list); } // Other logging routines here public void Enter(string method, params string[] list) { var m = new StringBuilder(String.Format("{0}(", method)); for (int i = 0; i < list.Length; i++) { if (i != 0) m.Append(","); m.Append("\"" + list[i] + "\""); } m.Append(")"); Log("ENTER", m.ToString()); } public void Leave(string method) { Log("LEAVE", method); } #endregion private void Log(string lvl, string fmt, params string[] list) { string h = String.Format("### {0} [{1}]:", _className, lvl); string m = String.Format(fmt, list); System.Diagnostics.Debug.WriteLine(h + " " + m); } public static Logger GetLogger(string className) { return new Logger(className); } } } Basic functionality is that I create a logger as a static variable at the start of your class and then use it to call .Trace, .Enter, etc. to write logs out. This all consolidates down to using .Log which does a Debug.WriteLine. When I do the refactor for a logging library, this is the only class I have to touch. Everything else just uses this class. Let’s say I wanted to instrument my EmailService class. At the top of the class I would place the following: private static Logger logger = Logger.GetLogger(typeof(EmailService).Name); Then all my Debug.WriteLine calls can be changed to logger.Trace calls. Even with this (and any other logging library), the Output Debug window is liable to be cluttered. This is, fortunately, easy to fix. Go to the Quick Launch and type in Debug Output. You will see a result Debugging => Output Window. 
Select that to get something like the below:

You definitely want to leave All debug output and Exception Messages turned on – those are a vital part of the functionality you need to diagnose issues. However you can probably turn the rest of the entries under the General Output Settings off. Right now I still get a bunch of Loaded… messages. Still, it's less than I was getting before and it allows me to see my tracing messages much more easily.

Want things to improve? Check out this UserVoice suggestion and consider voting for it. While you are there, consider looking at other ideas and voting for the ones you think should be implemented.

In the meantime, I've checked in my code and you can check it out at my GitHub Repository.
https://shellmonger.com/2015/04/09/logging-in-asp-net-vnext/
Rebel Coder
Member
Content Count: 11
Community Reputation: 158 Neutral

Plane Assignment Program From Hell For A Very Bad Java Programmer
Rebel Coder replied to Rebel Coder's topic in General and Gameplay Programming

yeah thanks man, but it's a different syntax so what you're saying does not apply but thanks anyways

Plane Assignment Program From Hell For A Very Bad Java Programmer
Rebel Coder posted a topic in General and Gameplay Programming

Problem:
- Create a Java program in which an airline can keep track of its seating assignments.
- Using the InputDialog method, ask the user how many people will be coming aboard the plane.
- The plane consists of only 5 seats.
- Keep assigning people to the seats in the plane until all seats are filled.
- Once every seat is filled, tell the user no more seats are available.

Question: Ok I have a simple question. How do I make it so that I can read the elements inside an array and if any one of them gets to or above 5 it stops letting me input numbers. For example, let's say I have 2 in index 0 and 3 in index 1, that makes 5 so I don't want to store anything more and when I go and input another number it doesn't let me. Another example, let's say index 0 holds the number 5 then it won't let me progress to index 1 because I already have the desired number which is 5. I hope that was pretty clear. This is for a plane assignment seating program with only 5 seats.
This is what I have so far:

[source lang="java"]
import javax.swing.JOptionPane;

public class Seating {
    public static void main(String args[]) {
        int seatstaken = 0;
        int seats[] = new int[5];
        do {
            JOptionPane.showMessageDialog(null, "Welcome to Coqui Air!");
            for (int i = 0; i < seats.length; i++) {
                seats[i] = Integer.parseInt(JOptionPane.showInputDialog("How many people will be traveling with you today?"));
            }
            for (int i = 0; i < seats.length; i++) {
                seatstaken += seats[i];
            }
        } while (seatstaken < 5);
        if (seatstaken > 5) {
            JOptionPane.showMessageDialog(null, "Plane full.");
        }
    }
}
[/source]

It's pretty terrible code, but I'm a beginning Java programmer and my professor did not explain this subject of arrays at all, he just gave us a very brief example and it did not clear up my doubts whatsoever. Also in his example he uses a Scanner for input, the problem requires that we use the JOptionPane user input method. Any help is appreciated as I have been trying to solve this since 4 pm my time.

- I think I'm very excited for Assassin's Creed 3, don't you think :D

Save my life from miserable hours of manual labor xD
Rebel Coder replied to Rebel Coder's topic in GDNet Lounge

@Sik: I did it with Microsoft Word.
@frob: I will check that out, thanks

Save my life from miserable hours of manual labor xD
Rebel Coder posted a topic in GDNet Lounge

So - Need to pick an investigation topic based on computer science for college. I'm in an investigative mood.

ASP.NET with VB. I need help
Rebel Coder replied to Rebel Coder's topic in General and Gameplay Programming

Thanks a lot

ASP.NET with VB. I need help
Rebel Coder posted a topic in General and Gameplay Programming

So my teacher gave us this awesome project where we have to data bind the information the user enters on a textbox and selects on a drop down list at the click of a button.
He gave us this video which showed us how to get the database set up with the table easy peasy, then when it came to the button coding part, the guy in the video was using C# and I need to do it in VB -___-, I've been looking around and no one can give me a clear answer. So how do I make it so when I click the button it saves the info on a textbox to my table and a dropdownlist selection. If anyone can tell me I'd really appreciate it.

- Working with C# and XNA. It's gonna get le boot once WP8 comes out but at least it's fun to work with. Plus I'm gonna make a game for my gf

- Yeah it is an awesome feeling to get something working after trying to figure it out for a while although the same thing happens to me lol then I have to backtrack because I forgot where I left off on another problem.

- Thanks for the nice comments guys, made the post a little easier to read lol

- Agreed dude it is a great feeling

- Indeed my good sir, but I did not know that and I was excited so yeah that happened EDIT Btw your game Gnoblins looks sweet
Type: Posts; User: Cambalinho

Thank you VictorN, both zero:

Dim s As Long
s = GetDIBits(DestinationHDC, DestinationBitmap, 0, bm.bmHeight, gptr, bi, DIB_RGB_COLORS)
If (s = 0) Then MsgBox "error " + vbTab + CStr(GetLastError()) ...

So far I did this code:

Friend Sub DrawImageRectanglePoints(DestinationHDC As Long, Points() As Position3D, WorldSize As Size3D)
    'Points(0) is the Upper-Left
    'Points(1) is the Upper-Right...

Using GetCurrentObject() with the HDC, I can get the actual HBitmap handle... but I need to ask 2 more things:
1 - how can I get the BITMAPINFOHEADER from the HBitmap?
2 - do I need to create the DIB or the...

I have these 2 functions for getting pixels:

Public Sub GetImageData(ByRef SourcehBitmap As Long, ByRef SourceHDC As Long, ByRef ImageData() As Byte)
    'Declare variables of the necessary bitmap...

I use these functions to rotate a pixel:
1 - we get the pixel (the Z is zero) on RotationImage() and GetPixel();
2 - all angles (X, Y and Z) must be converted to Radians (the computer uses Radians
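The point about radians above is worth pinning down: trigonometric rotation formulas expect radians, so degree inputs must be converted first. A minimal self-contained sketch (written in Java purely for illustration — it is not the VB6/C++ code from these posts), rotating a point around the origin in the XY plane:

```java
class RotatePoint {
    // Rotate (x, y) around the origin by an angle given in degrees.
    // Math.sin/Math.cos expect radians, hence the explicit conversion.
    static double[] rotateZ(double x, double y, double degrees) {
        double radians = Math.toRadians(degrees);
        double rx = x * Math.cos(radians) - y * Math.sin(radians);
        double ry = x * Math.sin(radians) + y * Math.cos(radians);
        return new double[] { rx, ry };
    }

    public static void main(String[] args) {
        double[] p = rotateZ(1.0, 0.0, 90.0);
        System.out.println(p[0] + ", " + p[1]); // values very close to (0, 1)
    }
}
```

Feeding degrees straight into sin/cos is exactly the "big numbers / wrong values" class of bug described in these posts.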
Here's the error message, and now the 'array parentheses':

Option Explicit
Private GDIsi As GDIPlusStartupInput, gToken As Long, hGraphics As Long, hBitmap As Long

I created a class:

Option Explicit
Private GDIsi As GDIPlusStartupInput, gToken As Long, hGraphics As Long, hBitmap As Long
Private Type POINTL
    X As Long
    Y As Long

No... but I'm trying to understand if the values from Points() to TexturePoints() are correct or not :( I'm confused... the texture goes from the 1st to the 2nd points without problems... I believe the 3...

Wow, I found 1 problem: if the point has a negative value, I will get big numbers... OK, this problem will be fixed after I learn the math. Here are the values: 35907 — but what did I do wrong with the values?

VictorN: even if it's not C code... what do you think about the image? Here's a pseudo-rectangle (A to D):

B |-----------| C
A |-----------| D

"A pointer to an array of three points in logical space that...

Finally I'm using it, but I'm getting the wrong shape width:

'Get the four vectors
'Floor:
'Vector1: low-left
FillPosition3D NewPosition3D(0), Position.X, Position.Y,...

Correct me on another thing: even if I fix it, can PlgBlt() be used on a plane? Or will the results not be what we expected? I mean drawing the texture... like creating a street.
is like: (upper-left)VA - VB(upper-right) (low-right)VC - VD(low.
timf wrote:
> Allen wrote:
>
>> Tim,
>>
>> I was just sorting through some old emails from linuxtv archive, and
>> came across
>>
>>
>>> PS. I've actually managed to get the remote to work through a very
>>> convoluted approach via the archives (Hermann), using ir-kbd-i2c.c,
>>> saa7134-i2c.c. But it's no use unless we can fix this tuning/scanning
>>> issue.
>>>
>>>
>> I have a kw 220rf which I think might be very similar.
>>
>> Despite repeated attempts to get the remote working, I have never been
>> able to get it to respond in any fashion. Could you please describe how
>> you got it to function, and any problems with operation.
>>
>> Thanks,
>>
>> Allen
>>
>>
>
> Hi Steve,
> Woops! Profuse apologies, I of course meant Hi Allen, (too many email replies is my only excuse)
> Please cc to these mail lists so others can be helpful to you, as well.
> You will find Hermann has some answers for saa7134 i2c remotes.
> I will paste here my mods, so that others may be able to help get it
> going.
>
> This code is an adaptation of originally Henry Wong's work,
> and much further work by numerous people across the planet.
> Until recently this modification enabled a working remote control
> for the Kworld 210RF.
> This card has a KS007 remote controller chip.
>
> Since that time, the i2c code in v4l-dvb has undergone a
> substantial transition.
>
> Thus this code no longer works, in particular, within ir-kbd-i2c.c
> This is the only success I have ever had in getting an i2c
> remote control to work in saa7134.
>
> As I have had a few problems with this card working properly,
> I basically lost interest.
>
> Perhaps others with a Kworld 210RF or a Kworld 220RF card
> can get it working again.
>
> ***************************************************************
> Mods to /v4l-dvb/linux/include/media/ir-common.h
>
> extern IR_KEYTAB_TYPE ir_codes_kworld_210[IR_KEYTAB_SIZE];
> ***************************************************************
> Mods to /v4l-dvb/linux/drivers/media/common/ir-keymaps.c
>
> IR_KEYTAB_TYPE ir_codes_kworld_210[IR_KEYTAB_SIZE] = {
>     [ 0x00 ] = KEY_1,
>     [ 0x01 ] = KEY_2,
>     [ 0x02 ] = KEY_3,
>     [ 0x03 ] = KEY_4,
>     [ 0x04 ] = KEY_5,
>     [ 0x05 ] = KEY_6,
>     [ 0x06 ] = KEY_7,
>     [ 0x07 ] = KEY_8,
>     [ 0x08 ] = KEY_9,
>     [ 0x09 ] = KEY_BACKSPACE,
>     [ 0x0a ] = KEY_0,
>     [ 0x0b ] = KEY_ENTER,
>     [ 0x0c ] = KEY_POWER,
>     [ 0x0d ] = KEY_SUBTITLE,
>     [ 0x0e ] = KEY_VIDEO,
>     [ 0x0f ] = KEY_CAMERA,
>     [ 0x10 ] = KEY_CHANNELUP,
>     [ 0x11 ] = KEY_CHANNELDOWN,
>     [ 0x12 ] = KEY_VOLUMEDOWN,
>     [ 0x13 ] = KEY_VOLUMEUP,
>     [ 0x14 ] = KEY_MUTE,
>     [ 0x15 ] = KEY_AUDIO,
>     [ 0x16 ] = KEY_TV,
>     [ 0x17 ] = KEY_ZOOM,
>     [ 0x18 ] = KEY_PRINT,
>     [ 0x19 ] = KEY_SETUP,
>     [ 0x1a ] = KEY_STOP,
>     [ 0x1b ] = KEY_RECORD,
>     [ 0x1c ] = KEY_TEXT,
>     [ 0x1d ] = KEY_REWIND,
>     [ 0x1e ] = KEY_FASTFORWARD,
>     [ 0x1f ] = KEY_SHUFFLE,
>     [ 0x45 ] = KEY_STOP,
>     [ 0x44 ] = KEY_PLAY,
> };
> EXPORT_SYMBOL_GPL(ir_codes_kworld_210);
> ***************************************************************
> Mods to /v4l-dvb/linux/drivers/media/video/saa7134/saa7134-i2c.c
> ...
> /* Am I an i2c remote control? */
>
> switch (client->addr) {
> case 0x7a:
> case 0x47:
> case 0x71:
> case 0x2d:
> case 0x30: /*for kw210 remote control*/
> {
> ...
>
> static char *i2c_devs[128] = {
>     [ 0x20 ] = "mpeg encoder (saa6752hs)",
>     [ 0xa0 >> 1 ] = "eeprom",
>     [ 0xc0 >> 1 ] = "tuner (analog)",
>     [ 0x86 >> 1 ] = "tda9887",
>     [ 0x5a >> 1 ] = "remote control",
>     [ 0x30 ] = "kw210 remote control",
> };
> ...
> ***************************************************************
> Mods to /v4l-dvb/linux/drivers/media/video/ir-kbd-i2c.c
> ...
>
> static int get_key_kworld_210(struct IR_i2c *ir, u32 *ir_key, u32 *ir_raw)
> {
>     unsigned char b;
>
>     /* poll IR chip */
>     if (1 != i2c_master_recv(&ir->c,&b,1)) {
>         dprintk(1,"read error\n");
>         return -EIO;
>     }
>
>     /* it seems that 0x80 indicates that a button is still hold
>        down, while 0xff indicates that no button is hold
>        down. 0x80 sequences are sometimes interrupted by 0xFF */
>
>     dprintk(2,"key %02x\n", b);
>
>     if (b == 0xff)
>         return 0;
>
>     if (b == 0x80)
>         /* keep old data */
>         return 1;
>
>     *ir_key = b;
>     *ir_raw = b;
>     return 1;
> }
> ...
> /*Unless the timer is modified, you have time to make a cup of tea while
>   waiting
>  * for a response after pressing a key
>  */
> static int polling_interval = 100; /* ms */
> ...
> static void ir_timer(unsigned long data)
> {
>     struct IR_i2c *ir = (struct IR_i2c*)data;
>     schedule_work(&ir->work);
> }
> ...
> static void ir_work(struct work_struct *work)
> #endif
> {
> #if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20)
>     struct IR_i2c *ir = data;
> #else
>     struct IR_i2c *ir = container_of(work, struct IR_i2c, work);
> #endif
>
>     ir_key_poll(ir);
>     /*kw210 improve key-response time*/
>     mod_timer(&ir->timer, jiffies + polling_interval*HZ/1000);
> }
> ...
> /*needed to select between cards with same i2c address for remote
>   controller*/
> static int kWorld_210 = 0;
> ...
> case 0x30:
>     ir_type = IR_TYPE_OTHER;
>     /*add kw210 card*/
>     if (kWorld_210 == 1) {
>         name = "kWoRlD210";
>         ir->get_key = get_key_kworld_210;
>         ir_codes = ir_codes_kworld_210;
>     } else {
>         name = "KNC One";
>         ir->get_key = get_key_knc1;
>         ir_codes = ir_codes_empty;
>     }
>     break;
> ...
> static int ir_probe(struct i2c_adapter *adap)
> {
> ...
>
> static const int probe_bttv[] = { 0x1a, 0x18, 0x4b, 0x64, 0x30, -1};
> /*add 0x30*/
> static const int probe_saa7134[] = { 0x7a, 0x47, 0x71, 0x2d, 0x30, -1 };
> ...
> c->adapter = adap;
> for (i = 0; -1 != probe[i]; i++) {
>     c->addr = probe[i];
>     rc = i2c_master_recv(c, &buf, 0);
>     /*mod added here to "wake up" kw210 remote controller chip*/
>     if (adap->id == I2C_HW_SAA7134 && probe[i] == 0x30)
>     {
>         struct i2c_client c2;
>         memset (&c2, 0, sizeof(c2));
>         c2.adapter = adap;
>         for (c2.addr=127; c2.addr > 0; c2.addr--) {
>             if (0 == i2c_master_recv(&c2,&buf,0)) {
>                 dprintk(1,"Found another device, at addr 0x%02x\n",
>                     c2.addr);
>                 break;
>             }
>         }
>
>         /* Now do the probe. The controller does not respond
>            to 0-byte reads, so we use a 1-byte read instead. */
>         rc = i2c_master_recv(c,&buf,1);
>         rc--;
>         kWorld_210 = 1;
>     } else {
>         rc = i2c_master_recv(c,&buf,0);
>     }
>     dprintk(1,"probe 0x%02x @ %s: %s\n",
>         probe[i], adap->name,
>         (0 == rc) ? "yes" : "no");
>     if (0 == rc) {
>         ir_attach(adap, probe[i], 0, 0);
>         break;
>     }
> }
> kfree(c);
> return 0;
> }
> ***************************************************************
>
> Regards,
> Timf
>
> _______________________________________________
> linux-dvb mailing list
> linux-dvb at linuxtv.org
>
>
public class Sapphire extends Object

The Sapphire II Stream Cipher is designed to have the following properties:

The Sapphire Stream Cipher is very similar to a cipher I started work on in November 1993. It is also similar in some respects to the alleged RC-4 that was posted to sci.crypt recently. Both operate on the principle of a mutating permutation vector. Alleged RC-4 doesn't include any feedback of ciphertext or plain text, however. This makes it more vulnerable to a known plain text attack, and useless for creation of cryptographic check values. On the other hand, alleged RC-4 is faster.

The Sapphire Stream Cipher is used in the shareware product Quicrypt, which is available at and on the Colorado Catacombs BBS (303-772-1062). There are two versions of Quicrypt: the exportable version (with a session key limited to 32 bits but with strong user keys allowed) and the commercial North American version (with a session key of 128 bits). A variant of the Sapphire Stream Cipher is also used in the shareware program Atbash, which has no weakened exportable version.

The Sapphire II Stream Cipher is a modification of the Sapphire Stream Cipher designed to be much more resistant to adaptive chosen plaintext attacks (with reorigination of the cipher allowed). The Sapphire II Stream Cipher is used in an encryption utility called ATBASH2.

The Sapphire Stream Cipher is based on a state machine. The state consists of 5 index values and a permutation vector. The permutation vector is simply an array containing a permutation of the numbers from 0 through 255. Four of the bytes in the permutation vector are moved to new locations (which may be the same as the old location) for every byte output. The output byte is a nonlinear function of all 5 of the index values and 8 of the bytes in the permutation vector, thus frustrating attempts to solve for the state variables based on past output.
On initialization, the permutation vector (called the cards array in the source code below) is shuffled based on the user key. This shuffling is done in a way that is designed to minimize the bias in the destinations of the bytes in the array. The biggest advantage in this method is not in the elimination of the bias, per se, but in slowing down the process slightly to make brute force attack more expensive. Eliminating the bias (relative to that exhibited by RC-4) is nice, but this advantage is probably of minimal cryptographic value.

The index variables are set (somewhat arbitrarily) to the permutation vector elements at locations 1, 3, 5, 7, and a key dependent value (rsum) left over from the shuffling of the permutation vector (cards array).

Key setup (illustrated by the function initialize(), below) consists of three parts:

The keyrand() function returns a value between 0 and some maximum number based on the user's key, the current state of the permutation vector, and an index running sum called rsum. Note that the length of the key is used in keyrand(), too, so that a key like "abcd" will not result in the same permutation as a key like "abcdabcd".
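The key-dependent shuffle can be pictured with a short sketch. This is not the original keyrand()-driven routine (whose bias-reducing details are only partially described above); it is an illustrative key-seeded Fisher-Yates shuffle with the same general shape — start from the identity permutation, then swap positions based on key material:

```java
class KeyShuffle {
    // Illustrative only: a key-seeded shuffle of 0..255. The real
    // Sapphire initialize()/keyrand() pair also folds in the current
    // permutation state to reduce destination bias; that detail is
    // omitted here because the text only partially specifies it.
    static int[] shuffle(byte[] key) {
        int[] cards = new int[256];
        for (int i = 0; i < 256; i++) {
            cards[i] = i; // start from the identity permutation
        }
        int keypos = 0;
        int rsum = 0;
        for (int i = 255; i > 0; i--) {
            // Mix in the next key byte and the key length, as the text
            // notes keyrand() does, so "abcd" and "abcdabcd" differ.
            rsum = (rsum + (key[keypos] & 0xFF) + key.length) & 0xFF;
            keypos = (keypos + 1) % key.length;
            int j = rsum % (i + 1);
            int tmp = cards[i];
            cards[i] = cards[j];
            cards[j] = tmp;
        }
        return cards;
    }

    public static void main(String[] args) {
        int[] cards = shuffle("abcd".getBytes());
        boolean[] seen = new boolean[256];
        for (int v : cards) {
            seen[v] = true;
        }
        boolean allPresent = true;
        for (boolean s : seen) {
            allPresent &= s;
        }
        System.out.println(allPresent); // true: still a permutation of 0..255
    }
}
```

Whatever the swap rule, the result must remain a permutation of 0..255 — the shuffle only moves values, never duplicates or drops them.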
Each encryption involves updating the index values, moving (up to) 4 bytes around in the permutation vector, selecting an output byte, and adding the output byte bitwise modulo-2 (exclusive-or) to the plain text byte to produce the cipher text byte.

The index values are incremented by different rules. The index called rotor just increases by one (modulo 256) each time. Ratchet increases by the value in the permutation vector pointed to by rotor. Avalanche increases by the value in the permutation vector pointed to by another byte in the permutation vector pointed to by the last cipher text byte. The last plain text and the last cipher text bytes are also kept as index variables. See the function called encrypt(), below, for details.

If you want to generate random numbers without encrypting any particular ciphertext, simply encrypt 0. There is still plenty of complexity left in the system to ensure unpredictability (if the key is not known) of the output stream when this simplification is made.

Decryption is the same as encryption, except for the obvious swapping of the assignments to last_plain and last_cipher and the return value. See the function decrypt(), below.

The original implementation of this cipher was in Object Oriented Pascal, but C++ is available for more platforms.

For a fast way to generate a cryptographic check value (also called a hash or message integrity check value) of a message of arbitrary length: encrypt all of the bytes of the message, then encrypt a sequence of bytes counting down from 255 to 0.

There are several security issues to be considered. Some are easier to analyze than others. The following includes more "hand waving" than mathematical proofs, and looks more like it was written by an engineer than a mathematician. The reader is invited to improve upon or refute the following, as appropriate.

There are really two kinds of user keys to consider: (1) random binary keys, and (2) pass phrases. Analysis of random binary keys is fairly straight forward. Pass phrases tend to have much less entropy per byte, but the analysis made for random binary keys applies to the entropy in the pass phrase. The length limit of the key (255 bytes) is adequate to allow a pass phrase with enough entropy to be considered strong.

To be real generous to a cryptanalyst, assume dedicated Sapphire Stream Cipher cracking hardware. The constant portion of the key scheduling can be done in one cycle. That leaves at least 256 cycles to do the swapping (probably more, because of the intricacies of keyrand(), but we'll ignore that, too, for now). Assume a machine clock of about 256 MegaHertz (fairly generous). That comes to about one key tried per microsecond. On average, you only have to try half of the keys.
Also assume that trying the key to see if it works can be pipelined, so that it doesn't add time to the estimate. Based on these assumptions (reasonable for major governments), and rounding to two significant digits, the following key length versus cracking time estimates result:

    Key length, bits    Time to crack
    ----------------    -------------
     32                 35 minutes (exportable in qcrypt)
     33                 1.2 hours (not exportable in qcrypt)
     40                 6.4 days
     56                 1,100 years (kind of like DES's key)
     64                 290,000 years (good enough for most things)
     80                 19 billion years (kind of like Skipjack's key)
    128                 5.4E24 years (good enough for the clinically paranoid)

Naturally, the above estimates can vary by several orders of magnitude based on what you assume for attacker's hardware, budget, and motivation. In the range listed above, the probability of spare keys (two keys resulting in the same initial permutation vector) is small enough to ignore. The proof is left to the reader.

For a stream cipher, internal state space should be at least as big as the number of possible keys to be considered strong. The state associated with the permutation vector alone (256!) constitutes overkill.

If you have a history of stream output from initialization (or equivalently, previous known plaintext and ciphertext), then rotor, last_plain, and last_cipher are known to an attacker. The other two index values, flipper and avalanche, cannot be solved for without knowing the contents of parts of the permutation vector that change with each byte encrypted. Solving for the contents of the permutation vector by keeping track of the possible positions of the index variables and possible contents of the permutation vector at each byte position is not possible, since more variables than known values are generated at each iteration. Indeed, fewer index variables and swaps could be used to achieve security, here, if it were not for the hash requirements.
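The table's figures follow from simple arithmetic: at one key tried per microsecond, the expected search covers half the keyspace, i.e. 2^(bits-1) microseconds. A quick check (illustrative only; the table rounds to two significant digits):

```java
class CrackTime {
    // Expected brute-force time in seconds for an n-bit key at one key
    // tried per microsecond, searching half the keyspace on average.
    static double seconds(int bits) {
        return Math.pow(2, bits - 1) / 1_000_000.0;
    }

    public static void main(String[] args) {
        System.out.printf("32 bits: %.1f minutes%n", seconds(32) / 60);         // ~35.8 minutes
        System.out.printf("40 bits: %.1f days%n", seconds(40) / 86_400);        // ~6.4 days
        System.out.printf("56 bits: %.0f years%n", seconds(56) / 31_557_600.0); // ~1,100 years
    }
}
```

The computed values (35.8 minutes, 6.4 days, roughly 1,142 years) match the table's rounded entries for 32, 40, and 56 bits.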
The change in state altered with each byte encrypted contributes to an avalanche of generated check values that is radically different after a sequence of at least 64 bytes have been encrypted. The suggested way to create a cryptographic check value is to encrypt all of the bytes of a message, then encrypt a sequence of bytes counting down from 255 to 0. A single bit change in a message causes a radical change in the check value generated (about half of the bits change). This is an essential feature of a cryptographic check value.

Another good property of a cryptographic check value is that it is too hard to compute a message that results in a certain check value. In this case, we assume the attacker knows the key and the contents of a message that has the desired check value, and wants to compute a bogus message having the same check value. There are two obvious ways to do this attack. One is to solve for a sequence that will restore the state of the permutation vector and indices back to what it was before the alteration. The other one is the so-called "birthday" attack that is to cryptographic hash functions what brute force is to key search.

To generate a sequence that restores the state of the cipher to what it was before the alteration probably requires at least 256 bytes, since the index "rotor" marches steadily on its cycle, one by one. The values to do this cannot easily be computed, due to the nonlinearity of the feedback, so there would probably have to be lots of trial and error involved. In practical applications, this would leave a gaping block of binary garbage in the middle of a document, and would be quite obvious, so this is not a practical attack, even if you could figure out how to do it (and I haven't). If anyone has a method to solve for such a block of data, though, I would be most interested in finding out what it is. Please email me at <m.p.johnson@ieee.org> if you find one.
The "birthday" attack just uses the birthday paradox to find a message that has the same check value. With a 20 byte check value, you would have to find at least 80 bits to change in the text such that they wouldn't be noticed (a plausible situation), then try the combinations until one matches. 2 to the 80th power is a big number, so this isn't practical either. If this number isn't big enough, you are free to generate a longer check value with this algorithm. Someone who likes 16 byte keys might prefer 32 byte check values for similar strength.

Let us give the attacker a keyed black box that accepts any input and provides the corresponding output. Let us also provide a signal to the black box that causes it to reoriginate (revert to its initial keyed state) at the attacker's will. Let us also be really generous and provide a free copy of the black box, identical in all respects except that the key is not provided and it is not locked, so the array can be manipulated directly.

Since each byte encrypted only modifies at most 5 of the 256 bytes in the permutation vector, and it is possible to find different sequences of two bytes that leave the five index variables the same, it is possible for the attacker to find sets of chosen plain texts that differ in two bytes, but which have cipher texts that are the same for several of the subsequent bytes. Modeling indicates that as many as ten of the following bytes (although not necessarily the next ten bytes) might match. This information would be useful in determining the structure of the Sapphire Stream Cipher based on a captured, keyed black box. This means that it would not be a good substitute for the Skipjack algorithm in the EES, but we assume that the attacker already knows the algorithm, anyway.
This departure from the statistics expected from an ideal stream cipher with feedback doesn't seem to be useful for determining any key bytes or permutation vector bytes, but it is the reason why post-conditioning is required when computing a cryptographic hash with the Sapphire Stream Cipher.

Thanks to Bryan G. Olson's <olson@umbc.edu> continued attacks on the Sapphire Stream Cipher, I have come up with the Sapphire II Stream Cipher. Thanks again to Bryan for his valuable help.

Bryan Olson's "differential" attack of the original Sapphire Stream Cipher relies on both of these facts:

I have not yet figured out if Bryan's attack on the original Sapphire Stream Cipher had complexity of more or less than the design strength goal of 2^64 encryptions, but some conservative estimations I made showed that it could possibly come in significantly less than that. (I would probably have to develop a full practical attack to estimate the complexity more accurately, and I have limited time for that.) Fortunately, there is a way to frustrate this type of attack without fully developing it.

Denial of condition 1 above by increased alteration of the state variables is too costly, at least using the methods I tried. For example, doubling the number of index variables and the number of permutation vector items referenced in the output function of the stream cipher only doubles the cost of getting the data in item 1, above. This is bad crypto-economics. A better way is to change the output function such that the stream cipher output byte is a combination of two permutation vector bytes instead of one. That means that all possible output values can occur in the differential sequences of item 1, above.

Denial of condition 2 above is simpler.
By making the initial values of the five index variables dependent on the key, Bryan's differential attack is defeated, since the attacker has no idea which elements of the permutation vector were different between data sets, and exhaustive search is too expensive.

Are there any? Take your best shot and let me know if you see any. I offer no challenge text with this algorithm, but you are free to use it without royalties to me if it is any good.

This is a new (to the public) cipher, and an even newer approach to cryptographic hash generation. Take your best shot at it, and please let me know if you find any weaknesses (proven or suspected) in it. Use it with caution, but it still looks like it fills a need for reasonably strong cryptography with limited resources.

The intention of this document is to share some research results on an informal basis. You may freely use the algorithm and code listed above as far as I'm concerned, as long as you don't sue me for anything, but there may be other restrictions that I am not aware of to your using it. The C++ code fragment above is just intended to illustrate the algorithm being discussed, and is not a complete application. I understand this document to be Constitutionally protected publication, and not a munition, but don't blame me if it explodes or has toxic side effects.

 ___________________________________________________________
|                                                           |
|\  /| | Michael Paul Johnson  Colorado Catacombs BBS 303-772-1062 |
| \/ |o| PO Box 1151, Longmont CO 80502-1151 USA      John 3:16-17 |
| |  | / _ | mpj@csn.org aka mpj@netcom.com m.p.johnson@ieee.org |
| |||/ /_\ | CIS: 71331,2332 |
| |||\ ( | -. --- ----- .... |
| ||| \ \_/ | PGPprint=F2 5E A1 C1 A6 CF EF 71 12 1F 91 92 6A ED AE A9 |
|___________________________________________________________|

Regarding this port to Java, and not the original code, the following license applies: see the GNU Lesser General Public License for details.
Methods inherited from class java.lang.Object:
    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Fields:
    private int[] cards
    private int rotor
    private int ratchet
    private int avalanche
    private int lastPlain
    private int lastCipher
    private int keypos
    private int rsum

Constructor:
    public Sapphire(byte[] aKey)
        aKey - the cipher key

Methods:
    public byte cipher(byte b)
        b - the next byte to decipher

    public void burn()

    public void hashFinal(byte[] hash)
        hash - the destination

    private void initialize(byte[] key)
        Key size may be up to 256 bytes. Pass phrases may be used directly, with longer length
        compensating for the low entropy expected in such keys. Alternatively, shorter keys hashed
        from a pass phrase or generated randomly may be used. For random keys, lengths of from 4
        to 16 bytes are recommended, depending on how secure you want this to be.
        key - used to initialize the cipher engine.

    private void hashInit()

    private int keyrand(int limit, byte[] key)
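To make the moving parts concrete, here is a stripped-down toy with the same shape as the update rules described earlier — rotor stepping by one, ratchet advancing by cards[rotor], avalanche advancing through a double lookup off the last ciphertext byte, and decryption differing only in the swapped last_plain/last_cipher assignments. This is emphatically NOT the real Sapphire: the actual initialize()/keyrand() key schedule, the four-byte swap per output, and the real output selection are all simplified away.

```java
class ToyStreamCipher {
    private final int[] cards = new int[256];
    private int rotor, ratchet, avalanche, lastPlain, lastCipher;

    ToyStreamCipher(byte[] key) {
        for (int i = 0; i < 256; i++) {
            cards[i] = i;
        }
        // Toy key schedule (NOT Sapphire's initialize()/keyrand()).
        int k = 0;
        for (int i = 255; i > 0; i--) {
            k = (k + (key[i % key.length] & 0xFF)) % (i + 1);
            int t = cards[i];
            cards[i] = cards[k];
            cards[k] = t;
        }
    }

    private int nextMask() {
        rotor = (rotor + 1) & 0xFF;                                 // marches one by one
        ratchet = (ratchet + cards[rotor]) & 0xFF;                  // steps by cards[rotor]
        avalanche = (avalanche + cards[cards[lastCipher]]) & 0xFF;  // double lookup
        // Toy output selection; the real cipher draws on more permutation
        // bytes and also moves up to four of them per output byte.
        return cards[(cards[ratchet] + cards[avalanche] + lastPlain) & 0xFF];
    }

    int encrypt(int plain) {
        int cipher = (plain ^ nextMask()) & 0xFF;
        lastPlain = plain & 0xFF;
        lastCipher = cipher;
        return cipher;
    }

    int decrypt(int cipher) {
        int plain = (cipher ^ nextMask()) & 0xFF;
        lastPlain = plain;            // only difference from encrypt():
        lastCipher = cipher & 0xFF;   // the swapped assignments
        return plain;
    }

    public static void main(String[] args) {
        ToyStreamCipher enc = new ToyStreamCipher("demo key".getBytes());
        ToyStreamCipher dec = new ToyStreamCipher("demo key".getBytes());
        int c = enc.encrypt('A');
        System.out.println(dec.decrypt(c) == 'A'); // true
    }
}
```

Because the decrypting instance sees the same lastPlain/lastCipher sequence as the encrypting one, the two mask streams stay synchronized and the XOR round-trips — the same symmetry the document describes between encrypt() and decrypt().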
Combining Silverlight and Windows Azure projects

Standard Silverlight applications require that they be hosted on HTML pages so that they can be loaded in a browser. Developers who work with the .Net framework will usually host this page within an ASP.Net website. The easiest way to host a Silverlight application on Azure is to create a single web role that contains an ASP.Net application to host the Silverlight application. Hosting the Silverlight application in this way enables you, as a developer, to take advantage of the full .Net framework to support your Silverlight application. Supporting functionality such as WCF services, RIA services, Entity Framework, and so on can be provided. In the upcoming chapters, we will explore ways in which RIA services, OData, Entity Framework, and a few other technologies can be used together. For the rest of this chapter, we will focus on the basics of hosting a Silverlight application within Azure and integrating a hosted WCF service.

Creating a Silverlight or Azure solution

Your system should already be fully configured with all Silverlight and Azure tools. In this section, we are going to create a simple Silverlight application that is hosted inside an Azure web role. This will be the basic template used throughout the book as we explore different ways in which we can integrate the technologies:

- Start Visual Studio as an administrator. You can do this by opening the Start Menu and finding Visual Studio, then right-clicking on it and selecting Run as Administrator. This is required for the Azure compute emulator to run successfully.
- Create a new Windows Azure Cloud Service. The solution name used in the following example screenshot is Chapter3Exercise1:
- Add a single ASP.Net Web Role as shown in the following screenshot. For this exercise, the default name of WebRole1 will be used.
The name of the role can be changed by clicking on the pencil icon next to the WebRole1 name:

- Visual Studio should now be loaded with a single Azure project and an ASP.Net project. In the following screenshot, you can see that Visual Studio is opened with a solution named Chapter3Exercise1. The solution contains a Windows Azure Cloud project, also called Chapter3Exercise1. Finally, the ASP.Net project can be seen, named WebRole1:
- Right-click on the ASP.Net project named WebRole1 and select Properties.
- In the WebRole1 properties screen, click on the Silverlight Applications tab.
- Click on Add to add a new Silverlight project into the solution. The Add button has been highlighted in the following screenshot:
- For this exercise, rename the project to HelloWorldSilverlightProject. Click on Add to create the Silverlight project. The rest of the options can be left at their default settings, as shown in the following screenshot.
- Visual Studio will now create the Silverlight project and add it to the solution. The resulting solution should now have three projects, as shown in the following screenshot. These include the original Azure project, Chapter3Exercise1; the ASP.Net web role, WebRole1; and the third, new project, HelloWorldSilverlightProject:
- Open MainPage.xaml in design view, if not already open.
- Change the grid to a StackPanel.
- Inside the StackPanel, add a button named button1 with a height of 40 and content that displays Click me!.
- Inside the StackPanel, underneath button1, add a text block named textBlock1 with a height of 20.
- The final XAML should look similar to this code snippet:

<UserControl>
    <StackPanel>
        <Button x:Name="button1" Height="40" Content="Click me!" />
        <TextBlock x:Name="textBlock1" Height="20" />
    </StackPanel>
</UserControl>

- Double-click on button1 in the designer to have Visual Studio automatically create a click event. The final XAML in the designer should look similar to the following screenshot:
- Open the MainPage.xaml.cs code-behind file and find the button1_Click method. Add code that will update textBlock1 to display Hello World and the current time, as follows:

private void button1_Click(object sender, RoutedEventArgs e)
{
    textBlock1.Text = "Hello World at " + DateTime.Now.ToLongTimeString();
}

- Build the project to ensure that everything compiles correctly.

Now that the solution has been built, it is ready to be run and debugged within the Windows Azure compute emulator. The next section will explore what happens while running an Azure application on the compute emulator.

Running an Azure application on the Azure compute emulator

With the solution built, it is ready to run on the Azure simulation: the compute emulator. The compute emulator is the local simulation of the compute environment that Microsoft runs on the Azure servers it hosts. When you start debugging by pressing F5 (or by selecting Debug | Start Debugging from the menu), Visual Studio will automatically package the Azure project, then start the Azure compute emulator simulation. The package will be copied to a local folder used by the compute emulator. The compute emulator will then start a Windows process to host or execute the roles, one for each instance requested for each role. Once the compute emulator has been successfully initialized, Visual Studio will then launch the browser and attach the debugger to the correct places. This is similar to the way Visual Studio handles debugging of an ASP.Net application with the ASP.Net Development Server. The following steps will take you through the process of running and debugging applications on top of the compute emulator:

- In Solution Explorer, inside the HelloWorldSilverlightProject, right-click on HelloWorldSilverlightProjectTestPage.aspx, and select Set as startup page.
- Ensure that the Azure project (Chapter3Exercise1) is still set as the start-up project.
- In Visual Studio, press F5 to start debugging (or, from the menu, select Debug | Start Debugging). Visual Studio will compile the project and, if successful, begin to launch the Azure compute emulator as shown in the following screenshot:
- Once the compute emulator has been started and the Azure package deployed to it, Visual Studio will launch Internet Explorer. Internet Explorer will display the page set as the start-up page (which was set in an earlier step to HelloWorldSilverlightProjectTestPage.aspx).
- Once the Silverlight application has been loaded, click on the Click me! button. The TextBlock should be updated with the current time, as shown in the following screenshot:

Upon completion, you should now have successfully deployed a Silverlight application on top of the Windows Azure compute emulator. You can now use this base project to build more advanced features and integration with other services.

Consuming an Azure-hosted WCF service within a Silverlight application

A standalone Silverlight application will not be able to do much by itself. Most applications will need to consume data from a data source, for example to get a list of products or customer orders. A common way to send data between .Net applications is through WCF services. The following steps will explore how to add a WCF service to your Azure web role, and then consume it from within the Silverlight application:

- In Visual Studio, right-click on the ASP.Net web role project (WebRole1) and click Add | New Item.
- Add a new WCF service named HelloWorldService.svc as shown in the following screenshot:
- Once the WCF service has been added into the project, three new files will be added to the project: IHelloWorldService.cs, HelloWorldService.svc, and HelloWorldService.svc.cs.
- Open IHelloWorldService.cs and change the interface so that it defines a single method named GenerateHelloWorldGreeting that takes no parameters and returns a string. The entire file should look similar to the following code snippet:
- Open HelloWorldService.svc.cs and modify the code so that it implements the GenerateHelloWorldGreeting method as follows (the method in the code snippet returns Hello World, as well as the current server time):
- Add a breakpoint on the line of code that returns the "Hello world" message. This breakpoint will be used in a later step.
- Build the solution to ensure there are no syntax errors. If the solution does not build, then runtime errors can occur when trying to add the service reference.
- Right-click on the Silverlight project HelloWorldSilverlightProject and select Add Service Reference. Click on Discover to allow Visual Studio to automatically detect the WCF service in the solution. Select the service, name the reference HelloWorldServiceReference as shown in the screenshot, and click OK:
- With the WCF service reference added to the Silverlight application, we will change the functionality of the Click me! button. Currently, when clicked, the event handler updates the TextBlock with a "Hello world" message and the current time on the client side. This will be changed so that clicking on the button causes the Silverlight application to call the WCF service and have the "Hello world" message generated on the server side. In Visual Studio, within the Silverlight project, open MainPage.xaml.cs.
- Modify the button1_Click method so that it calls the WCF service and updates textBlock1 with the returned value. Due to the dynamic nature of developing with Azure, the address of the service endpoint can change many times through the development lifecycle.
Each time Visual Studio deploys the project onto the compute emulator, a different port number can be assigned if the previous deployment has not been de-provisioned yet. Deploying to the Windows Azure staging environment will also give it a new address, while deploying to production will provide yet another endpoint address. The following code shows one technique to automatically handle the Silverlight application being hosted at different addresses. The Silverlight application invokes the WCF service by accessing it relative to where the Silverlight application is currently being hosted. This is in contrast to the usual behavior of calling WCF services, which requires an absolute address that would need to be updated with each deployment.

- Compile the application to check that there are no syntax errors.
- Press F5 to run the whole application in debug mode. The Azure compute emulator should start up, and Internet Explorer should be launched again with the Silverlight application.
- Click on the Click me! button. The Silverlight client will call the WCF service, causing Visual Studio to hit the breakpoint that was set earlier inside the WCF service. This shows that even though we are running and debugging a Silverlight application, we are still able to debug WCF services that are being hosted inside the Azure compute emulator.
- Remove the breakpoint and continue the execution. Click on the button a few more times to watch the TextBlock update itself. The results should look similar to the following screenshot. Be sure to keep the browser open for the next steps:
- Open the Azure compute emulator. Do this by right-clicking on the Azure icon in the system tray, and then clicking on Show Compute Emulator UI.
- The compute emulator UI should now be open and look similar to the following screenshot. In the screenshot, you can see that there is a single deployment (the 29th one that has been deployed to the compute emulator).
The deployment has one Azure project named Chapter3Exercise1. This Azure project has a single web role named WebRole1, which is currently executing a single instance. Clicking on the instance will show the output terminal of that instance, where the trace information can be seen being output to the window:

using System.ServiceModel;

namespace WebRole1
{
    [ServiceContract]
    public interface IHelloWorldService
    {
        [OperationContract]
        string GenerateHelloWorldGreeting();
    }
}

using System;

namespace WebRole1
{
    public class HelloWorldService : IHelloWorldService
    {
        public string GenerateHelloWorldGreeting()
        {
            var currentTime = DateTime.Now.ToLongTimeString();
            return "Hello World! The server time is " + currentTime;
        }
    }
}

using System;
using System.ServiceModel;
using System.Windows;
using System.Windows.Controls;
using HelloWorldSilverlightProject.HelloWorldServiceReference;

namespace HelloWorldSilverlightProject
{
    public partial class MainPage : UserControl
    {
        public MainPage()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            // Find the URL for the current Silverlight .xap file.
            // Go up one level to get to the root of the site.
            var url = Application.Current.Host.Source.OriginalString;
            var urlBase = url.Substring(0, url.IndexOf("/ClientBin", StringComparison.InvariantCultureIgnoreCase));

            // Create a proxy object for the WCF service. Use the root
            // path of the site and append the service name.
            var proxy = new HelloWorldServiceClient();
            proxy.Endpoint.Address = new EndpointAddress(urlBase + "/HelloWorldService.svc");
            proxy.GenerateHelloWorldGreetingCompleted += proxy_GenerateHelloWorldGreetingCompleted;
            proxy.GenerateHelloWorldGreetingAsync();
        }

        void proxy_GenerateHelloWorldGreetingCompleted(object sender, GenerateHelloWorldGreetingCompletedEventArgs e)
        {
            textBlock1.Text = e.Result;
        }
    }
}

Relative WCF services

The code in the code snippet shows a technique for calling WCF services relative to the currently executing Silverlight application. This technique means that the Silverlight application is not dependent on the service address being updated for each deployment. This allows the whole ASP.Net application to be hosted and deployed on a number of environments without configuration changes, such as the ASP.Net development server, Azure compute emulator, Azure staging or production environments, or on any other IIS host.

Configuring the number of web roles

The power in Azure comes from running multiple instances of a single role and distributing the computational load. It is important to understand how to configure the size and number of instances of a role that should be initialized. The following steps will explain how this can be done within Visual Studio:

- Stop debugging the application and return to Visual Studio.
- Inside the Azure project Chapter3Exercise1, right-click on WebRole1 and select Properties. The role properties window is used to specify both the size of the instances that should be used, as well as the number of instances that should be used. The VM size has no effect on the compute emulator, as you are still constrained by the local development machine. The VM size setting is used when the package is deployed onto the Windows Azure servers. It defines the number of CPUs and the amount of RAM allocated to each instance.
These settings determine the charges Microsoft will accrue to your account. In the earlier stages of development, it can be useful to set the VM size to Extra Small to save consumption costs. This can be done in situations where performance is not a high requirement, such as when a few developers are testing their deployments.

Extra small instances

The extra small instances are great while developing, as they are much cheaper instances to deploy. However, they are low-resourced and also have bandwidth restrictions enforced on them. They are not recommended for use in a high-performance production environment.

The Instance count is used to specify the number of instances of the role that should be created. Creating multiple instances of a role can assist in testing concurrency while working with the compute emulator. Be aware that you are still constrained by the local development box; setting this to a very large number can lower the performance of your machine.

- Set the Instance count to 4 as shown in the following screenshot. If you are planning to deploy the application to the Windows Azure servers, it is a good idea to set the VM size to Extra Small while testing:
- Open HelloWorldService.svc.cs and modify the service implementation. The service will now use the Windows Azure SDK to retrieve the ID of the instance that is currently handling the request:
- Press F5 to debug the project again.
- Open the Azure compute emulator UI. There should now be four instances handling requests.
- In Internet Explorer, click on the Click me! button multiple times. The text will update with the server time and the instance that handled the request. The following screenshot shows that instance 1 was handling the request. If the instance ID does not change after multiple clicks, try launching a second browser and clicking again.
Sometimes, affinity can cause requests to get sticky and stay with a single instance:

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace WebRole1
{
    public class HelloWorldService : IHelloWorldService
    {
        public string GenerateHelloWorldGreeting()
        {
            var currentTime = DateTime.Now.ToLongTimeString();
            var instanceId = RoleEnvironment.CurrentRoleInstance.Id;
            return string.Format("Hello World! The server time is {0}. Processed by {1}", currentTime, instanceId);
        }
    }
}

This exercise demonstrated requests for a WCF service being load balanced over a number of Azure instances. The following diagram shows that as requests from the Silverlight client come in, the load balancer will distribute the requests across the instances. It is important to keep this in mind while developing Azure services and to develop each role as a stateless service when working with multiple instances. Each request may be handled by a different instance each time, requiring you to not keep any session state inside an instance:

Summary

In this article, we created a new Silverlight application that was hosted with an Azure project. We then created a WCF service that was also hosted within the Azure project, and consumed it from the Silverlight application. The WCF service was then scaled to 4 instances to demonstrate how WCF requests can be load balanced across multiple instances. A technique was also shown to allow a WCF service to be consumed through a relative path, allowing the website to be hosted anywhere without the service address needing to be changed for each deployment.
This is the fourth article in the CI/CD using Jenkins series, wherein we set up automation of fetching, building, testing and deploying any codebase using the Jenkins automation server. We have been looking at a particular use case of automating the various stages of an aspnetcore codebase for our understanding. So far we have seen how to fetch an aspnetcore codebase from a Git repository, and how to automate the build of the fetched codebase and generate build artifacts. In this article we shall look at how we can automate executing test scripts on our fetched codebase after a build and generate a report out of it.

We shall first start by refactoring our existing codebase (a single API project ReadersApi.csproj) and bringing it under a solution file for maintainability. Since we are using aspnetcore for demonstration, we can make things easier by using the dotnet core CLI to add a new solution file. I shall move all the ReadersApi project files under a sub directory Readers.Api, and then under the root directory I shall open a command terminal (or command prompt) and add a new solution with the below command.

> dotnet new sln --name ReadersApi

This shall add an empty solution file of the name ReadersApi.sln under the root directory. Next, I add the existing csproj under the Readers.Api sub directory onto the solution file.

> dotnet sln ReadersApi.sln add Readers.Api/ReadersApi.csproj

This shall add the csproj as well as link the existing API project onto the solution file. Next, I shall add a new Test project under the root directory which shall contain all the unit tests and integration tests I would want to write for the existing ReadersApi project.

> dotnet new xunit --name Readers.Tests

This shall create a sub folder Readers.Tests along with all the test project files under the sub folder. We finish the setup by adding the newly created Test project onto the solution we have under the root directory.
> dotnet sln ReadersApi.sln add Readers.Tests/Readers.Tests.csproj

Read: Getting Started with Dotnet Core CLI

Now that we have things ready to go, we shall add a new UnitTest class file which will contain the unit tests for the UserController existing under the ReadersApi. To keep things simple, I shall write a single unit test for the success case when the FetchAllUsers endpoint is invoked. For a unit test, a controller is simply yet another plain class with dependencies injected over the constructor (if any), and we can test the functionality by simply creating an object of the UserController class. The UserController class looks as shown below:

namespace ReadersApi.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class UserController : ControllerBase
    {
        IUnitOfWork repo;

        public UserController(IUnitOfWork repo)
        {
            this.repo = repo;
        }

        [HttpGet]
        [Route("allusers")]
        public IEnumerable<User> GetUsers()
        {
            return repo.UserRepo.Find(x => x.Id != 0);
        }
    }
}

The UserController class has a dependency of type IUnitOfWork and uses a property UserRepo of type IUserRepo from the IUnitOfWork instance. To unit test this, we shall use a mocking unit test to verify the behaviour of the GetUsers() method. If there are no users available in the system (with a valid Id), then the returned list shall be empty; if there are any users found, the result set shall not be empty. We shall verify this functionality using xUnit and Moq, which are open-source testing frameworks.
Read: Writing Mocking Unit Tests using xUnit and Moq in ASP.NET Core

The unit test for the above controller shall be as shown below:

namespace Readers.Tests
{
    public class UserController_Tests
    {
        IUnitOfWork unitOfWork;

        public UserController_Tests()
        {
            // Arrange mocks
            var repomoq = new Mock<IUserRepo>();
            repomoq.Setup(x => x.Find(It.IsAny<Expression<Func<User, bool>>>()))
                   .Returns(new List<User>() { new User() });
            var repo = repomoq.Object;

            var uowmoq = new Mock<IUnitOfWork>();
            uowmoq.Setup(x => x.UserRepo).Returns(repo);
            unitOfWork = uowmoq.Object;
        }

        [Fact]
        public void GetUsers_Success()
        {
            // Arrange
            var userController = new UserController(unitOfWork);

            // Act
            var users = userController.GetUsers().ToList();

            // Assert
            Assert.True(users.Count > 0);
        }
    }
}

We set up a mock alternative for the IUnitOfWork interface, pass the created mock to the UserController class, call the GetUsers() method on the controller object, and assert on the result. This completes the setup of the Test project, and we can run it with the below command using the CLI:

> dotnet test

Configuring Jenkins to Capture Test Results

We can pass additional arguments to the dotnet test command to write the test results onto an xml file, which shall be of the MSTest format. We shall now configure Jenkins to read data from the generated test results file once the tests are executed and generate build test stats for us. The command shall be as follows:

> dotnet test --logger:"trx;logFileName=report.xml"

We shall configure the same command under the Jenkins job we created earlier, under a new build step. Observe that we have specified a path %WORKSPACE%/tests/report.xml with an environment variable %WORKSPACE%. It means that the variable is substituted with the current absolute path where the job is being executed, and so the report file shall be generated under the job execution path under a sub directory tests.
Next, we shall add a plugin to the build step which takes care of the test report analysis. Since we are using xUnit for test execution, we shall add the xUnit plugin to the Jenkins server, which can then be added to the pipeline.

Go to the Manage Plugins section under the Jenkins drop menu and search for xUnit under the Available plugins section. Tick the plugin and then click on Install without Restart. This shall install the xUnit plugin for test reporting on Jenkins.

Once done, we shall add this plugin as a build step after the test execution build step. Once we add the step, we shall select the Report Type as MSTest (since the report generated will be of MSTest format) and then, under the pattern, we give the below expression:

tests/*.xml

By default the plugin searches for any xml report file starting from the %WORKSPACE% path. And since we place all the resultant report files under the tests sub folder of the workspace, we give the pattern in a similar fashion so that the reports generated under the tests folder shall be taken for report generation.

Once this is done, we save the job and let the job run again. We can see in the build log that the recorder looks for the file and then creates a report based on it. Once this is done, we move back to the job home page, where we can now see a stat graph generated based on the test results per job run. And when we click on the Latest Test Result link (third in the main section), we move to the latest test result page. Clicking on History in the left gives us the total test stats for all the builds since the test report was configured.

In this way, we can configure test script execution and then record the test reports into stats for the builds in a fully automated fashion using the xUnit plugin in Jenkins.
Package Details: xtitle-git 25-1

Dependencies (3)
- libxcb (libxcb-git)
- xcb-util-wm
- git (git-git) (make)

Sources (1)

Latest Comments

bidulock commented on 2016-06-09 23:58
Sorry, normally do that but missed this one: corrected now.

StephenBrown2 commented on 2016-06-09 21:48
Can the source url be modified to: git+ ? Having trouble with firewalls blocking git connections, and having to modify the PKGBUILD every time is tiresome.

kstolp commented on 2016-01-20 01:53
The xcb-util dependency seems to have been removed, and I'm getting build errors without it.

baskerville commented on 2014-06-22 16:58
Thanks, fixed.

dcell commented on 2014-05-23 03:58
This seems to be missing the xcb-util dependency. I get build errors without it.

cc -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -std=c99 -pedantic -Wall -Wextra -I/usr/include -D_POSIX_C_SOURCE=200112L -DVERSION=\"0.1\" -Os -c -o xtitle.o xtitle.c
xtitle.c:10:27: fatal error: xcb/xcb_event.h: No such file or directory
 #include <xcb/xcb_event.h>
                           ^
compilation terminated.
Makefile:24: recipe for target 'xtitle.o' failed
make: *** [xtitle.o] Error 1
The concept of an array is common to most programming languages. In an array, multiple values of the same data type can be stored with one variable name. The use of arrays allows for the development of smaller and more clearly readable programs.

Arrays are classified as aggregate data types. Unlike atomic data types, which cannot be subdivided, aggregate data types are composed of one or more elements of an atomic or another aggregate data type. Simply put, aggregate data types can be subdivided or broken down.

To access individual elements of an array a subscript must be used. The subscript identifies the sub-element or component of the array that is to be accessed. Subscripts must start at 0 and proceed in increments of 1.

Arrays can be of any data type supported by C, including constructed data types, and arrays can be of any storage class except 'register'. Arrays can be declared as local variables or global variables. Initialization of an array when it is declared is allowed for both locally and globally declared arrays. Initializing an array when it is declared does not require the presence of a dimension on the array.

char name[] = "John Smith";
int age[] = {21,32,43,22,25};

main()
{
   .
   .
   .
}

OR

main()
{
   int age[] = {21,32,43,22,25};
   .
   .
   .
}

If an array is declared to hold ten elements, the subscripts start at 0 and the highest subscript allowed for that array is 9. There is no bounds checking on arrays. It is the programmer's responsibility to keep the subscript within the bounds of the array. For the above array age, which has five elements, if a statement such as age[6] = 10; is made in the program, no error or warning will be generated by the compiler. The array age should only support subscripts ranging from 0 to 4. A subscript of 6 is not within the bounds of the array, but C/C++ will allow the statement. In most cases the program will terminate with a runtime error because of the above statement.
Two and three dimensional arrays are allowed; in fact there is no limit on the number of dimensions for an array.

int aver[10][20];
float sales[50][10][2];

Values in a multi-dimensional array are stored in row-major order, meaning that if an array is declared as

int data[3][4];

the values are stored in memory in this order:

[0][0] [0][1] [0][2] [0][3] [1][0] [1][1] [1][2] [1][3] ... and so forth.

Therefore, when initializing a multi-dimensional array at declaration time, the row designator does not have to be stated, but the column length must be stated, such as:

float monthlySalesByProduct[][4] = {
   {100.50, 234.32, 987.98} ,
   {324.56, 123.43, 876.83} ,
   {765.23, 8743.28, 27388.87}
   ...
};

Notice the use of nested {...} pairs to set off the column values for a row. This is required in order to delineate the boundaries of the rows.

The two-dimensional array above can also be called a square array. A square array can be used to store arrays of strings.

#include <iostream.h>

int main()
{
   char students[10][25];   // holds names of students
   int idx;

   for( idx = 0; idx < 10; ++idx )
   {
      cout << "Enter a students first name: ";
      cin >> students[idx];
   }

   cout << "\nThe students in the class are:" << endl;

   for( idx = 0; idx < 10; ++idx )
      cout << students[idx] << endl;

   return 0;
}

Notice the use of students[idx] in the above example. The name of each row in the multi-dimensional array is the name of an array, which means that it represents the beginning address of where data is stored. You can use the name of a row the same as you would the name of an array. Also, notice that no matter how short the name of the student is, the same number of characters is allocated for each row, therefore leading to excessive use of memory space.

Arrays are automatically passed by reference to a function. The name of an array is the beginning address of where the array data is stored in memory, also known as a 'pointer constant'.
#include <iostream.h>

int main()
{
   int list[10]
      ,max = 10
      ,indx = 0
      ,largest
      ,num_ele
      ;
   int find_max( int [], int );

   while( indx < max)
   {
      cout << "\nEnter a number: ";
      cin >> list[indx];
      ++indx;
   }

   largest = find_max( list, max );
   cout << "\nLargest number is " << largest << endl;
   return 0;
}

int find_max( int list2[], int size )
{
   int idx
      ,highest
      ;

   for(highest = list2[0], idx = 1; idx < size; ++idx)
      if(highest < list2[idx])
         highest = list2[idx];

   return(highest);
}

In the find_max() function, list2[] is an array of undetermined length. On the call to find_max() the argument list is stated without subscripts; this passes the address of list, because list is an array. The name of an array is a pointer constant that represents the beginning address of the array. Therefore, when list is stated in the call to find_max(), the address that list represents is actually passed to the function. The receiving argument list2[] only receives the address of where the array begins in memory. With the array notation used for the receiving argument, list2[] can be treated as if it is an alias that allows for the direct manipulation of the data stored in list.

Multi-dimensional arrays as arguments follow the same syntax, except that on the receiving side the last dimension must be stated. This is so the compiler knows where one row of data ends and another row begins.
#include <iostream.h>

#define ROWS 10
#define COLUMNS 2

int main()
{
   float agents[ROWS][COLUMNS];
   int indx = 0;
   int maxex( float [][COLUMNS], int );

   cout << "\nEnter 3-digit agent"
        << " number then travel "
        << "expenses(001 1642.50)";

   do
   {
      cout << "\nAgents # and expenses:";
      cin >> agents[indx][0] >> agents[indx][1];
   }while( agents[indx++][0] != 0 );

   indx--;
   indx = maxex(agents, indx);

   cout << "\nAgent with highest expenses: " << agents[indx][0];
   cout << " Amount: " << agents[indx][1];
   return 0;
}

int maxex(float list[][COLUMNS], int size)
{
   int idx
      ,maxidx
      ;
   float max
      ;

   max = list[0][1];
   maxidx = 0;

   for(idx = 1; idx < size; ++idx)
      if(max < list[idx][1])
      {
         max = list[idx][1];
         maxidx = idx;
      }

   return(maxidx);
}

A string such as "John Smith" is laid out in memory one character per byte, followed by a NULL byte:

memory address    value at address
2000              J
2001              o
2002              h
2003              n
2004              (space)
2005              S
2006              m
2007              i
2008              t
2009              h
200A              \0

An array that will be used as a string must be declared with one additional byte more than is needed for data. This allots space for the NULL byte, which is the signal indicating the end of the string.

//
// looks at string in memory
//
#include <iostream.h>
#include <stdio.h>
#include <string.h>

int main()
{
   char name[81];
   int indx;

   cout << "\nEnter your name: ";
   gets(name);   // used because of whitespace input

   cout << "\nThe string " << name
        << " is stored at address "
        << hex << unsigned(name);

   for(indx = 0; indx < strlen(name); ++indx)
   {
      cout << "\nADDR: " << hex << unsigned(name + indx);
      cout << " CHAR: " << name[indx];
      cout << " DEC: " << dec << int(name[indx]);
   }

   return 0;
}

Strings must be manipulated with standard functions supplied with the compiler function libraries.

**char *strcat(char *str1, char *str2);**
Appends str2 to str1, terminates the resulting string with a NULL character and returns the address of the concatenated string, str1.

**char *strchr(char *str, char c);**
Returns the address of the first occurrence of c in str; the function returns NULL if the character is not found.
**int strcmp(char *str1, char *str2);**
Compares str1 and str2 lexicographically and returns a value indicating their relationship.

Value    Meaning
< 0      str1 less than str2
  0      str1 equal to str2
> 0      str1 greater than str2

**int strcmpi(char *str1, char *str2);**
Case-insensitive version of strcmp. Strings are compared without regard to case.

**char *strcpy(char *str1, char *str2);**
Copies str2, including the null terminating character, to the location specified by str1 and returns str1.

**int strcspn(char *str1, char *str2);**
Returns the index of the first character in str1 that belongs to the set of characters specified by str2.

**char *strdup(char *str);**
Allocates storage space for a copy of str and returns the address of that storage space containing the copied string. Returns NULL if storage could not be found.

**int strlen(char *str);**
Returns the length in bytes of str, not including the terminating null character.

**char *strlwr(char *str);**
Converts any uppercase letters in the given null terminated string to lowercase.

**char *strncat(char *str1, char *str2, int n);**
Appends at most the first n characters of str2 to str1, terminates the resulting string with a null character and returns the address of the concatenated str1.

**int strncmp(char *str1, char *str2, int n);**
Compares at most the first n characters of str1 and str2 lexicographically and returns a value indicating the relationship between the substrings.

Value    Meaning
< 0      substring1 < substring2
  0      substring1 = substring2
> 0      substring1 > substring2

**char *strncpy(char *str1, char *str2, int n);**
Copies exactly n characters of str2 to str1 and returns str1.

**char *strnset(char *str, char c, int n);**
Sets at most the first n characters of str to the character c and returns the address of the altered str.

**char *strtok( char *str, char *delims );**
Searches one string for tokens, which are separated by delimiters defined in a second string.
#include <string.h>
#include <stdio.h>

int main()
{
   char input[16] = "abc,d";
   char *p;

   /*
    * strtok places a NULL terminator
    * in front of the token, if found
    */
   p = strtok( input, "," );
   if( p )
      printf( "%s\n", p );

   /*
    * a second call to strtok using a
    * NULL as the first parameter returns
    * a pointer to the character following
    * the token
    */
   p = strtok( NULL, "," );
   if( p )
      printf( "%s\n", p );

   return 0;
}
Hi, I'm having trouble and hoping someone can help me out or set me in the right direction.

I'm trying to have a form with an input field and submit button on my site. Then when someone enters text into the field, it will add a comment to a Trello Card. I have my Trello API key and Token already. I've looked at tons of examples but continue to get lost. Here's a few sites I've checked:

So I think the code should look something like this, but a lot is copy/paste, so I don't totally understand what I'm doing:

Frontend:

import {putAddToCard} from 'backend/aModule';

$w.onReady(function () {
});

export function button1_click(event, $w) {
    putAddToCard().then(resp => console.log(resp));
}

Backend:

import {fetch} from 'wix-fetch';

export function putAddToCard() {
    const key = '';
    const token = '';
    const url = `{key}&token=${token}`;

    return fetch(url)
        .then(resp => {
            if (resp.ok) {
                return resp.json();
            }
        })
        .catch(err => console.log(err));
}

Thanks, I appreciate any help.
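For reference, Trello's REST API adds a comment to a card with a POST to /1/cards/{cardId}/actions/comments, passing text, key and token as query parameters, and wix-fetch's fetch takes the same (url, options) arguments as the browser Fetch API. The sketch below is illustrative only — the function names are mine, and the fetch function is injected as a parameter so the URL construction can be exercised outside the Wix runtime (in a real backend web module you would import { fetch } from 'wix-fetch' and pass it in):

```javascript
// Builds Trello's documented "add comment" endpoint URL:
//   POST https://api.trello.com/1/cards/{cardId}/actions/comments
function buildCommentUrl(cardId, text, key, token) {
  const params = new URLSearchParams({ text, key, token });
  return `https://api.trello.com/1/cards/${cardId}/actions/comments?${params}`;
}

// fetchFn is injected so the helper can be tested outside Wix;
// in production, pass wix-fetch's fetch here.
function addCommentToCard(fetchFn, cardId, text, key, token) {
  return fetchFn(buildCommentUrl(cardId, text, key, token), { method: 'post' })
    .then((resp) => {
      if (!resp.ok) {
        throw new Error(`Trello returned ${resp.status}`);
      }
      return resp.json();
    });
}
```

From the page code, the button's click handler would then call the backend wrapper and read the submitted comment text from the input field, much like the putAddToCard().then(...) call already shown.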
Hello C Board, I'm a first time caller :) I'm reading through C++ Without Fear (Overland). Okay, here's the example exercise I'm having a problem with:

//the project name is readtxt2 and the text file is named output.txt
#include <stdafx.h>
#include <iostream>
#include <fstream>
#include <string.h>
using namespace std;

int main(int argc, char *argv[])
{
    int c;                // input character
    int i;                // loop counter
    char filename[81];
    char input_line[81];

    if (argc > 1)
        strncpy(filename, argv[1], 80);
    else
    {
        // no command-line argument given: ask for the file name
        cout << "Input a file name and press ENTER: ";
        cin.getline(filename, 80);
    }

    ifstream file_in(filename);
    if (!file_in)
    {
        cout << "File " << filename << " cannot be opened.";
        return -1;
    }

    while (true)
    {
        // print a screenful of lines at a time
        for (i = 1; i <= 24 && !file_in.eof(); i++)
        {
            file_in.getline(input_line, 80);
            cout << input_line << endl;
        }
        if (file_in.eof())
            break;
        cout << endl << "More? (Press 'Q' and ENTER to quit.)";
        cin.getline(input_line, 80);
        c = input_line[0];
        if (c == 'Q' || c == 'q')
            break;
    }
    return 0;
}

Okay, that is the sample answer, I didn't change it.

I am told I can enter in the command line this: readtxt2 output.txt and it should open output.txt and display it for me, but when I run it and enter: readtxt2 output.txt I am told the file cannot be opened; now when I just enter: output it works fine. Why is that? btw I'm using Microsoft Visual Studio 2010 on Windows Vista. I know this is a beginner question, so please bear with me. I can use all the help I can get, since I am trying to learn by homeschool, thanks.
https://cboard.cprogramming.com/cplusplus-programming/135468-command-line-problems-printable-thread.html
Use conanfile.py for consumers

You can use a conanfile.py for installing/consuming packages, even if you are not creating a package with it. You can also use the existing conanfile.py in a given package while developing it to install dependencies. There's no need to have a separate conanfile.txt.

Let's take a look at the complete conanfile.txt from the previous timer example with the POCO library, in which we have added a couple of extra generators:

[requires]
poco/1.9.4

[generators]
gcc
cmake
txt

[options]
poco:shared=True
openssl:shared=True

[imports]
bin, *.dll -> ./bin # Copies all dll files from the package "bin" folder to my project "bin" folder
lib, *.dylib* -> ./bin # Copies all dylib files from the package "lib" folder to my project "bin" folder

The equivalent conanfile.py file is:

from conans import ConanFile, CMake

class PocoTimerConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    requires = "poco/1.9.4"  # comma-separated list of requirements
    generators = "cmake", "gcc", "txt"
    default_options = {"poco:shared": True, "openssl:shared": True}

    def imports(self):
        self.copy("*.dll", dst="bin", src="bin")  # From bin to bin
        self.copy("*.dylib*", dst="bin", src="lib")  # From lib to bin

Note that this conanfile.py doesn't have a name, version, or build() or package() method, as it is not creating a package. They are not required. With this conanfile.py you can just work as usual. Nothing changes from the user's perspective. You can install the requirements with (from the mytimer/build folder):

$ conan install ..

conan build

One advantage of using conanfile.py is that the project build can be further simplified, using the conanfile.py build() method.
If you are building your project with CMake, edit your conanfile.py and add the following build() method:

from conans import ConanFile, CMake

class PocoTimerConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    requires = "poco/1.9.4"
    generators = "cmake", "gcc", "txt"
    default_options = {"poco:shared": True, "openssl:shared": True}

    def imports(self):
        self.copy("*.dll", dst="bin", src="bin")  # From bin to bin
        self.copy("*.dylib*", dst="bin", src="lib")  # From lib to bin

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()

Then execute, from your project root:

$ conan install . --install-folder build
$ conan build . --build-folder build

The conan install command downloads and prepares the requirements of your project (for the specified settings) and the conan build command uses all that information to invoke your build() method to build your project, which in turn calls cmake. This conan build will use the settings used in the conan install, which have been cached in the local conaninfo.txt file in your build folder. This simplifies the process and reduces the errors of mismatches between the installed packages and the current project configuration. Also, the conanbuildinfo.txt file contains all the needed information obtained from the requirements: deps_cpp_info, deps_env_info, deps_user_info objects.

If you want to build your project for x86 or another setting, just change the parameters passed to conan install:

$ conan install . --install-folder build_x86 -s arch=x86
$ conan build . --build-folder build_x86

Implementing and using the conanfile.py build() method ensures that we always use the same settings both in the installation of requirements and the build of the project, and simplifies calling the build system.
Other local commands

Conan implements other commands that can be executed locally over a consumer conanfile.py which is in user space, not in the local cache:

- conan source <path>: Execute locally the conanfile.py source() method.
- conan package <path>: Execute locally the conanfile.py package() method.

These commands are mostly used for testing and debugging while developing a new package, before exporting such package recipe into the local cache.

See also: Check the section Reference/Commands to find out more.
https://docs.conan.io/en/1.44/mastering/conanfile_py.html
Provided by: blahtexml_0.9-1.1build1_amd64

OPTIONS
These programs follow the usual GNU command line syntax, with long options starting with two dashes (`-'). A summary of options is included below. For a complete description, see the online documentation.

--help
Show summary of options.

--texvc-compatible-commands
Enables use of commands that are specific to texvc, but that are not standard TeX/LaTeX/AMS-LaTeX commands.

--print-error-messages
This will print out a list of all error IDs and corresponding messages that blahtex can possibly emit inside an <error> block.

MATHML OPTIONS
These options control the MathML output of the blahtexml program.

--mathml
Enables MathML output.

--xmlin
This allows one to embed TeX equations in an existing MathML code, using a special notation. The equations are given as attributes (inline or block) in the namespace. Whenever blahtexml meets such an equation, it expands it into the equivalent MathML code. For more information, check the blahtexml manual.

--annotate-TeX
Produces TeX annotations in the MathML output.

--annotate-PNG
Produces PNG files and annotates the MathML output with the PNG file name.

SEE ALSO
The program is documented fully by the online manual available at:

AUTHOR
blahtexml was written by Gilles Van Assche. blahtex (whose superset is blahtexml) was written by David Harvey. This manual page was written by Abhishek Dasgupta <abhidg@gmail.com>, for the Debian project (but may be used by others).

March 17, 2010 BLAHTEXML(1)
https://manpages.ubuntu.com/manpages/bionic/en/man1/blahtexml.1.html
Hindu mythology also has many superstitions; some might have scientific reasoning behind them, many do not. (I know this because I am a Hindu myself.) Some examples are:

Cutting nails after sunset

Cutting nails at night (or after sunset) is bad luck. A variation of this is that one isn't allowed to cut their nails on Saturday. I believe that in the olden days, when electric lights didn't exist, people used to cut their nails with scissors(?!) and cutting their nails in the dark would result in massive pools of blood all over the place because people wound up cutting their fingers off or something. I do not know why Saturday is a restriction.

Taking clothes off the washing line at night

This is also supposed to be bad luck. The rationale behind this is that at night, the ghosts and demons are active. Some of them may be hiding in your clothes, and when you take them back into the house, you are effectively inviting them into your homes and allowing them to wreak havoc.

Hanging a lemon and a chilli on your front door

This practice is supposed to ward off evil. A string is pushed through the lemon and chilli and is tied (only at one end) around a nail over the door. The resulting contraption of dangling chilli and lemon is supposed to keep evil spirits out of your house (I do not know how; my mom never was able to explain that one). They can still get in if you do things to invite them, like whistling at night for example. Which brings me to the next superstition...

Whistling at night

This brings demons into your house. No explanations there. Just really angry aunties shouting at you telling you not to invite the demons over for dinner.

Staring at shadows at night

Shadows at night, you say! If there is some external light source, then you should be able to get shadows at night. Of course, if you looked at these shadows, you have inadvertently invited demons to your home. When I was a kid it seemed like almost anything I could do would invite demons home.
I think this superstition started because little kids could get nightmares looking at strange shadows before they slept at night. I know I did, after I heard about this superstition.

Returning to the house for any reason, just after you left it

Not only is this bad luck, but be prepared to waste a lot of time, depending on what your parents choose to believe. Many people believe that if you leave the house and immediately return (to pick up those car keys you forgot to take the first time around), you will either be KILLED on your journey, or have horrible luck until you return home. (Whew, at least demons don't get into your house this time, huh?) To avert certain disaster, there are a few things you can do. Some people believe it is enough to get back into the house, sit down for a minute and then walk out again. This would probably be enough to convince the demons who were out to destroy your car (or horse, or airplane) that you didn't really start on your journey the first time. (Hey, I said they weren't getting into your house. I never said demons weren't involved here.) Other people believe that when you get back in, you MUST drink a glass of water and THEN sit down. Still others believe that you should walk in and out of the house five times (thus confusing those pesky demons. Now they won't know whether you're coming or going). And finally there are a few people who believe a combination of all these methods is a good way to ward off the demons.

Leaving your hair open on a full moon night

I believe that this applies to girls (mostly). So you thought you were smart by hanging that lemon-chilli contraption over your door, eh? You forgot about your daughters/sisters/wives walking around with their long tresses flowing over their shoulders at night! The ever-present demons will be seduced by their hair, and climb into the hair, and enter your houses that way!
I believe THIS superstition came up because mothers did not want their daughters to attract extra attention from boys, and on a full moon night, there would be more visibility than on nights without a moon (remember, we are talking about a time before street lights). I think I also have to mention that open hair is considered to attract boys/men in Indian culture (I feel that this is a given, but I still thought I should explain it).

Too much flattery

This is bad for the one who is receiving the praise. The idea behind this is that the demons will get jealous, and then decide to hurt you out of envy. So if someone keeps telling you how good you look, then you have to touch your temples with your knuckles to remove the "kaala nazar" (literally translated that means "black sight", but it basically means the 'evil eye').

Staring at the moon during Ganesh Chaturthi

I know this doesn't apply to everyone, but it has some interesting roots, so I wanted to add this to my list. Ganesh Chaturthi is a festival celebrating the birth of Lord Ganesh. On one of his birthdays, Lord Ganesh tripped and fell while he was dancing on his flying rat.. (yes. I said it. Flying rat. Or mouse if you would prefer. I'm really not making this up. Lord Ganesh himself was a boy whose head was chopped off and replaced with the head of an elephant. Try explaining Hindu mythology to little kids and see how many of them you can terrify.) Anyhoo, when the moon saw this, he laughed at Lord Ganesh. Not a smart thing to do, laughing at a God on his birthday. So Lord Ganesh cursed the moon, saying that no one would ever look at the moon on his (Lord Ganesh's) birthday. So if you DID look at the moon, you would incur the Wrath of God.

There are many others, and I will add the interesting ones as and when I remember them, and if anyone knows any others, they can message me or add them here themselves.
http://everything2.com/title/Superstition?author_id=1839117
Thank you guys for your answers. I am now editing the question after considering all your solutions, but I am still getting the error. I am using support v4 now. Here is the result: I keep getting the same error in the switch-case block of code.

These are my current imports:

import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentManager;

Here is a new error I am getting after I changed the imports again, guys.

Remove all Fragment imports from the class and import android.support.v4.app.Fragment again.

The problem is that the fragment you are using is of type android.support.v4.app.Fragment and the method requires a fragment of type android.app.Fragment. So go to your fragment, remove the import statement import android.support.v4.app.Fragment and add import android.app.Fragment. It should work.

The activity you are using the fragment in must extend FragmentActivity, and in place of getFragmentManager you should use getSupportFragmentManager:

public class YourActivity extends FragmentActivity {
}

The fragments also must be of type android.support.v4.app.Fragment.

There are two ways we can use fragments:

1. By importing FragmentManager and Fragment from the support v4 library, to support devices on lower OS versions.
2. By importing them from the standard Android API.

Here you are trying to use a Fragment from one and a FragmentManager from the other. Hence you should use both from the same one.
http://www.devsplanet.com/question/35269059
In the previous articles on the Postman Tutorial, we have covered "How To Fix Common Errors In Postman". In this "GUID in Postman" article, I will be demonstrating how you can implement this concept and get a tight grip on it.

What is GUID?

GUID stands for Globally Unique Identifier. It is basically hexadecimal digits separated by hyphens. A GUID serves the purpose of uniqueness. In Postman, we use it to generate and send a random value to APIs. It can be generated using online tools or manually. One online tool to generate GUIDs is:

Structure:

A GUID is a 128-bit value. It follows the structure defined in RFC 4122. The basic structure is:

xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx

where M defines the version and N defines the variant.

For example, b3d27f9b-d21d-427c-164e-7fb6776f87b0 is a version 4 GUID.

There are various versions defined for GUIDs:

Version 1: Date-time and MAC address are used
Version 2: Uses DCE security
Version 3: MD5 hash and namespace are used
Version 4: Generates random digits to create a GUID
Version 5: SHA-1 hash and namespace are used

Advantages:

- A unique value is generated every time. Hence, there is little or no probability of duplicate values.
- GUIDs can be generated manually.
- A GUID can be used as a primary key in a database.
- GUIDs are used when we have multiple independent systems.

Disadvantages:

- Takes a lot of space.
- Sorting can't be performed to arrange data in a particular order.

Next steps: Learn "API Documentation" in the next tutorial.

Author Bio: This article is written by Harsha Mittal, an ISTQB-CTFL Certified Software Test Engineer having 3.5+ years of experience in Software Testing.
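The M/N layout described above is easy to verify with a short script. This is plain Python's standard uuid module rather than Postman itself, shown only to illustrate the version 4 structure:

```python
import uuid

# Generate a random (version 4) GUID using Python's standard library
guid = uuid.uuid4()
s = str(guid)

print(s)             # 32 hex digits in 8-4-4-4-12 groups, e.g. 1b4e28ba-2fa1-4d3b-...
print(s[14])         # the 'M' position from the structure above: always '4' here
print(guid.version)  # 4
```

Running this a few times shows a different value every time, which is exactly the uniqueness property the article describes.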
https://www.softwaretestingmaterial.com/guid-in-postman/
14 March 2012 04:16 [Source: ICIS news]

SINGAPORE (ICIS)--The plant will be able to produce 100,000 tonnes/year of 2-ethylhexanol (2-EH) and 50,000 tonnes/year of butanol, said the statement.

The plant will supply feedstock to Shandong Bluesail's plasticiser units and thus reduce the company's production costs, the statement added.

Shandong Bluesail's sales revenue is expected to rise by 20.5% year on year to yuan (CNY) 4.71bn ($744m) in 2012, largely because of the start-up of the oxo-alcohol plant, according to the statement.

Construction of the plant, which will cost Shandong Bluesail CNY1.31bn, began in 2010, the statement said.

The company has a 60,000 tonne/year phthalic anhydride (PA) plant and a total plasticiser capacity of 400,000 tonnes/year, according to Chemease, an ICIS service.

Shandong Bluesail Chemical is a key plasticiser producer
http://www.icis.com/Articles/2012/03/14/9541293/chinas-shandong-bluesail-chemical-to-start-up-oxo-alcohol.html
ASP.NET Validation server controls allow you to easily validate any data a user has entered in a Web form. These controls support such validations as required fields and pattern matching, and they also make it easy to build your own custom validations. In addition, Validation controls allow you to completely customize the way error messages are displayed to users when data values don't pass validation.

Validation controls are similar to Web controls (see Day 5, "Beginning Web Forms"). They're created on the server, they output HTML to the browser, and they're declared with the same syntax:

<asp:ValidatorName

The difference is that these controls don't display anything unless the input fails validation. Otherwise, they're invisible and the user can't interact with them. Thus, a Validation control's job is to watch over another server control and validate its content. The ControlToValidate property specifies which user input server control should be watched. When the user enters data into the watched control, the Validator control checks the data to make sure it follows all the rules you've specified (see "How Validation Controls Work" later today).

Table 7.1 summarizes the types of predefined validation controls offered by ASP.NET. You'll use all of these controls as you progress through today's lesson. They all belong to the System.Web.UI.WebControls namespace.

Unfortunately, Validation controls will only validate input from a subset of ASP.NET server controls. Most of the time, these controls will be more than enough to validate all user input. The controls that can have their input validated by ASP.NET's Validation controls are:

HtmlInputText
HtmlTextArea
HtmlSelect
HtmlInputFile
TextBox
ListBox
DropDownList
RadioButtonList

Note that the Validation controls are supplements to the controls listed. They don't allow user input themselves. To create a Validation control on an ASP.NET page, simply add the proper Validation control tag to your Web form.

Each Validation control that you add has a specific job: watching over one other server control. When a user enters any information into that server control, its corresponding validator scrutinizes the input and determines if it passes the tests that you've specified. All this is accomplished without any intervention or extra code from the developer. Validation controls work on both the client and server. When a user enters data into a Web control and it violates the validation rule you've specified, the user will be alerted immediately. In fact, ASP.NET won't allow a form to be posted that has invalid information! This means you don't have to send information back and forth from the server to provide validation; it occurs on the client side automatically. This provides a large boost in performance and a better experience for the users.

Note: Client-side validation only occurs on browsers that support DHTML and JavaScript.

When the form is finally posted to the server, ASP.NET validates the inputs again. This allows you to double-check the data against resources that may not have been available on the client side, such as a database. You don't need to validate on the server again if you trust the client-side validation, but it can often help in situations that require more complex validation routines.

Although much of this functionality comes automatically, there are a lot of things that must be accomplished behind the scenes in order for these controls to work. Listing 7.3 is a simple example.

1: <%@ Page
4: sub Submit(Sender as Object, e as EventArgs)
5:    if Page.IsValid then
6:       lblMessage.
13: <asp:Label<p>
14:
15: Enter your name:
16: <asp:TextBox<br>
17: <asp:RequiredFieldValidator<p>
20:
21: <asp:Button
24: </form>
25: </body></html>

Save this file as listing0703.aspx and view it from your browser. Before looking at what goes on backstage, let's take a tour of the page itself.

In the HTML portion of the page, you see four server controls: Label (line 13), TextBox (line 16), RequiredFieldValidator Validation control (lines 17 through 19), and a Button control (lines 21 through 23). RequiredFieldValidator is declared just like any other control. It contains a control name and runat="server". The ControlToValidate property specifies which server control this validator should watch over (tbFName in this case). If any data is entered into the tbFName text box, the RequiredFieldValidator is satisfied. This control only checks to see if a field contains data.

Every Validation control has an IsValid property that indicates if the validation has passed. In this case, if the user enters any data, this property is set to true. Otherwise, it's false. When the form is posted, the Submit method on line 4 is executed. The Page.IsValid property makes sure all Validation controls on the page are satisfied. You could also check each individual control's IsValid property, but this way is faster. Lines 5-7 display a message to the user if all of the Validation controls in the page are satisfied with the user input.

Try clicking Submit without entering anything in the text box. You should see what's shown in Figure 7.2. What happened? The Validation control on line 17 checks if its dependent control has any data in it. Since you didn't enter any information, the validator stops the form from posting and displays the message you specified in the ErrorMessage property on line 19. The server doesn't get to see the user input at all. Try entering some information into the text box and moving out of the element. The error message disappears automatically! Erase the text, and the error message appears again. Each element is validated as soon as the focus leaves it. The dynamic appearance of the error message is only supported on browsers that support dynamic HTML (Internet Explorer and Netscape 4+).

Let's take a look at the HTML produced by your ASP.NET page. Right-click on the page and select view source. Listing 7.4 shows a portion of the HTML source.

1: ...
2: <form name="ctrl1" method="post" action="listing0702.aspx"
3:
9: </script>
10: ...
11: ...
12: <span controltovalidate="tbFName"
13:
16: First name required</span><p>
17: ...
18: ...
19: <script language="javascript">
20: <!--
21: ...
22: ...
23: function ValidatorOnSubmit() {
24:    if (Page_ValidationActive) {
25:       ValidatorCommonOnSubmit();
26:    }
27: }
28: // -->
29: </script>
30: ...
31: ...

This page has a lot of content, so let's just examine the important parts. On line 2, you see your standard form tag. However, notice that when this form is submitted (that is, when the Submit method is raised), the form executes the ValidatorOnSubmit function instead of posting directly to the server. This function, located on line 25, determines if validation is enabled for the page (which it is, by default) and executes another function that performs the validation processing (see "Disabling Validation" later today for more information).

On line 8, you see that your page includes a reference to the WebUIValidation.js JavaScript file, located on the server. This script, automatically generated by ASP.NET, contains all the necessary client-side DHTML functions to display dynamic error messages, validate the input, and post the data to the server. This file is quite large, so we won't examine it here; it's usually located at c:\inetpub\wwwroot\_aspx\version\script.

On line 12, you see the HTML output of the Validation server control: a <span> element. It contains a custom attribute, evaluationfunction, that tells WebUIValidation.js what type of validation to perform. Also, this span is set to "hidden," which means the user can't see it; that is, until the validator functions have their say. This is a complicated process, but you don't have to build any of it because ASP.NET handles everything automatically.

When this client-side script executes and evaluates each Validation control on the page, it sets a parameter for each based on the outcome of the validation. These parameters are sent to the server for validation again. Figure 7.3 illustrates this process. Recall that client-side validation only occurs on browsers that support DHTML. Additionally, if a client has JavaScript disabled, client-side validation won't occur.

Why does the server need to validate the information again? Suppose that you need to validate some of the information against a database that resides on the server. Or that you want to prevent a user from submitting a form more than once. Sometimes the extra validation might be necessary, and the added precaution can't hurt.

When the server receives the form data, it checks the state of each Validation control (through the IsValid property). If they've all been satisfied, ASP.NET sets the IsValid property for the Page object to true. Otherwise, it's set to false. Even if it's false, execution of your code will still continue. The code will be executed whether or not the input was validated. Therefore, it's up to you to stop your code from executing if the data is invalid. This is usually done with a simple check to the Page.IsValid property.
CC-MAIN-2020-45
refinedweb
1,525
57.47
Spring Framework Tutorial Section: Spring 3 | Spring 3.0 Features | Spring 3 Hello World | @configuration annotation in Spring 3 | Introduction

- Spring Framework tutorial for file upload in Spring: Is there a tutorial available for uploading a file using the Spring framework? The example in the Spring reference uses... interface. How do I work with it? I am totally new to Spring, can somebody help me?
- Tutorial: suggest me some good Spring material for a beginner.
- Spring Annotation Tutorial: In the Spring Framework you can use annotations to write java classes such as Controllers easily. For example, to make any java class a Controller you can simply write @Controller before the class declaration.
- The Complete Spring Tutorial: In this tutorial I will show... In this Spring tutorial series we will learn the Spring... Framework.
- Video tutorial - Spring 4 MVC Hello World Tutorial for Beginners: The Spring Framework is an open source Java platform through which... for the Java platform. Spring Framework was first written by Rod Johnson... Since then, many advanced versions of Spring were released, namely Spring Framework 1.0...
- Spring: In PART2 of the Spring tutorial it says "Add entry to web.xml file". Can you tell me what I should enter? Here is my web.xml file: servletclient servletclient 1 servletclient
- Hibernate Spring Integration: In this tutorial we will discuss how to integrate Hibernate with the Spring framework.
- Spring Framework Install: a quick tutorial to install Spring... the latest version of the Spring framework (spring-framework-2.5.1-with-dependencies.zip).
- Spring Set Property: The Spring Framework has bean support for Collections. It provides list, set, map and props elements. Here in this tutorial you will see the set element, which is used to set values inside the set.
- Spring AOP Advice Ordering: Advice ordering is required when you use more than one advice in your application. Spring AOP follows some precedence rules... System.out.println("Have a nice Day"); } @Override public void sayHi
- Spring Collection Merging: The merging of collections is allowed in Spring 2.0. A parent-style <list/>, <map/>, <set/> or <props/> element can have a child-style <list/>, <map/>, <set/> or <props/> element.
- Spring Interceptor Example: An example of an interceptor is given below that prints all the log information on the console. To use an interceptor in your application, write a class that extends...
- spring: sir, how to access multiple JSPs in Spring?
- jQuery Tutorial: jQuery is a cross-browser JavaScript library created by John Resig in 2006 with a nice motto: "Write less, do more". It handles the client-side scripting of HTML. jQuery simplifies...
- Spring Batch Example: JDBC Template batch update example. In the tutorial we have discussed the batchUpdate() method of the JdbcTemplate class in the Spring framework.
- Spring Map Example: In this example you will see how a bean is prepared for injecting Map collection type keys and values. MapBean.java: package spring.map.example; import java.util.Iterator
- Spring Bean Example, Spring Bean Creation: The Spring bean can typically be a POJO (Plain Old Java Object) in the IoC container. Here in this tutorial you will see at first a simple...
- Spring p-namespace: In this tutorial you will see the use of the p-namespace in the Spring framework. In Spring generally we have nested <property/> elements.
- Spring Download - Downloading and Installing Spring Framework: the latest version of the framework at the time of writing this tutorial is Spring 2.5.1, which is available...
- spring: i am a beginner in java.. but i have to learn the spring framework.. i know the core java concepts with some J2EE knowledge.. can i learn spring without knowing anything about struts?
- Excluding filter in Spring (Exclude-Filter in Spring): In this tutorial you will learn how to exclude a specified component; if you want Spring to avoid detection and inclusion into the Spring container, exclude those files with the @Service annotation.
- Spring: What is the AOP concept in Spring? How does the AOP concept differ from the DI concept?
- spring: sir, can you explain me the flow of a sample example by using Spring? thanks
- Spring JDBC Introduction: Spring's DAO (Data Access Object) makes it easy... catching exceptions that are related to each technology. This tutorial provides a brief introduction to Spring DAO JDBC.
- Spring Handling Form Request: With the help of this tutorial we are going to explain the flow of form data in Spring MVC. For running this tutorial you need... spring-asm-3.0.3.RELEASE.jar, spring-beans-3.0.3.RELEASE.jar...
- Calling Constructor in Spring: ...how to call a constructor in Spring. Declaring constructor injection in the Spring framework is generally done in the bean section of the configuration.
- Spring Exception Handling: In any web application it is always recommended... the mapped error page will be directly rendered by the servlet container... how to handle exceptions in Spring: 1. Write an ExceptionHandler class that extends...
- Java Spring Framework Tutorial: The Spring framework is a Java platform developed by the SpringSource company and used to develop robust Java applications. Java developers use Spring to create modular, portable and testable applications.
- Spring 3.2 MVC, Upload File in a specific folder: In this Spring 3.2 MVC tutorial, you will learn about uploading a file into a specified folder.
- Spring filter Bean: The auto-scan property can be gained through @Component... the components in the Spring framework. StudentDAO.java
- spring: package bean; public interface AccountsDAOI{ double... normally. i set the classpath=D:\java softwares\ST-IV\Spring\spring-framework-2.5.1\dist\spring.jar;D:\java softwares\ST-IV\Spring\spring-framework-2.5.1\lib\c
- Spring Map Factory, Spring Map Configuration: The MapFactoryBean is a simple factory for a shared Map instance. The map element is defined in the XML bean definitions (setSourceMap).
- Spring @Required Annotation: In this tutorial you will see about the Spring @Required annotation.
- Spring: I understand Spring as dependency injection. It can avoid object creation and can directly inject values. But I am confused about how... are created. In the same way I want to know how Spring injects a property.
- Spring Setter Injection: In the Spring framework, setter injection is used to inject a value into an instance variable from the XML file without hard coding.
- Spring 3.0 Tutorials with example code: In this Spring 3.0 tutorial you will learn Spring 3.0 with the help of example code. The Spring 3.0 tutorial explains the different modules.
CC-MAIN-2014-52
refinedweb
1,194
55.64
This week, we saw the release of Node.js v12, the next Node.js release line that will become LTS. I wanted to go through the various posts that went out and the changelog and condense the information into an easily consumable digest of what's new in Node.js v12.x to share with everyone. 💖

The 🔥 Changes

Let's dig into some of the most important and remarkable changes that have landed in v12.0.0!

New ES Modules, who dis

With the release of Node.js v12.0.0, we see the introduction of a new implementation of ES Modules in Node.js. 🎉

Note: ES Modules features are still Experimental and as such should not be used in production code until they are finalized.

At release, this new implementation has replaced the previous implementation behind the --experimental-modules flag. This is intended to help get the new implementation out there and tested so the project can get feedback. If all goes well (🤞), this can ship unflagged once Node.js v12 goes LTS in October!

Up front, I want to say this is going to be a tl;dr. If you're interested in a deeper dive into the new hotness around ESM in Node.js, please check out the blog post by the Modules Team on Medium.

Previous implementation

Many of the previous implementation's features carried over. This includes ES2015 import statements, various kinds of export, Node.js export support on all core modules, WIP imports for CommonJS, very WIP loader API, and explicit ESM parsing if the .mjs file extension is present.

New implementation features

These features are 100% new with the enhancements the Modules Team has been working on, and are available behind the --experimental-modules flag in Node.js v12.0.0.

- Import and export syntax in .js files - there was lots of feedback that Node.js needs to provide a way to use import/export in .js files.
  - Two different solutions were implemented for this (keep reading!)
- Support for "type": "module" in package.json
  - If this is detected, Node.js will treat all .js files in your project as ES Modules.
- If you still have CommonJS files, you can rename them with the .cjsfile extension, which will tell Node.js to parse them as CommonJS explicitly - An --input-typeflag for cases like --evaland STDIN Current WIP Features These features are currently being worked on by the Modules team and are either implemented but are likely going to change or are being worked on but did not ship in Node.js v12.0.0. - JSON imports - Currently does not work, but is being actively worked on. - import and require interop - ️️⚠️ The Modules Team has requested that you do not publish ES Modules that can be used in Node.js until it's been resolved. I assume that modules published before this is resolved will likely break. - Module Loaders - ⚠️ Very WIP - A first implementation of the --loaderAPI has shipped, but it's going to be improved upon and, as such, change. - A simpler way to requirein ES Modules code. - The current implementation is a bit heavy-handed. The Modules team is working on lowering the barrier. - Package path maps - This would allow for less verbose imports in certain situations - Automatic entry point module type detection - Effectively, static analysis that would allow Node.js to figure out if a module is a CommonJS module or an ES Module. Quick ESM Examples If you're interested in seeing what ESM in Node.js looks like, you can check out two repos I pushed out yesterday: - simple-esm – an example of what ESM in Node.js looks like with the current ESM implementation - simple-esm-usage – an example of how you could use ESM modules from npm in Node.js if the current implementation were to ship unchanged (it's going to be changing, so this is more theory than practice) I'm planning to keep these repos (and the version of simple-esm published to npm) both up-to-date as the ESM implementation changes both for my own understanding and as a community resource to have a minimum viable example of ESM in Node.js. 
V8 7.4

This release included a major V8 upgrade, jumping forward several versions to the most recent V8 available at the time of release. This upgrade includes a plethora of really fantastic enhancements. I'm personally most interested in Zero-cost Async Stack Traces, but there are many additional enhancements that are better outlined by Mathias Bynens from the V8 team:

TLS 1.3

Next up, we have official TLS 1.3 support. This is an incredible improvement over previous TLS versions, and I'm super excited that it's now supported in a release line that'll be going LTS! Thankfully, this is a backward-compatible change thanks to the underlying implementation in OpenSSL 1.1.1. Additionally, it's mentioned in the PR that it should be backported to other LTS release lines. If you're curious about the awesome parts of TLS 1.3, I recommend this blog post from the IETF.

Worker Threads

This is the first LTS release line that will include the currently-experimental work on Worker Threads. This release has removed the need to run Worker Threads with a flag, hopefully lowering the barrier to more widespread usage of the tool for parallelizing work in Node.js. If you're interested in trying out Worker Threads today, there are a few resources you can use to get started:

- Using worker_threads in Node.js
- Simple bidirectional messaging in Node.js Worker Threads
- Node.js multithreading: What are Worker Threads and why do they matter?
- Official Node.js Worker Threads Docs

Built-in Heap Snapshotting

In this release, we also see built-in heap snapshotting adapted from the heapdump module on npm. This is exposed via v8.getHeapSnapshot() and returns a readable stream.
Other Notable Changes and Improvements

- Core Dependencies:
  - Upgraded to OpenSSL 1.1.1b (nodejs/node#26327)
  - Upgraded to ICU 63 (nodejs/node#25852)
    - There is also currently an open PR to further update to ICU 64.2
- Node.js has started using llhttp as its default parser (nodejs/node#24730)
- Invalid main entries in package.json will now throw an error (nodejs/node#26823)
- node --debug is now EOL – use node --inspect instead (nodejs/node#25828)
- TLS 1.0 and 1.1 are now disabled by default (nodejs/node#23814)

Fin

Hopefully this overview of the new release is helpful to you! If you've got any questions about the new features that've shipped, when you can start expecting to use ESM in Node.js, or anything else about Node.js v12, I'm happy to be a resource for you to hopefully find the answers you're looking for!

Discussion (2)

Worker threads, upfront, sound like the most useful feature. Especially in more event-driven type node apps that publish events to an event store, etc., this would (I'm assuming without knowing too much about them) be a huge win.

I am extremely excited for Worker Threads. It's something there have been multiple attempts at previously – both in Node.js itself and in user land modules – but none have been as promising as this implementation. Massive props to Anna Henningsen, who did a truly massive amount of work on this feature!
https://dev.to/bnb/the-awesome-features-that-just-landed-with-node-js-v12-178d
CC-MAIN-2021-49
refinedweb
1,228
65.62
On Fri, Jul 11, 2008 at 11:37 AM, Ben Collins-Sussman <sussman_at_red-bean.com> wrote:
> Given that we've allowed a user to maintain a branch for this feature
> for *years* -- without allowing him to merge the feature -- our
> actions speak louder than words!
>
> I'm fine for reopening the debate; we should first examine why
> exactly we've kept this branch in stasis for so long. If we don't
> like the design, what would a better design be?

Having used other version control systems that have this feature, I always thought Subversion should have it too. My problem with this has always been the design. I do not think you can do this feature properly on top of Subversion's repository today.

Using a versioned property sucks. That means you have to carry around the property information in the working copy, and in theory every commit needs to update it. This fouls log, diff, etc. In theory revision properties could be used, but do we really want an import of 10,000 files sticking all of that data into revision properties? In addition, we do not have any great ways to access that data, and it would add a lot of extra overhead during checkout/update operations to go find this information in the revision properties.

To me, both of these problems are also why it is bogus to tell users this is something they can do themselves using scripts or their own clients. That really is not true, because we do not provide an adequate way to store this information.

I think this should be on our list for 2.0 so that there could be a way to accommodate this information in the repository design. I think the repository should gather and store this information regardless of whether the user wants it or not. The bit that should be exposed to the user as an option would be whether or not to update the mtime in the working copy on checkout/update. This is more or less how the tools that have this feature seem to do it.
--
Thanks

Mark Phippard

This is an archived mail posted to the Subversion Users mailing list.
http://svn.haxx.se/users/archive-2008-07/0607.shtml
CC-MAIN-2014-41
refinedweb
387
63.09
Flashcards Preview

FIN3IPM Exam Preparation

The flashcards below were created by user gecalder on FreezingBlue Flashcards.

The annual returns on an investment in each of four years are 7%, 17%, -2.4% and 2%. What does the equation to calculate the geometric return look like?

[(1 + 0.07)(1 + 0.17)(1 - 0.024)(1 + 0.02)]^(1/4) - 1 = 0.0566, or 5.66%

A bill with 90 days to maturity initially has a yield of 8% p.a. and a face value of $100 000. This bill is held for 45 days and sold as a 45-day bill at a yield of 6%. What is the continuously compounded holding period rate of return over the 45 days?

The initial price would be 100000/[1 + (90/365) x (8.0/100)] = $98065.56. The price for the 45-day bill would be 100000/[1 + (45/365) x (6.0/100)] = $99265.71. Hence, the continuously compounded holding period return would be ln(99265.71/98065.56) = 0.0122, or 1.22% continuously compounding over the 45 days.

Suppose a two-year 9% p.a. bond with a face value of $100 000 has a yield of 8% p.a. The price of this bond is $101814.95 and its duration is 3.7515. Assume the yield on this bond decreases instantaneously from 8% p.a. to 7.75% p.a. What would be the expected increase in the bond price?

- {[(0.0775/2) - (0.080/2)]/(1 + 0.080/2)} x 3.752 x 101814.95 = $459.15 (a price increase, since the yield has fallen).

Calculate the beta for an asset with a variance of 10%, where the market has a variance of 15% and a covariance with the asset of 20%.

β_i = cov_im / σ²_m = 0.2/0.15 = 1.33

The three factors that appear to be most relevant when testing the APT relate to:
A. unexpected changes in interest rates, inflation and economic growth
B. expected changes in interest rates, inflation and economic growth
C. expected and unexpected interest rates and economic growth
D. expected and unexpected interest rates and inflation

Despite a large volume of work, there is no clear consensus as to which particular variables are relevant pricing factors.
For instance, Cho, Eun and Senbet (1986) test the APT at the international level for 11 industrial economies and report between one and five factors. However, in the main, variables related to (unexpected) interest rate structures, inflation and economic growth appear to be relevant most frequently.

What are the three types of market efficiency?

Fama (1970) originally proposed a three-way classification of market efficiency based directly on information. Weak form efficiency referred to past price information, semi-strong form efficiency referred to public information and strong form efficiency referred to all information.

The hypothesis that argues that there is a downward price pressure at the end of the tax year on shares that have experienced recent price declines as investors attempt to sell in order to realise capital losses is the:
A. tax-loss selling hypothesis
B. price rebound hypothesis
C. share price declines hypothesis
D. tax on capital gains hypothesis

A. The tax-loss selling hypothesis has been put forward as an explanation of the seasonality in the size premium in the USA and extended to other monthly seasonals in other markets, such as July in Australia. The hypothesis argues that there is downward price pressure at the end of the tax year on shares that have experienced recent price declines, as investors attempt to sell in order to realise the capital loss.

An index-linked bond is one where the price is linked to the:
A. All-Ordinaries
B. Consumer Price Index
C. world equity market index
D. industrial production index

B. While inflation will affect the discount rate, in general it has no effect on the coupon payments, though notable exceptions include floating rate bonds and index-linked bonds, where the coupons are indexed to an inflation benchmark such as the Consumer Price Index.
Shares that are classified as __________ have a high book-to-market ratio.
A. value shares
B. growth shares
C. blue-chip shares
D. green-chip shares

A. Shares with a high book-to-market ratio are classified as value shares.

Research from Arthur, Cheng and Czernkowski (2010) reports that models of future earnings that use disaggregated cash flows based on components of firm cash flows produce:
A. better forecasts of future earnings than methods based on aggregate net cash flows
B. earnings forecasts that will not help the firm's directors in the decision-making process
C. earnings numbers comparable in quality to those of only the best analyst
D. capital structure changes that could potentially affect cash flow

A. Research on earnings predictability from Arthur, Cheng and Czernkowski (2010) reports that disaggregated cash flow models based on components of firm cash flows produce better forecasts of future earnings than methods based on aggregate net cash flows.

What the GFC taught advisers and __________________ in the most powerful and painful way possible was that _________________ is/are the most important factor/s for investors.
A. self-directed investors; asset allocation
B. bankers; bottom-up analysis
C. governments; top-down analysis
D. analysts; a combination of fundamental and technical analysis

A. What the GFC taught advisers and self-directed investors in the most powerful and painful way possible was that asset allocation is the most important factor for investors. In 2008 what determined whether you had a good or bad year was the percentage you had in each asset class.

__________ is an approach to manage bond portfolios by choosing portfolios with duration matching the duration of liability cash flows.
A. Immunisation
B. Hedging
C. Cash matching
D. none of the above

A.
Immunisation is a method of management of bond portfolios that involves choosing portfolios with duration that matches the duration of liability cash flows, and thus immunises these cash flows from changes in interest rates. This approach ensures sufficient cash flows to meet liabilities when they fall due. As with cash matching, the cash flows may be modelled as either certain or uncertain (that is, stochastic).

A criticism of Jensen's alpha is that:
A. it can only be applied to individual shares
B. it can only be used on portfolios
C. it is strongly influenced by outliers and fails to measure consistent performance
D. none of the above

C. As Jensen's alpha assumes that the CAPM is the appropriate benchmark, to the extent that the validity of the CAPM is questioned, so too is the validity of Jensen's alpha. The measure relies upon an estimate of beta that may be problematic. Further, the measure is claimed to measure only depth and not breadth: for instance, it cannot distinguish a fund manager who makes small abnormal returns on a large number of stocks from one who makes a large abnormal return on only a few. The measure also equally weights superior and inferior performance. Arguably, investors are more concerned with underperformance than overperformance.

What are the key strategies that a company can use to respond to a threat to its competitive position in an industry?

Low-cost strategy: seeks to be the low-cost leader in its industry. The firm must still price near the industry average, so it must still differentiate; too much discounting erodes superior rates of return.

Differentiation strategy: identify something unique in the industry that is important to customers. An above-average rate of return only comes if the price premium exceeds the extra cost of uniqueness.

Briefly describe what is meant by passive and active bond portfolio management. Give examples of passive and active management techniques.
Passive management can be divided into buy-and-hold strategies (based on a traditional diversification approach where the investor chooses a portfolio so that overall portfolio risk is reduced by including a wide variety of assets, buys them and holds them), such as index tracking (a portfolio is formed that closely tracks price changes in a chosen index), and cash flow generation strategies, such as cash matching, which involves the acquisition of a portfolio of bonds that produce cash flows that match those of an underlying liability.

Active management may include highly speculative investment strategies such as 'plunging' into bonds which are believed to be mispriced, or taking positions on the basis of expected yield curve changes from expectations about information releases or perhaps changes in government policies. It could also include some combination of speculative trading and passive strategy such as immunisation, which is designed to protect the initial investment while speculative strategies are undertaken. Examples include 'riding the yield curve' and 'bond swapping'.

Briefly discuss the key problems that arose when people attempted to test the CAPM using actual data and statistical methods.

The CAPM is an ex-ante model: do the ex-post returns that are typically used adequately approximate investors' expectations, e.g. expected returns? Which returns should be used in estimation: arithmetic, geometric or continuously compounded?

Identification of the index: Roll's critique.

Beta estimation: in practice, we encounter thin and infrequent trading in the market. The CAPM assumes betas are stable over time, but they may not be. Estimation over longer periods may be better statistically, but it is less likely that beta remains stable as the estimation window increases.

Survivorship bias: failed companies are excluded, which induces a bias in returns and betas.

Because of the problems of the CAPM, the APT model was proposed as an alternative.
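The beta estimation discussed above can be made concrete with a small numerical sketch. The return series here are hypothetical (constructed so the asset moves exactly twice as much as the market), and the covariance/variance estimator is the standard sample one:

```javascript
// Hypothetical monthly returns; in practice these would come from market data,
// and the beta estimate changes with the sample window chosen.
const marketReturns = [0.01, -0.02, 0.03, 0.04, -0.01];
const assetReturns  = [0.03, -0.03, 0.07, 0.09, -0.01]; // = 2 * market + 0.01

const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Sample covariance of two equal-length return series.
function covariance(xs, ys) {
  const mx = mean(xs);
  const my = mean(ys);
  let s = 0;
  for (let i = 0; i < xs.length; i++) s += (xs[i] - mx) * (ys[i] - my);
  return s / (xs.length - 1);
}

// Beta, as in the earlier flashcard: beta_i = cov(i, m) / var(m).
const beta = covariance(assetReturns, marketReturns) /
             covariance(marketReturns, marketReturns);

console.log(beta.toFixed(2)); // 2.00 for this constructed series
```

With real data, thin trading and window choice would make this estimate noisy, which is exactly the estimation problem described above.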
How does the APT model overcome the shortcomings of the CAPM? What are the shortcomings of the APT model? The APT appears to be less restrictive and empirically testable. However, there is no theoretical basis for choosing the factors to be included in the estimation. Selection of empirical factors (as in Chen, Roll and Ross 1986) claimed to be subjective. What are money market securities? Money market securities involve an initial cash flow from the buyer (lender) to the seller (borrower) with repayment of the loan, including interest at maturity. These might be discount bonds where the face value of the bond is paid at maturity or bullet bonds where the face value plus interest is paid at maturity. The zero coupon bond is probably one of the simplest forms of debt traded in the Australian money market. Other examples include commercial bills, Treasury notes and promissory notes. What is a bank-accepted bill? A bank-accepted bill is a type of bill of exchange endorsed by a bank. This avoids the difficulties of assessing the risk of the ultimate borrower because the risk of the bill is directly related to the risk of the bank that accepted or endorsed the bill What are the four basic theories of term structure? Expectations theory Liquidity premium theory Segmentation theory Preferred habitat theory These theories are concerned with the relationship found between yield and time to maturity for assets, identical in all but time to maturity What is the expectations theory? The Expectations Theory suggests that longer maturity yields are a function of the current yield and expected future yields. In this theory the expected future yields are unbiased predictors of realised future yields. Investors are assumed to care only about the expected return from the security. For example under this theory the yield on a six-month bill could be equated with a series of two consecutive three-month investments. What is the liquidity premium theory? 
In this theory an additional term is added as an adjustment for risk, to compensate a short-term investor for holding a security with a term to maturity that does not match the investor's preferred investment horizon. In this theory it is assumed the majority of investors prefer shorter-term bonds and thus longer-term yields exhibit a liquidity premium, which tends to increase the longer the time to maturity. In this case a premium is required to encourage investors to buy an 18-month bond instead of a 12-month bond. In effect, the liquidity premium drives a wedge between the short rates and the long rates, providing one reason for the failure of the expectations theory to adequately explain or predict future interest yields.

What is the segmented market theory?

The segmented market theory is consistent with the observed concentration of investors in particular segments of the term structure. Risk aversion results in market participants only operating at that position on the yield curve that most suits their business needs. This grouping of participants could lead to quite separate markets operating at different positions on the yield curve, with substantial risk premiums required to encourage participants to move out of their optimal segment of the market. The use of the yield curve for forecasting purposes is limited in this model, as expected future yields may bear little resemblance to the relationship between currently observed yields suggested by the expectations hypothesis.

What is the preferred habitat theory?

In this theory participants match asset life and liability life to establish the lowest possible risk position. Substantial premiums may be required to encourage participants to invest in other than the preferred habitat, and so risk premiums would tend to be greatest where demand is least.
This theory is similar to the liquidity premium theory, but it allows both positive and negative premiums to exist rather than the positive premium predicted by the liquidity premium theory. The premium is set by supply and demand in particular interest rate habitats. If there is borrowing pressure (lenders selling bonds) the rates will rise (prices will fall). If there is investing pressure (investors buying bonds) the rates will fall (prices will rise). Again, the predictive power of the term structure is under some doubt with this theory.

True or false: a $100 000 90-day bill with a yield of 8% is purchased now. Assuming the term structure is flat at 8% and does not change, in 30 days' time the price of the bill increases.

TRUE. As the time to maturity has decreased and the term structure is flat at 8%, the bill is discounted over a shorter period, so the price must increase.

Price now (90 days to maturity): 100,000 / (1 + 0.08 x 90/365) = $98 065.56
Price in 30 days (60 days to maturity): 100,000 / (1 + 0.08 x 60/365) = $98 702.00
Price increase: $636.44

True or false: the yield of a $50 000 45-day bill decreases from 7% to 5%, so the bill price decreases too.

FALSE. The yield to maturity has decreased and so the price must increase.

In the case of fixed interest securities, how do the risks differ from money market securities?

The cash flows from money market securities and bonds give rise to a number of risks. These include interest rate risk, default risk, inflation risk, foreign exchange risk, and marketability risk. However, there are two additional risks that affect bonds but not money market securities. These are reinvestment risk and call risk. The price of a bond is usually calculated on the assumption that interest or coupon payments can be reinvested at an interest rate equivalent to the bond yield. Yet interest rates change over time, and we can only invest the coupons at the available market rates.
While reinvestment risk does not affect zero-coupon bonds or money market securities, it is a potential problem where bonds pay regular coupon payments. As each coupon is received it must be reinvested at some unknown future interest rate, and this coupon reinvestment gives rise to reinvestment risk.

Call risk refers to the likelihood that the call provision, found in corporate bond issues, will be exercised. A call provision gives the borrower the right, but not the obligation, to buy back the issue at some specified price and at some time prior to maturity. This is generally found with bonds paying fixed interest rates. It is important for the investor because it introduces further uncertainty in terms of the bond maturity date, the bond's cash flows, reinvestment risk and interest rate risk. If the bond can be bought back at a set price, this will tend to occur when interest rates fall, because the borrower can lower borrowing costs by buying back the debt and issuing new debt at a lower cost. When this occurs the investor is faced with both lower reinvestment yields and a capped price, as the price paid by the borrower is generally not the (higher) market price but a previously agreed price written into the bond contract. The investor can be compensated for this source of risk by means such as a lower initial purchase price or higher coupon payments.

What is duration?

Duration is the negative of the bond's price elasticity with respect to yield, and so is a measure of the sensitivity of the bond price to changes in the yield. Yield risk (or interest rate risk) is perhaps the key factor in the management of coupon-paying bonds, as it is changes in interest rates which account for much of the volatility in bond prices.

What is convexity and why is it important?

Convexity provides a measure of the level of curvature evident in the relationship between price and yield.
While duration approximates the relationship between price and yield by assuming a linear relationship, which is generally accurate enough for most purposes, it does measure the relationship with error. Thus, where greater accuracy is required, convexity may also be useful to the investor.

What is the relationship between bond yield and price?

As the yield increases, the price decreases.

What are the key assumptions of expected utility and their related problems?

Assumptions:
The investor is assumed to be able to rank all possible alternatives. If asset A is preferred to asset B and asset B is preferred to asset C, then asset A is also preferred to asset C. This principle is generally called transitivity.
The ranking is assumed to be strongly independent, which suggests a chosen ranking will hold no matter what other assets are held.
The ranking is measurable. It is important to be able to assign a number to a portfolio which allows comparison of that portfolio with other portfolios.
It is possible to rank assets and uncertain gambles.

Problems:
independence and the existence of complements
individuals do not always rank alternatives in a consistent manner
ranking of alternatives may not be independent of the environment within which the ranking is made
non-satiation may not be a reasonable assumption for individuals, especially at extreme consumption levels.

What are indifference curves?

Indifference curves can be used to compare different combinations of assets. Indifference curves are curves drawn where expected utility is held constant and expected return and standard deviation are allowed to change. Each curve lists all possible combinations of standard deviation and expected return which yield the same level of utility; alternatively, investors are indifferent between all the portfolios which fall on the indifference curve.

What is prospect theory, and how does it differ from traditional views of expected utility theory?
Traditional methods of analysing expected utility focus on the total wealth of investors. In contrast, prospect theory models choice in terms of gains or losses rather than total wealth. Investors were found to be twice as concerned about losses as they were about gains (Tversky and Kahneman, 1992). This results in an indifference curve that assigns a greater absolute change in utility for losses than for gains (known as a subjective value function).

What is the difference between the Markowitz approach and the Sharpe approach to solving the portfolio choice problem?

Solving the portfolio choice problem involves determining the portfolio weights that either maximise return given a particular level of risk, or minimise the variance for a particular level of portfolio return. As such, three inputs are required: asset expected returns, variances and covariances. The standard Markowitz approach requires the estimation of the full covariance matrix. For example, if there are 20 shares in a portfolio, the Markowitz approach requires 20 expected returns, 20 variance estimates and N(N–1)/2 covariances (e.g. 20(19)/2 = 190 covariance estimates), or 230 estimates in total. The Sharpe approach uses the market model to simplify this step by reducing the number of parameters which must be estimated. By relating returns to an index, the number of estimates required is three for every asset and then another two to describe the behaviour of the market. This makes 3N + 2, which equates to 62 estimates for a 20-asset portfolio. Since each of these terms may be estimated with error, it is advantageous to minimise the number of required calculations.

In estimating the opportunity set, does the assumption that the expected returns, variances and covariances are known with certainty matter?

The expected returns may be estimated with error and the error may not be consistent across securities.
For example the level of information varies across securities and so the precision of the estimates will also vary across securities. If estimates are prone to error the opportunity set may concentrate on those securities with greatest error rather than those companies which best meet the needs of the investor. This result is often termed error maximisation and can result in a portfolio that concentrates on a fairly small subset of the available securities. These are the securities with greatest expected returns and/or least variance and covariance effect, often the most likely to suffer from data entry errors and errors in analysis. The key point to note is the importance of accurate measures of expected return, variance and covariance and the possibility that a number of ‘approximately optimal’ portfolios may exist in practice. What is the capital market line? The capital market line (CML) is a line passing through the risk-free rate of return, tangent to the opportunity set. In equilibrium investors choose that combination of the risk-free asset and one risky portfolio, which ensures the maximum expected return for any given variance. The set of portfolios that meets this requirement is described by the line drawn from the risk free rate of return on the vertical axis and extends to touch the opportunity set of risky assets at a tangency point. Investors then choose that combination of the risky portfolio and the risk free asset, which maximises utility. What is the security market line? The security market line (SML) is the relationship between expected returns on the vertical axis and beta on the horizontal axis. The slope term is commonly called the risk premium and it represents a premium for taking on undiversifiable risk (also called market risk or covariance risk). What is the difference between efficient and inefficient portfolios? 
Efficient portfolios are those that plot on the CML while inefficient portfolios are portfolios which do not plot on the CML. Efficient portfolios form part of the efficient set and so are those portfolios which risk-averse value maximising investors would choose given a particular level of standard deviation (or variance). If an inefficient portfolio were held, the investor could increase utility by changing the composition of the portfolio until it became an efficient portfolio. For the CML these portfolios consist of some combination of the market portfolio of risky assets and the risk-free asset. Can the beta of a portfolio be estimated using the betas of the individual assets in the portfolio? How? Yes. The beta estimates of the individual assets are weighted using the proportion of total wealth invested in each of the assets to give the portfolio beta. βp = w1β1 + w2β2 + ... + wnβn, where βi = beta of asset i and wi = value weighting of asset i (proportion of asset i value to total investment value). How do different borrowing and lending rates affect the CAPM? If differential borrowing and lending rates exist, the traditional capital market line fails. It is no longer a line including the risk-free asset and the market portfolio. Investors no longer consistently choose to invest in the one market portfolio of risky assets. There are an infinite number of possible portfolios that investors would choose and their choice is now determined by their preference function. The CAPM no longer applies to pricing assets. There is no longer a single CML given a particular efficient set and so there is no longer a unique market portfolio (tangency point). What are the key predictions of the CAPM?
The CAPM predicts that beta completely determines the expected return of the portfolio; that beta explains return variation to the exclusion of all other alternative explanatory variables; that there is a linear relationship between expected return and beta; that the coefficient on beta is equal to the risk premium, (E(Rm)–Rf); and that over long periods of time the market rate of return will be greater than the risk-free rate of return to compensate for the greater risk associated with the market portfolio (i.e. E(Rm) > Rf). What is the empirical evidence on the CAPM? Early tests of the CAPM generally found support for the model's predictions. For example, Blume and Friend (1973) found that a cross-sectional regression with mean return regressed against beta estimates supported the predictions made by the CAPM, but also suggested the possibility of other factors driving expected returns. However, Roll's critique cast doubt on whether any test of the CAPM is reliable. Roll argues that to test the CAPM researchers must identify the market portfolio, which is impossible as there is no observable portfolio of all risky assets. This criticism is perhaps a little extreme, as econometric theory can cater for situations where a suitable proxy for the market portfolio is found, but the basic problem remains of identifying such a proxy. Later, the study of Fama and French (1992) provided perhaps the most damning evidence against the CAPM, in which other factors were found to have explanatory power over returns. However, doubts over whether these factors are true sources of risk and concerns over the estimation of risk premia have slowed the death of beta. This again focuses the discussion on whether the CAPM is a useful model: if we do not use the CAPM to price risky assets, then what is the alternative? What is a unit trust? Unit trusts are pooled investment vehicles which enable investors to invest and withdraw through individual tradeable units. Unit trusts, like any trust, have an independent trustee who is responsible for overseeing the trust.
They are governed by a Trust Deed which sets out specific guidelines for the trust’s activities. Some unit trusts are listed on the stock exchange, such as listed property trusts. A listed unit trust initially sells a fixed number of units to investors which are subsequently listed on the stock exchange. The value of each unit depends upon market forces of supply and demand. As the number of units is fixed, the only way investors can enter the trust is to buy units on the stock exchange or wait for a possible future new issue. As such, these trusts are known as closed-end funds. In comparison, an unlisted unit trust may issue new units at any time and similarly may redeem (i.e. buy back) units at any time. The value of each unit depends upon the value of the underlying investments. These trusts are known as open-ended or mutual funds. What is a superannuation fund? In general, superannuation funds are designed to set aside money during the working lives of people to cater for their financial needs during retirement. The superannuation industry provides for a future national pool of capital for the use in retirement of current workers. Superannuation involves employer sponsored funds and personal superannuation schemes. Employer sponsored funds are set up specifically for an employer and their employees. No one else is permitted to make contributions to the fund. In some funds, only the employer makes contributions and these are known as non-contributory funds. However, the more common arrangement is for both employers and employees to make contributions. Superannuation funds are managed by life insurance companies, external fund administrators, master trust superannuation funds (which bring a number of managers and products under the one umbrella) and pooled superannuation trusts. What is a life office fund? Life office funds include life insurance, life assurance, annuities, pensions and superannuation products.
In Australia, life insurance companies are established under specific legislation which places restrictions on the company’s activities. Life insurance companies make investments directly, rather than acting as an agent. While most fund managers are appointed as an agent, life insurance companies act as principals. Hence, investments are taken directly onto the balance sheets of life insurance companies through the special vehicle known as a statutory fund. Life insurance companies make contractual commitments to make a return (guaranteed or expected) on client money. What is a growth fund? Growth funds have the objective of reinvesting earnings such as interest and dividends to take a medium risk position to achieve capital growth. Growth funds suit investors with a medium to long-term horizon. The return to investors is through capital growth. These funds typically have a high weighting in equities. What is an income fund? Income funds have the primary objective of providing a steady income stream to investors through periodic distributions. These funds require regular cash inflows and typically select fixed interest securities and stocks with high dividend yields. Income funds also have a secondary objective of capital growth. What is a capital stable fund? Capital stable funds have the objective of ensuring long-term stability and growth. Typically, investments will be of low risk. These funds are sometimes referred to as capital funds or capital guarantee funds. These funds suit investors who have a long-term horizon and are relatively risk averse. Capital stable funds will commonly have a high weighting in property and fixed interest. What sort of asset allocation would you expect of a growth fund? Growth funds have the objective of reinvesting earnings such as interest and dividends to take a medium risk position to achieve capital growth. Therefore they have a medium to long-term horizon and look to asset classes of medium risk. 
Typically, equities suit growth funds as they provide the possibility of strong growth without excessive risk. Moreover, equity growth is expected to be ongoing and the market is generally perceived as relatively liquid. What sort of asset allocation would you expect of an income fund? Income funds have the primary objective of providing a steady income stream to investors through periodic distributions. Hence, income funds require regular cash inflows which typically arise through interest and dividends. Asset classes which suit this objective include coupon-paying bonds and shares with high dividend yields. What sort of asset allocation would you expect of a capital stable fund? Capital stable funds have the objective of ensuring long-term stability and growth. These funds suit investors who have a long-term horizon and are relatively risk averse. Capital stable funds will commonly have a high weighting in low risk investments such as property and fixed interest. In particular, inflation-linked securities may be attractive because of their guaranteed return component. To what extent do investors rely on past performance when making investment decisions? Past performance is regarded as a key input into investor decisions. Survey research shows that past performance is the most important factor that investors consider when selecting a managed fund investment. For example, Sweeney Research (2001) found that 54% of investors regard long-term performance as the most important factor when selecting managed fund investments. This result was a long way ahead of the second factor: the risks associated with investment (which 17% of investors nominated). Chartwell Investment Management Ltd (2001) also found, in an English survey of 2000 investors conducted in 2001, that 58% of respondents regard performance as the most important factor to consider, followed by risk profile in second place at 35%.
Similarly, in the funds management industry, many believe that past performance is a key factor in attracting investors to their products. A review by the Australian Securities and Investments Commission (ASIC) in 2002 revealed that past performance is included in 70% of commercial advertisements designed to attract investors to managed funds. Moreover, empirical studies have shown that past performance is correlated with future fund flows. Sirri and Tufano (1998) find that investors are attracted to good performers in the USA, while Sawicki (2000) and Frino, Heaney and Service (2005) also document this finding in the Australian market. Explain the circumstances in which the Sharpe and Treynor indices can provide conflicting fund rankings. The inconsistency is due to differences in the measure of risk used to standardise returns, and the conflict is most pronounced for poorly diversified funds. The Sharpe Index uses standard deviation whereas the Treynor Index uses beta risk. However, if the fund is well diversified then non-systematic risk will be largely eliminated and the Sharpe and Treynor Indices will provide very similar rankings. What is the purpose of Carhart's 1997 model? It is used to control for known return premiums (the market, size, value and momentum premiums) when measuring fund performance. How does the Carhart alpha differ from the Jensen alpha? Carhart’s alpha is a measure of superior performance after controlling for the forces generated by the market return, size premium, value premium and momentum premium. Hence, any fund managers that construct portfolios designed to capture these premiums will find that their returns are captured within the model, and so they will not exhibit any alpha performance. Rather, managers that have strategies that do not follow mainstream premiums are expected to have alpha performance. In this sense, Carhart’s alpha is regarded in the industry as a more appropriate measure of individual performance. Jensen’s alpha, on the other hand, assumes that the CAPM is the appropriate benchmark, to the extent that the CAPM is valid.
Hence, Jensen’s alpha relies upon an estimate of beta that may be problematic. Further, the measure is claimed to only measure depth and not breadth. For instance, a fund manager may have invested in a large number of stocks, each on the basis of only a small informational advantage; Jensen’s alpha reflects the depth of the overall excess return but not the breadth of skill across these positions. What is the information ratio and how does it differ from other ratios such as the Sharpe and Treynor indices? The information ratio is an efficiency measure. The ratio analyses the excess return (to the benchmark) and then standardises the measure by the amount of ‘risk’ involved in earning the excess return. That is, the information ratio provides an insight into how much risk was undertaken to earn the excess return. An efficient portfolio manager is claimed to have a low standard deviation of tracking errors relative to the overall excess return. As a rule of thumb, values close to one are considered to be consistent with very good performance. The information ratio requires a benchmark to be specified. But this raises the problem of selecting appropriate benchmarks such as the CAPM. The Sharpe ratio measures the risk premium per unit of overall risk. But the Sharpe index does not rely on an asset pricing model such as the CAPM. The Sharpe index measure jointly captures the concepts of return and risk. Hence, it allows for investments of varying risk to be assessed on a comparative basis. But critics of the Sharpe index argue that it is reliant on the capital market line, which is based on unrealistic assumptions that do not hold in practice. The Treynor (1965) index is similar to the Sharpe index except that it is based on the ex-post security market line (rather than the ex-post capital market line). The result is that the standardised measure is beta risk rather than standard deviation. The Treynor index value for the market will always equal the market risk premium (Rm-Rf) because the beta measure of the market is always 1.0.
Hence, a fund is claimed to exhibit superior performance if the value of the Treynor index exceeds the market risk premium. The Treynor index is subject to the same type of criticism as that directed at using the CAPM as an appropriate benchmark. What is meant by performance persistence? What are the implications of performance persistence for poorly performed funds? Performance persistence is the degree to which funds are able to maintain consistent performance across time. This can be assessed by examining the correlation of fund returns over time. A degree of persistence in performance suggests that past returns and rankings are useful in predicting future returns and rankings. If this is the case then poorly performed funds will continue to be poor performers. Moreover, if this information becomes known and accepted then the predicted returns will reveal a poor outlook for these funds. Consequently, investors will shy away from the funds, resulting in an outflow of invested funds and a downsizing of the fund, such that it may struggle to survive. Performance persistence implies some predictability of future performance. How does this sit with the notion of an efficient market? There is anecdotal evidence that funds exhibit performance persistence. Research has supported this claim and the results from the USA indicate a degree of persistence in performance in the US funds market. The results suggest that past returns and rankings are useful in predicting future returns and rankings. Specifically, some evidence shows that winners repeat, particularly extremely good performers. Performance persistence is also documented among poorly performed funds. However the evidence is inconsistent and research in the Australian and New Zealand equity funds market has not yet revealed any evidence of performance persistence. If performance persistence exists then it implies that future fund returns are in some sense predictable.
If this is the case, then past information can be used to make profitable forecasts which tend to be perceived as inconsistent with a truly efficient market. However, the problem for investors is that by chasing the best performed funds they hold diversifiable risk because the positive correlation among the winning funds means that an investment in them is not well diversified across the market. Hence we may argue that expected return is higher but must also note that (diversifiable) risk is higher. In this sense, there is still consistency with the notion of an efficient market as higher expected returns come at a cost of higher risk (although this is dependent upon the view taken of the appropriate risk measure). What is similar between the basic pricing models for the CCAPM, the arbitrage pricing theory, the Fama and French model, and the international CAPM? All of the equations describe the factors that are important in determining the expected return for each security. The models differ in terms of their underlying assumptions regarding how expected returns are generated. What are the difficulties of the approach to estimate consumption betas? First, the CCAPM requires a measure of aggregate economy-wide consumption. This data is generally not available and some proxy measure must be used. Ideally, we want to measure the actual value of consumption; however, consumption statistics tend to be expenditures rather than values. This implies that goods are consumed immediately following their purchase, which of course is not true. Further, in order to implement the CCAPM, we require consumption levels at a particular point in time. However, reported expenditure figures are for expenditure over a period rather than at a fixed point. A final issue relates to the accuracy of the consumption data. Consumption data, inevitably provided in aggregate, do not span the entire universe of consumption transactions and are therefore measured with error.
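The common structure noted above — each model expresses expected return as the risk-free rate plus a linear combination of factor sensitivities and factor risk premia — can be sketched generically (the factor numbers below are purely illustrative):

```python
def expected_return(rf, sensitivities, premia):
    """Generic linear factor pricing: E(Ri) = Rf + sum of b_ik * lambda_k.

    The CAPM is the one-factor special case (a single market risk premium);
    the APT, Fama-French and international CAPM simply use more factors.
    """
    if len(sensitivities) != len(premia):
        raise ValueError("need one risk premium per factor sensitivity")
    return rf + sum(b * lam for b, lam in zip(sensitivities, premia))

# One-factor (CAPM-style) case: beta = 1.2, market premium = 6%, Rf = 4%
print(expected_return(0.04, [1.2], [0.06]))  # 0.112, i.e. 11.2%
```

The point of the sketch is that the models share this functional form; they differ only in which factors (consumption growth, macro variables, size and book-to-market premiums, currency factors) enter the sum.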
Explain why the APT should hold. The Ross (1976) approach is based on the argument that if a portfolio is formed with no risk and requires no investment, it must have a zero expected return. It is argued that arbitrage portfolios can be formed whose returns are not correlated with the underlying factors. This means there is no systematic risk associated with the arbitrage portfolios and so no systematic return is earned by these arbitrage portfolios. Further, the large number of securities used in the construction of the arbitrage portfolios ensures there is no residual risk. If an arbitrage portfolio is formed with these characteristics the portfolio expected return must be equal to zero. If this is the case then the full correctly specified set of pricing factors should explain expected returns. What are factor sensitivities in the APT model? The factor sensitivities are estimates of the sensitivity of an asset to particular factors. They are similar in effect to the beta measure used in the CAPM. That is, the sensitivities measure the rate at which return on the asset varies with a change in the underlying factor, all else held constant. What are the factors used in the Chen, Roll and Ross 1986 model?
VWNY: Return on a value-weighted portfolio of NYSE-listed stocks.
MP: Monthly growth in industrial production. The continuously compounded rate of change in industrial production during the month.
DEI: Change in expected inflation. Expected inflation is obtained from a time series analysis of rates of return on Treasury bills and subtracted from the change in the CPI for the period.
UI: Unexpected inflation. The difference between the actual inflation rate and the expected inflation rate estimates used in the calculation of DEI.
UPR: Risk premium. The difference between the returns on risky bonds rated "Baa and under" and the returns on a portfolio of long-term government bonds.
UTS: Term structure.
The return on a portfolio of long-term government bonds less the return on one-month Treasury bills. Does the empirical evidence support the pricing of the factors used in Chen, Roll and Ross? The empirical evidence suggests that this approach is difficult since there is no correct set of factors. Rather, the variables are selected by reference to theoretical justifications as to what type of variables we expect to be related to returns. As indicated in Table 9.3 (p.291), the average premium on the market index is insignificant. This suggests that the market portfolio has little ability to explain cross-sectional equity returns, in contrast to the CAPM. Although monthly growth in production, unanticipated change in risk premium and unanticipated change in the term structure appear to have significant explanatory power for the full period, statistically significant results are concentrated in one sub-period (1968–1977). Hence, the explanatory power of the chosen economic variables changes over time. Given the inconclusive results, it may well be that other explanatory variables not selected here provide greater explanatory power. Explain the Fama and French model and its supporting empirical evidence. Fama and French (1993) test the relationship between returns on individual shares and the various underlying characteristics by creating 25 USA portfolios ranked by size and book-to-market value. Seven bond portfolios are also obtained, with the range of default risk varying from government bonds with little risk to risky corporate bonds. Thus, the model is applied to both equities and bonds. Time series analysis is then conducted on these portfolios using proxies for size, book-to-market, term premium and default premium. Fama and French conclude that these variables help explain cross-sectional variation in stock returns.
The explanatory power of the market risk premium, size premium and book-to-market premium on share returns is the strongest among all factors and fairly stable across time. What is the value of a security? The value of a security is the expected future cash flows accruing from holding the security discounted at an opportunity cost of capital that reflects relative risk. Different investors have different expectations of cash flows and risk and hence different views on value. What is the price of a security? The price is the exchange rate established in the market set by the forces of supply and demand in competition. In an efficient market, the price is set equal to the expected value of the asset. Explain mispricing. Value could be determined by the present value of the expected stream of future (risky) cash flows. If price does not equal value, then the asset would be mispriced. For instance, suppose that the market price was below the estimated present value such that the asset was priced at a discount. There would be incentive for investors to purchase the asset at the lower market price and hold onto the asset to reap the future cash flows. These cash flows are known to have an expected present value higher than what was paid to obtain them. In a competitive market, the price would not remain at a discount for long as increased demand for the asset would push its price up. Explain what is meant by the joint test problem and its significance. Any test of market efficiency necessarily compares returns against some benchmark. For instance, an inefficient market may be one in which excess returns are persistently observed for a particular trading strategy. But how do we define excess returns? A benchmark for the expected return is required before we can measure excess returns. One possible benchmark is a formal asset pricing model, such as the CAPM. But the use of the CAPM as the benchmark implies that the model is appropriate.
Hence, in order to define excess returns to test market efficiency, a model for expected return is first required. Thus, any test of market efficiency is inherently a joint test of market efficiency and the model of expected return. This problem of the joint test also applies in reverse. Implicit in tests of asset pricing models is the assumption that market prices are set in a rational and efficient manner. Thus, in order to test the asset pricing models, we implicitly assume something about market efficiency. Therefore, we cannot test either market efficiency or an asset pricing model without first assuming that one or other holds. As such, any conclusion about market efficiency must be tempered with the knowledge that the results are based on an implicit assumption about the appropriateness of the return benchmark. What are the forces at work that are helping to maintain perceptions of market inefficiency? In relation to market structure forces: We expect to observe winners and losers as uncertain returns have a dispersion about a mean. There is a tendency to interpret the observation of large positive returns and wealthy investors as evidence of inefficiency. However, in a world of uncertainty we expect to encounter some observations from the extremes of the distribution. It is difficult to distinguish skill from luck. Some investors may earn substantial excess profits from a trading strategy but it is difficult to separate these investors from those that simply get lucky. After all, it is in an investor’s best interest to convince others that they have some unique skills. It is rare for an investor, particularly those investing other people’s money, to admit that their profits have been generated by sheer chance. Ex-post explanations as to why certain investments were chosen are always possible. The truly skilful investor will have ex-ante reasoning which is then supported by ex-post observation. These circumstances are very difficult to observe and test.
There are rarely many observations to test under the same conditions. The background against which investment decisions are made, such as the environment and the skills and knowledge of individual investors, is constantly changing. There are vested interests in maintaining the view of market inefficiency. Billions of dollars each year are made around the world by fund managers, stock brokers, merchant banks and financial specialists from investment advising and selling investment products. If everyone suddenly accepted the notion of market efficiency and was content to take a buy-and-hold strategy in a diversified portfolio, then it is possible that much of the demand for financial services would disappear. In relation to behavioural forces: Individuals have an aversion to loss realisation. Paper losses are somehow better than real losses. Individuals have a tendency to view unrealised paper losses as less significant than realised losses. Consequently, people hang onto unrealised losses for too long in the hope that these losses will eventually be recouped. In terms of tests of efficiency, we do not observe as many losses as we should. People believe that they know more than they do. This belief is termed the ‘illusion of knowledge’. People tend to overestimate their ability and hence acceptance of an efficient market contradicts what people would like to believe. There is a tendency for individuals to place too much faith in small sample sizes and overvalue anecdotal information. Hence, there is a misconception of an abnormally high frequency of winners in the market. Individuals have a tendency to attribute successes to skill and failures to external forces. In the context of investment, investors carry through with attribution theory and ascribe profitable investments to their ability to make superior investment decisions.
Why is it that brokers have recommended growth stocks as sound investments for many years, but value stocks appear to have consistently outperformed growth stocks? Growth forecasts contain an element of error. If the market overreacts to recent news then forecasts may be incorrect. If the market does overreact, then shares with recent growth will be overvalued. Conversely, shares with a recent poor earnings record and low growth will be undervalued. In both cases, the market fails to incorporate the correct value of information in current earnings and does not capture the reversion in growth rates. The reason for the overreaction is that in the short-run, earnings are indeed good indicators of next period’s earnings. However, the market may place too much faith in current earnings to predict earnings over a number of periods. Growth shares may turn into value shares in a shorter period than anticipated. Over time as future earnings become known, the market realises its mistake; growth shares earn rates of return below expected, and value shares earn rates of return above expected. Derivatives markets and program trading have been blamed for the crash of October 1987. Do you agree? Various explanations have been put forward for the Crash of October 1987 including irrationality before the Crash, irrationality during the Crash, bad news releases, misalignment with market fundamentals, major revisions in expectations and institutional failure. The last argument has support through a documented failure of the order system on the New York Stock Exchange. The market procedure was unable to cope with the large volume of sell transactions on the day which exacerbated the price decline and created a massive order imbalance. At the time of the Crash, the derivatives markets were more computerised and did not face the same order delays. Further, the ability to take a short position attracts investors in a falling market. 
The derivatives markets were better equipped for the transaction pressure and hence did not experience the same fall in prices as the equity market. The difference in order balances between the equity and derivatives markets manifested itself as a delinkage between the markets. Hence, the derivatives and equity markets may not have been as separated as the regulators have implied. What caused the GFC? The origins of sub-prime lending date back to just after the Second World War when the US government established Freddie Mac and Fannie Mae, which were charged with the responsibility of getting loan capital to lower socio-economic households (sub-prime lending) in order for them to secure a housing mortgage. Several key events and economic circumstances occurred that sparked the interest of mainstream banks and lending institutions to enter the sub-prime market. First, the Glass-Steagall Act of 1933 was repealed in 1999, allowing banks to engage in both commercial activities and investment banking. Glass-Steagall was originally enacted because it was believed that the combination of the two led to excessive risk taking. Second, the economic conditions in the early 2000s were very stable, highlighted by low inflation, steady economic growth and general prosperity. These conditions spurred an investment appetite for new opportunities. Meanwhile, banks had entered the sub-prime mortgage market and through a series of lending innovations created securities called collateralised debt obligations (or CDOs). CDOs were securitised mortgage portfolios that contained both risky subprime mortgages and lower risk mortgages. An underlying assumption behind sub-prime lending practices was that house prices would continue to rise as this provides protection to lenders against default on the loan. However, in early 2007, the residential housing market started to show signs of weakness and as house prices fell, loans began to default.
As borrowers started to default, the situation spiralled as defaulting properties were sold onto a depressed housing market by the lenders, thereby putting further downward pressure on prices. As the underlying mortgages defaulted, it was a natural extension for the CDO package to start defaulting on payments. The first major signal of collapse was the near failure of two large hedge funds run by Bear Stearns. The first major institution to collapse was Lehman Brothers, which triggered widespread fear that the entire financial system was on the brink. Banks reacted by tightening their available capital and calling in loans. In the wake of capital rationing, businesses deferred major projects and tightened their control of cash. This in turn, put pressure on spending, investment and employment. The consequence was a major decline in economic conditions that we now refer to as the global financial crisis. There is great debate as to the primary causes of the global financial crisis and this should be an open question. However, some key elements include the relaxation of regulations that allowed banks to take on excessive risk through financial engineering, lending practices that encouraged financing to people that could least afford loans, decreasing house prices, and uncertainty in the financial markets. Why do fund managers devote resources to research? Fund managers undertake research because they may believe the market to be inefficient. The results of their research may lead to profitable opportunities. As fund managers are rewarded partly on the basis of performance, there is a clear incentive to search for profitable opportunities. Moreover, even if a fund manager did believe in market efficiency, they might need to be seen by their clients as believers in inefficiency. Hence, in order to maintain confidence with their investor base, the fund manager undertakes research. Further, research activity may be a form of insurance to a fund manager.
That is, the fund manager may not have a deep conviction one way or another in relation to the efficiency debate but undertakes research to assure themselves and to make sure that any profitable opportunities are not missed. Note that if fund managers and other investors do not take perceived profits as they arise, then perceptions of inefficiency would remain.

If fund managers suddenly stopped devoting resources to research because of a widely held belief in market efficiency, what would be the implications?

Arguably, research is an action that makes the market efficient. Believers in market efficiency argue that it is the existence of believers in market inefficiency that keeps the market efficient. The irony is that investors who seek to exploit market inefficiencies by their own actions ensure that any inefficiency is eliminated. Competition among investors trading on perceived inefficiencies helps to maintain efficient prices. Research by fund managers may fit within this argument. That is, it is the research activity undertaken by funds which helps maintain an efficient market. If this research was stopped, inefficiencies may prevail.

What are the implications of a difference between price and value of a security?

When considering any investment, the investor will assess the value of the asset and then compare this value to the current price of the asset. The assessment of value is subjective and investor specific, whereas price is objective and determined in the market. If the asset's value is less than its current price then the asset is over-priced and the investor will either leave the asset out of their portfolio or seek to short sell the asset at the prevailing market price. Conversely, if the asset's value exceeds its current price then the asset represents a bargain to the purchaser and its purchase is expected to increase the net wealth of the investor's portfolio.
How can we move from the general present value model and arrive at the dividend discount model?

The present value model applied to shares becomes the discounted value of all future dividends. The key is to invoke an assumption of constant growth indefinitely such that all future dividends can be expressed as a function of the current dividend. For example, next period's dividend is the current dividend plus the incremental growth. The growth assumption enables the valuation formula to collapse to three components: the current dividend, the constant growth rate and the discount factor (cost of capital).

Compare the dividend discount model (DDM) and the earnings capitalisation model (ECM).

The dividend discount model (DDM) and earnings capitalisation model (ECM) are both based upon the present value model. Hence, both models start from the same base of attempting to forecast future cash flows and discount them back to the current time. The difficulty with the present value model is forecasting the future cash flows. To make this task easier, various assumptions can be invoked. The DDM invokes the assumption of a constant growth rate indefinitely while the ECM assumes zero growth. It can be shown that the ECM flows from the DDM. Hence, the models are internally consistent. The key is to recognise that each model makes different assumptions about future cash flows and therefore their applicability depends upon the relationship of these assumptions to the actual circumstances.

The dividend discount model has been criticised because of its simplistic assumption of a constant rate of growth. As such, it is unable to cope with situations of no growth, delayed growth and variable growth. How would you respond to this criticism?

Most valuation models have a sound theoretical base. However, as valuation models require estimates of future events, these are subject to the problems of forecasting.
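The constant-growth collapse described above can be written out as a short sketch (illustrative code, not from the source text; the function and variable names are our own):

```python
def gordon_value(current_dividend, growth, cost_of_capital):
    """Constant-growth dividend discount model. The three components
    are the current dividend, the constant growth rate and the
    discount factor: Value = D0 * (1 + g) / (r - g).
    Only meaningful when the cost of capital exceeds the growth rate."""
    if cost_of_capital <= growth:
        raise ValueError("cost of capital must exceed the growth rate")
    next_dividend = current_dividend * (1 + growth)
    return next_dividend / (cost_of_capital - growth)
```

For example, a $2.00 current dividend growing at 3% and discounted at 8% gives 2.06 / 0.05 = $41.20. Note how the denominator (r - g) makes the value highly sensitive to small input changes when the two rates are close, which is the point made in the answer that follows.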
Hence, there is necessarily subjectivity in the practical implementation of valuation models. That is, the problem is not the model itself, but rather obtaining the necessary values of the input parameters to implement the model. The present value model requires estimates of future cash flows, and assumptions can be made to simplify the forecasting task. However, the assumptions themselves usually require further subjective input, such as a growth rate in the case of the dividend discount model. Moreover, any model which is based on the present value concept requires a discount rate which incorporates the time value of money and relative risk. There are many different approaches to obtaining values for the input parameters, especially the growth rate or cost of capital. Most approaches seek to utilise available data to provide some objectivity to the task. Examples include trend lines based on past data, time-series models such as martingales, expert forecasts, formal models (e.g. the CAPM), inverting of valuation models where the price is known, and the use of other financial data (e.g. the plowback technique). No one approach will always provide the most accurate estimate and hence each technique results in estimation, and ultimately, valuation errors. The extent of the error depends upon individual circumstances prevailing at the time. However, it is not uncommon for small input errors to result in large valuation errors, especially when the values of the cost of capital and growth rate are close to each other.

What is the free cash flow model and what is its relationship to the present value concept?

Free cash flows (FCF) are generally defined as those cash flows which remain in the business after meeting all expenditure and investment outlays. That is, FCF can be viewed as the residual cash available for distribution to shareholders. If the FCF is valued then this is the same as valuing the potential cash dividend stream.
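Valuing that residual cash stream is a direct application of the present value principle; a minimal sketch (illustrative only; real FCF models add a terminal value and firm-specific component forecasts):

```python
def present_value_of_fcf(fcfs, cost_of_equity):
    """Discount a series of forecast free cash flows back to today.
    fcfs[0] is the cash flow expected one period from now."""
    return sum(
        cash_flow / (1 + cost_of_equity) ** period
        for period, cash_flow in enumerate(fcfs, start=1)
    )
```

Two $100 cash flows discounted at 10% are worth 100/1.1 + 100/1.21, about $173.55 today.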
The advantage of using FCF rather than dividends as the focus of valuation is that dividends are difficult to forecast because of such factors as sticky dividend policies. FCF is a different way of viewing dividends. First, FCF are derived from reported numbers on which future estimates are usually available. Second, FCF are taken from the overall firm perspective (rather than per share). The FCF approach still involves present value principles, as the FCFs are discounted back to a present value. Thus, the concept is simply another way of estimating future cash flows within the context of the general present value model. The FCF approach requires a forecast of each of the component FCF items and arrives at a series of FCFs which are then discounted at the appropriate cost of equity. However, the components in the FCF calculation are linked. The rate of return on investment outlays yields the earnings figure, and the growth of the firm yields the investment outlay. An advantage of the FCF approach is that these links reduce the need for many separate forecasts.

Distinguish between growth shares and value shares.

If the recorded value of equity is divided by the number of shares, we arrive at a value of equity per share which is known as the 'book value'. This value is simply one way of measuring the worth of equity. Another, more realistic value is to rely upon the market's current assessment of value by measuring value at the current market price. The market value of the total equity is the number of shares on issue multiplied by the market price per share. Shares with a low book-to-market ratio are typically classified as growth shares, while those with a high book-to-market ratio are classified as value shares. Empirical evidence has shown that value shares outperform growth shares in the long term.

Why is the book-to-market ratio useful?

The book-to-market ratio can be shown to be a function of sub-components. The first factor is the cost of equity capital: the higher the cost of equity capital, the greater the 'riskiness' of the firm's equity, which in turn implies a higher book-to-market ratio.
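The book- and market-value measures just described are straightforward to express directly (a sketch; the classification cut-off of 1.0 is purely illustrative):

```python
def book_to_market(book_equity, shares_on_issue, market_price):
    """Book value per share divided by the market price per share."""
    book_value_per_share = book_equity / shares_on_issue
    return book_value_per_share / market_price

def classify(bv_mv, cutoff=1.0):
    """Label a stock 'value' (high BV/MV) or 'growth' (low BV/MV)."""
    return "value" if bv_mv > cutoff else "growth"
```

A firm with $5m of book equity, 1m shares and a $10 share price has a BV/MV of 0.5 and would sit on the growth side of this illustrative cut-off.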
Thus, in general, we expect value firms to be of higher risk than growth firms. The second factor is growth: the higher the level of expected growth, the lower the book-to-market ratio. Thus, in general, we expect growth firms, as their name suggests, to have higher expected growth rates than value firms, all else held constant. The last factor is the rate of return on (net) assets in place, which is essentially expected profitability: the higher the level of r, the lower the book-to-market ratio. Thus, in general, we expect growth firms to have higher levels of expected profitability than value firms.

What are the relationships between the book-to-market ratio, dividend yield, PE ratio, EPS growth, beta and return on assets (ROA)?

P/E ratios and dividend yield are often used as alternative classifications of growth and value. These measures are related to the BV/MV measure because stocks with high growth tend to have high P/E ratios, pay lower dividends and hence have inflated share prices. This is because growth companies have a greater incentive to invest their earnings into profitable projects, potentially resulting in lower dividend payouts and hence lower dividend yield. The reverse is true for value stocks. As shown in Table 13.1 (in the previous question), the low BV/MV companies (e.g. Woodside Petroleum) are located in high-growth industries such as energy, materials and information technology and are shown to have higher EPS growth, high beta and high ROA. Conversely, companies with the highest BV/MV (e.g. Stockland) usually have low cost of equity capital (as measured by beta), low returns on assets (ROA) and high dividend yield.

What is SWOT analysis and how is it useful?

SWOT analysis is a useful framework for evaluating competitive position and the effectiveness of corporate strategies. SWOT is the acronym for 'strengths, weaknesses, opportunities and threats'. SWOT analysis involves a consideration of each of these features.
The analysis is designed to aid in the evaluation of a firm's strategies within the context of the firm's external and internal environments.

What are technical and fundamental analysis, and are they complements or substitutes?

Technical analysis relies on the past trends of price and trading volume, whereas fundamental analysis relies upon information about the underlying company's products, markets and environments. Technicians believe that the market and its trading history convey all the information required, whereas fundamentalists believe that a share has some intrinsic value that is reliant upon fundamental economic factors. The two techniques can be viewed as complements. That is, if fundamentalists trade on fundamental information then that information will be reflected in the share price and, therefore, technicians need not worry about the fundamentals. But this point is ironic; the reason why technicians can ignore fundamentals is because there are others in the market who are not ignoring the fundamentals. A similar argument applies in reverse. That is, fundamentalists can ignore the information in past trends because the actions of the technicians ensure that information on past price sequences is incorporated into current market prices. In reality, markets, and therefore market prices, are determined by the actions of both types of investors. While investors may consider themselves to be technicians or fundamentalists, the prudent investor would likely benefit from both approaches during the stock selection process. In this light, technical analysis should be considered a complement to, rather than a substitute for, fundamental analysis.

Outline the regulatory framework for financial reporting that operates in Australia.

In Australia, all company and securities regulation is governed by the Corporations Act 2001, which is administered by the Australian Securities and Investments Commission (ASIC) and its subsidiary bodies.
Under the law, all companies are required to lodge an annual return with ASIC. The level of additional reporting requirements then depends upon the nature and constitution of the company. The Corporations Act 2001 requires every company to keep an accurate set of accounts from which annual financial statements can be prepared. The financial statements must be drawn up in accordance with approved accounting standards. In circumstances where this results in a departure from a 'true and fair' view, the directors are required to make additional disclosures to present a true and fair view. For most large companies, the financial statements must comprise a profit and loss statement, a balance sheet, a cash flow statement and any notes and reports connected with these items. Generally, the annual accounts must be audited. Approved accounting standards are issued by the Australian Accounting Standards Board (AASB). These standards govern the recognition, measurement and disclosure rules for corporate accounting. In addition, the Corporations Act requires disclosure of specific key items. Companies listed on the Australian Stock Exchange (ASX) must also comply with business and listing rules issued by the ASX. The listing rules require disclosures in addition to those required under the Corporations Act.

What is the continuous disclosure regime?

The continuous disclosure regime requires disclosing entities to lodge with the Australian Securities and Investments Commission any information that is not generally available and that it is reasonable to expect to have a material effect on the price or value of the entity's securities. In the case of listed companies, continuous disclosures must be made with the ASX.
In addition to price-sensitive information associated with the company's activities, projects and financing, this information includes items such as alterations in share capital, changes in directors, shareholder resolutions, dividend recommendations and results from drilling tests for mining companies. However, there is an exemption from disclosure if the information is considered confidential. The criterion of confidentiality is difficult to apply because of its subjectivity, but examples may include new major contracts, executive pay, plans for expansion, and research and development findings.

What is DuPont analysis?

DuPont analysis involves breaking the ROE into its component parts. ROE has three components, representing profit margin, asset turnover and leverage. Profit margin represents how efficient the firm is at converting revenues to profits. Asset turnover represents how efficient the firm is at making investment decisions involving capital expenditure and utilising those resources in generating revenue. Leverage represents the impact of financing decisions on the overall profitability of the firm. DuPont analysis provides information about the strengths and weaknesses of a firm by focussing on these three components. Differences in rates of return may be observed across firms, but DuPont analysis provides the analyst with some insight as to what factors cause the differential ROEs.

How do we calculate net profit margin?

Net Profit Margin = Operating Profit After Tax / Operating Revenue

How do we calculate interest coverage?

Interest Coverage = Operating Profit Before Interest and Taxes / Interest Expense

How do we calculate business risk?

Business risk is measured by the coefficient of variation of operating profit (CVOP):

CVOP = Standard Deviation of Operating Profit Before Interest / Average Operating Profit Before Interest

How do we calculate sales variability?
Sales variability is measured by the coefficient of variation of sales (CVS):

CVS = Standard Deviation of Sales Revenue / Average Sales Revenue
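The DuPont decomposition and the two coefficient-of-variation ratios above compute directly (a sketch; the variable names are ours):

```python
from statistics import mean, pstdev

def dupont_roe(net_profit_margin, asset_turnover, leverage):
    """ROE as the product of its three DuPont components:
    profit margin x asset turnover x leverage."""
    return net_profit_margin * asset_turnover * leverage

def coefficient_of_variation(series):
    """Standard deviation divided by the mean; the same form is used
    for business risk (CVOP) and sales variability (CVS)."""
    return pstdev(series) / mean(series)
```

For instance, a 10% margin, asset turnover of 2.0 and leverage of 1.5 give an ROE of 30%, and operating profits of 8 and 12 give a CVOP of 0.2.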
Dear Monks,

I try to get this to work:

    use Inline CPP => Config => ENABLE => 'STD_IOSTREAM';
    use Inline CPP;

    print "g = ", g(12), "\n";

    my ($a, $b, $c) = (1, 2, 3);
    my ($x, $y, $z);
    my $error = f($a, $b, $c, $x, $y, $z);
    print " --> $x, $y, $z (e=$error)\n";

    __END__
    __CPP__
    #include <iostream>
    using namespace std;

    int f(double a, double b, double c,
          double& x, double& y, double& z)
    {
        int error = -2;
        x = a + 42;
        y = b + 42;
        z = c + 42;
        return error;
    }

    int g(int b)
    {
        return 42;
    }

But the output is:

    g = 42
    Use of inherited AUTOLOAD for non-method main::f() is deprecated at ./z line 12.
    Can't locate auto/main/f.al in @INC (@INC contains: ...)

How do I get this to work? Thanks in advance.
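A possible explanation, offered as a hedged guess rather than a verified diagnosis: Inline::CPP's parser can silently skip functions whose signatures it does not recognise, and `double&` reference parameters are a plausible trigger. If `f` never gets bound, Perl falls back to the inherited `AUTOLOAD`, which matches the `auto/main/f.al` error shown. One workaround sketch (untested, and assuming the `Inline_Stack_*` macros documented for Inline::C are also usable from Inline::CPP) is to return everything on the Perl stack instead of writing through C++ references:

```perl
use Inline CPP => Config => ENABLE => 'STD_IOSTREAM';
use Inline CPP => <<'END_CPP';
#include <iostream>
using namespace std;

/* Push the error code and the three results onto Perl's return
   stack rather than writing through reference parameters. */
void f(double a, double b, double c) {
    Inline_Stack_Vars;
    Inline_Stack_Reset;
    Inline_Stack_Push(sv_2mortal(newSViv(-2)));      // error code
    Inline_Stack_Push(sv_2mortal(newSVnv(a + 42)));  // x
    Inline_Stack_Push(sv_2mortal(newSVnv(b + 42)));  // y
    Inline_Stack_Push(sv_2mortal(newSVnv(c + 42)));  // z
    Inline_Stack_Done;
}
END_CPP

my ($error, $x, $y, $z) = f(1, 2, 3);
print " --> $x, $y, $z (e=$error)\n";
```

The caller then receives all four values as an ordinary Perl list, with no in-place modification of `$x`, `$y` and `$z` required.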
In August, we released Custom Interfaces, the newest feature available in the Alexa Gadgets Toolkit. Custom Interfaces enable you to connect gadgets, games, and smart toy products with immersive skill-based content—unlocking creative ways for customers to experience your product. Using the Custom Interface Controller, you can design voice experiences that are tailored to your product's unique functionality. Want to start building with Custom Interfaces, but don't know where to start? To demonstrate the process for building a prototype, we created our own Alexa-connected musical keyboard using the Alexa Gadgets Toolkit. Alexa lights up a sequence of keys on the keyboard corresponding to a given song. When the user plays that sequence back, Alexa provides feedback on whether the user pressed the right sequence of keys or not. Here's a video of the experience:

The prototype for this musical keyboard uses the Color Cycler sample provided in the Alexa Gadgets Raspberry Pi Samples Github repository, and builds upon the sample to enable new and unique functionality to teach people how to play different songs. The Color Cycler sample uses a single RGB LED and a simple button for the hardware, and uses an Alexa skill to respond to a button press before the experience ends. For the keyboard experience, we needed multiple LEDs to indicate what keys should be pressed, and multiple buttons – a single button for each key. Once the new hardware has been added, it looked something like this without the keyboard overlay:

As you can see, each LED is aligned to its corresponding button used for each key. With the updated hardware in place, the keyboard can light up when a customer chooses a song from within the skill. With the hardware assembled, LEDs can be illuminated to teach the customer which keys to press to play the song. When the skill starts, the Enumeration API is used to verify there is a gadget paired to the Echo device.
If so, the customer can select a song they want to learn to play. Based on the chosen song, a Custom Directive is sent to the paired Alexa Gadget via a Custom Interface that has been defined. The JSON sent from the skill looks like this:

    {
      "type": "CustomInterfaceController.SendDirective",
      "header": {
        "name": "ledSequencer",
        "namespace": "custom.PianoMessaging"
      },
      "endpoint": {
        "endpointId": "..."
      },
      "payload": {
        "gapTime": 500,
        "sequence": "112143",
        "init_delay": 1000
      }
    }

The payload specifies which notes should be played, the time between each note, and a delay that controls when the sequence should start playing. On the gadget side, the payload is parsed and used to illuminate the LEDs in accordance with the song that was chosen.
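The gadget-side parsing might look roughly like this (a sketch in Python, the language of the Raspberry Pi samples; the helper below is ours, and the Alexa Gadgets SDK callback wiring is deliberately left out):

```python
import json

def parse_led_sequencer_payload(payload_json):
    """Pull the timing values and the key sequence out of the
    ledSequencer directive payload shown above."""
    payload = json.loads(payload_json)
    initial_delay_ms = payload["init_delay"]
    gap_ms = payload["gapTime"]
    # "112143" -> [1, 1, 2, 1, 4, 3]: one LED index per character
    keys = [int(ch) for ch in payload["sequence"]]
    return initial_delay_ms, gap_ms, keys
```

The gadget would then wait for the initial delay, light each key's LED in turn, and pause for the gap time between notes.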
These highlighted elements of the musical keyboard are unique to this type of product. There’s so much more to dive into with Custom Interfaces, Alexa skills, and building an Alexa Gadget that really showcases the capabilities of your product. Check out these resources to start building your prototype:
Last significant change: 2001/09/17

OO programs are dynamic; the objects they consist of are created and destroyed throughout the course of the program's execution. The LeakChecker package is a very simple leak checker facility developed as part of the framework to provide the first level of defense against memory leaks. It can be used to allow basic leak checking to be built into package validation. LeakChecker is not a substitute for a full function leak checker, which should still be applied to complete programs. To use the checker, probes (C++ macro calls) are added to the code at each object creation and destruction. By default these macros generate nothing and so can safely be a permanent addition to the code. Activation is controlled by a preprocessor option which then generates calls to record object creation and destruction with a LeaLeakChecker object. This object can be used by validation code to query the number of objects created and the number currently active. The information can be supplied globally or for a specific class. The best place to insert the probes is in the class constructors and destructors, but this is not possible for external code such as ROOT. In this case the probes have to be added where the new and delete operators are used on them. The leak checker macros are defined in:-

    #include "LeakChecker/Lea.h"

The macro LEA_CTOR should be inserted into every constructor and LEA_DTOR into the destructor. These macros deduce the class name from the file name, so file names must follow the standard naming convention, i.e. the class name followed by .cxx or .h. See below if this is not the case, e.g. when defining multiple classes in a single file. It is essential that the compiler not be allowed to generate default and copy constructors, as calls to these will not be recorded.
The code for a dummy class supplying both default and copy constructors would look like this:-

    #include "LeakChecker/Lea.h"
    class MyClass {
    public:
      MyClass()                    { LEA_CTOR; }
      MyClass(const MyClass& that) { LEA_CTOR; *this = that; }
      ~MyClass()                   { LEA_DTOR; }
      ...
    }

If you don't want a copy constructor then the trick is simply to declare, but not define, a private one e.g:-

    private:
      MyClass(const MyClass& that);
    }

being private nobody else can call it so you don't have to define it. If your class is not defined in a file with the standard naming convention then you will have to use the extended forms LEA_CTOR_NM and LEA_DTOR_NM (where NM = named). These macros take two arguments: the name of the class and the address of the object. Using these macros the above example becomes:-

    #include "LeakChecker/Lea.h"
    class MyClass {
    public:
      MyClass()                    { LEA_CTOR_NM("MyClass",this); }
      MyClass(const MyClass& that) { LEA_CTOR_NM("MyClass",this); *this = that; }
      ~MyClass()                   { LEA_DTOR_NM("MyClass",this); }
      ...
    }

The extended macro forms LEA_CTOR_NM and LEA_DTOR_NM can be used with each new and delete operation to record construction and destruction of objects for classes where modification of the source is not an option. Of course you should not do this for MINOS class objects that have probes built into their constructors and destructors, although if you do, this only means double counting; it won't mask a leak or spuriously indicate one. Here is sample code recording object creation and destruction:-

    #include "LeakChecker/Lea.h"
    TheirClass* myObj = new TheirClass(...);
    LEA_CTOR_NM("TheirClass",myObj);
    ...
    delete myObj;
    LEA_DTOR_NM("TheirClass",myObj);
    myObj = 0;

This records heap objects. In principle the same technique could also be used for stack based objects:-

    #include "LeakChecker/Lea.h"
    {
      TheirClass myObj(...);
      LEA_CTOR_NM("TheirClass",&myObj);
      ...
      LEA_DTOR_NM("TheirClass",&myObj);
    }

The DTOR call has to be placed at the point where the object goes out of scope.
Not only would this be easy to overlook, but it will fail if control does not fall through to the end of the compound statement. Further, stack based objects are not a source of leaks, and all that is lost by not recording them is that they will not be included in the total creation statistics. For this reason you are advised:-

Do not use the macros to record creation and destruction of stack objects.

As explained in the introduction, by default the macro probes are disabled, producing no code at all. Consequently they can be permanent additions to the source. To activate them the code has to be compiled with the preprocessor option -DLEAK_CHECKER. This option can be selected when using the standard Makefiles by defining the environmental variable ENV_CXXFLAGS. The following shows how to activate the probes in the MyPackage package:-

    cd MyPackage
    setenv ENV_CXXFLAGS -DLEAK_CHECKER
    gmake clean lib
    unsetenv ENV_CXXFLAGS

if not using csh, substitute the equivalent commands for setenv and unsetenv. In this way leak checking can be activated package by package, or even class by class. Note however, probes used for foreign classes can give spurious results if not all new and delete probes are activated. If an activated package gives objects to another unactivated package to delete, this will appear as a memory leak. Conversely, the inverse will appear as a negative leak. These problems can be avoided by using the leak checker as part of a validation suite as described in the next section. Activating the probes as described above causes statistics to be collected, class by class, on object creation and destruction. To access this information you require the services of the LeaLeakChecker object, which can be accessed using its static Instance() method:-

    #include "LeakChecker/LeaLeakChecker.h"
    ...
    LeaLeakChecker* lea = LeaLeakChecker::Instance();

LeaLeakChecker has methods to count the total number of objects created and the number currently active ( = created - destroyed):-

    UInt_t GetNumCreated(const Char_t* name = 0) const;
    UInt_t GetNumActive(const Char_t* name = 0) const;

they take as their argument the name of the required class. If omitted, the sum for all classes is returned. You can reset the counts for any class using:-

    void Reset(const Char_t* name = 0);

again, omitting the argument means all classes. A simple summary of the current state of the leak checker system can be sent to the MessageService as follows:-

    MSG("Nav",Msg::kInfo) << lea;

The leak checker is primarily a first line of defense, to be incorporated into stand-alone package validation. Suppose you have an XxxPackage which has a validation suite contained in the XxxValidate class and executed using the method:-

    Bool_t RunAllTests();

A test function to run it and also perform leak checking could be as follows:-

    Bool_t TestXxx() {

    //  Perform full validation with leak checking and
    //  return kTRUE if successful.

      #include "LeakChecker/LeaLeakChecker.h"
      #include "MessageService/MsgService.h"
      #include "XxxPackage/XxxValidate.h"

      Bool_t ok = kTRUE;

    //  Clear leak checker.
      LeaLeakChecker* lea = LeaLeakChecker::Instance();
      lea->Reset();

    //  Run validation suite within an inner scope so that
    //  all stack based objects are destroyed before checking
    //  for leaks.
      {
        XxxValidate v;
        ok = v.RunAllTests();
      }

    //  See if leak checker is enabled by checking total created.
      if ( lea->GetNumCreated() == 0 ) {
        MSG("Xxx",Msg::kInfo) << "Warning: Leak checking disabled!!" << endl;
      }

    //  If it is, make sure that there are no objects left.
      else if ( lea->GetNumActive() ) {
        MSG("Xxx",Msg::kInfo) << "Error, leaks detected! " << lea;
        ok = kFALSE;
      }

      return ok;
    }

The leak checker described here is not a substitute for a full function leak checker such as Purify for a number of reasons:-

Despite all of the above, the system should prove useful for any package that can be tested stand-alone. Adding the probes is not particularly onerous, and the package developer can then use the system to check for leaks throughout package development on her/his own machine. In this way some of the basic leaks can be eliminated before moving onto integrated system testing using a more powerful checker.
Red Hat Bugzilla – Bug 438124 [live] system-config-date backtraces when run from live image
Last modified: 2008-07-08 22:42:34 EDT

Description of problem:
system-config-date gives a backtrace when run from a live image instance.

Version-Release number of selected component (if applicable):
system-config-date-1.9.24-1.fc9

How reproducible:
every time

Steps to Reproduce:
1. run system-config-date from gnome-terminal in a livecd desktop session

Actual results:
1. Get a backtrace:

    [fedora@localhost ~]$ system-config-date
    Text mode interface is deprecated
    Traceback (most recent call last):
      File "/usr/share/system-config-date/system-config-date.py", line 87, in <module>
        useGuiMode(page)
      File "/usr/share/system-config-date/system-config-date.py", line 59, in useGuiMode
        import timeconfig
      File "/usr/share/system-config-date/timeconfig.py", line 103, in <module>
        timezoneBackend = timezoneBackend.timezoneBackend()
      File "/usr/share/system-config-date/timezoneBackend.py", line 158, in __init__
        line = lines[2].strip()
    IndexError: list index out of range

Expected results:
1. to run normally

Additional information:

This is caused by /etc/adjtime not having information about whether or not the system clock is UTC (i.e. it's missing the third line). I guess this is due to hwclock being run in a virtualized environment where it simply won't work. I'm Cc'ing Bill Nottingham and Karel Zak on this because I need to know whether s-c-date should assume UTC or local time (or something else?) in virtualized environments before I can fix this for real -- avoiding the traceback is easy, but the question then is what to do with that lack of information.

The standard place (for Fedora/RHEL) where the information about UTC is kept is /etc/sysconfig/clock.

(In reply to comment #2)
> The standard place (for Fedora/RHEL) where is information about UTC is
> /etc/sysconfig/clock.
That's funny, because I had the impression that keeping info about UTC/ARC in there is deprecated -- about six weeks ago, Bill sent around an email with patches to s-c-date and anaconda to get rid of storing/retrieving that info from /etc/sysconfig/clock and use /etc/adjtime instead. Bill?

Correct. anaconda writes UTC-or-not to /etc/adjtime now, as does hwclock when run. Not sure why it's not getting set on the live CD; in any case, defaulting to UTC if it can't be determined is sensible.

I don't think it's got to do with Live-CD or not, but rather whether it's a virtual machine or not. How do the system times in the host and guest OS correlate? Is there a way for me to find out if the system time actually is set to UTC or not?

Not AFAIK; you'd be relying on anaconda to set the right value in /etc/adjtime.

Hm, I'm a bit puzzled. Somehow the system clock (in the guest OS) would need to know whether to adjust for DST or not. Where from does it know that?

Whatever is written by the installer (be it in /etc/adjtime, or /etc/sysconfig/clock for older releases). I don't think there is any other communication mechanism. These are evaluated in rc.sysinit and translated into options for hwclock -- which fails in a virtualized guest.

What's the kernel default without hwclock setting it? I begin to think that s-c-date should check via running "hwclock" if it runs virtualized -- and if that's the case it should just disable the UTC checkbox and ignore the whole thing (anaconda etc. should do the same). What do you think?

What do you mean 'fails'? This:

    [root@rawhide ~]# hwclock
    Cannot access the Hardware Clock via any known method.
    Use the --debug option to see the details of our search for an access method.

That's a KVM/QEMU guest BTW.

That has nothing to do with UTC settings or not, and everything to do with there being no /dev/rtc in the guest.

FWIW I only see the backtrace from a live usb instance not under qemu afaicr.
(In reply to comment #13)
> FWIW I only see the backtrace from a live usb instance not under qemu afaicr.

I suspect that you have (by chance?) a third line in your /etc/adjtime in the qemu instance. Was that freshly installed or did you upgrade from something older?

(In reply to comment #12)
> That has nothing to do with UTC settings or not, and everything to do with
> there being no /dev/rtc in the guest.

OK. It's embarrassing as the s-c-date maintainer but I have to admit that I don't know exactly how applications (via gettimeofday(), ctime(), e.a.) determine the local time. I think the system time is always in UTC and applications use glibc functions to determine the local "wall" time to display, which in turn evaluate /etc/localtime or (if the TZ environment variable is set) some file in /usr/share/zoneinfo. This would mean that the UTC setting is only ever relevant if there is a hardware clock (i.e. /dev/rtc). Does that make sense?

I suppose the virt case is 435312 all over again.

(In reply to comment #15)
>.

It being s-c-date I guess. Let's see if I get this right, s-c-date should:
- honor the UTC setting of /etc/adjtime if present, default to UTC if not
- do what with settimeofday()?

I've found a way to call C-Functions from python but I'd like to be sure what I do.

> I suppose the virt case is 435312 all over again.

Interesting. I'm not sure s-c-date needs to be in the business of setting the time, aside from calling hwclock. If it doesn't work, I'm not sure it needs to have a fallback.

In that case, should s-c-date just assume no default if this information is missing from /etc/adjtime and write the file manually if the user sets it to something? Or go with what I described in comment #9?

Hm, I could see either way. If hwclock doesn't work, writing the value in /etc/adjtime won't help.

Disabling the choice if hwclock doesn't work would be the way to go then.
Makes no sense to let the user think that this setting would accomplish something when it doesn't do that.

FWIW, if I enable time display in the KDE live image, the clock claims that the time zone is "New York" (hence why I tried to use s-c-date in the first place ...).

I've just kicked off building system-config-date-1.9.27-1.fc9 which should:
- make the "System clock uses UTC" checkbox insensitive if hwclock doesn't work
- cope with missing UTC info in /etc/adjtime
- not write /etc/adjtime at all if it doesn't have info about the system clock being UTC or not

Please try this out once the package is available.

Looks good to me now - at least no more backtraces. :)

system-config-date-1.9.32-1.fc8 has been submitted as an update for Fedora 8

system-config-date-1.9.32-1.fc8 has been pushed to the Fedora 8 stable repository. If problems still persist, please make note of it in this bug report.
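The IndexError in the traceback comes from unconditionally reading a third line out of /etc/adjtime. A minimal sketch of the kind of defensive parsing the fix describes -- treating a missing third line as "assume UTC", per comment #4 -- might look like the following. The function name and default are illustrative assumptions, not the actual system-config-date code:

```python
def read_utc_flag(adjtime_path="/etc/adjtime"):
    """Return True if the hardware clock is assumed to be UTC.

    Defaults to UTC when /etc/adjtime is missing or has no third line,
    e.g. on live images where hwclock never ran and the file is incomplete.
    """
    try:
        with open(adjtime_path) as f:
            lines = f.read().splitlines()
    except OSError:
        return True  # no file at all: assume UTC
    if len(lines) < 3:
        return True  # missing third line: assume UTC instead of crashing
    return lines[2].strip() == "UTC"
```

The original code did the equivalent of `lines[2].strip()` with no length check, which is exactly the crash reported above.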
https://bugzilla.redhat.com/show_bug.cgi?id=438124
http://www.roseindia.net/tutorialhelp/comment/89860
Applied Options: Protective Collar

Definition

Protective Collar is an option strategy that involves both the underlying stock and two option contracts. The trader buys (or already owns) a stock, then buys an out-of-the-money put option and sells an out-of-the-money call option. It is similar to the covered call strategy with the purchase of an additional put option. It is used if the trader is writing covered calls but wishes to protect against an unexpected sharp downside move in the price of the underlying security. As a tradeoff, the profit becomes limited compared with the covered call strategy.

# Protective Collar
price = np.arange(700,950,1)  # assume at time 0, the price of the underlying stock is 830
k_otm_put = 800               # the strike price of OTM put
k_otm_call = 860              # the strike price of OTM call
premium_otm_put = 6           # the premium of OTM put
premium_otm_call = 2          # the premium of OTM call

# payoff for the long put position
payoff_long_put = [max(-premium_otm_put, k_otm_put-i-premium_otm_put) for i in price]

# payoff for the short call position
payoff_short_call = [min(premium_otm_call, -(i-k_otm_call-premium_otm_call)) for i in price]

# payoff for the underlying stock
payoff_stock = price - 830

# payoff for the Protective Collar Strategy
payoff = np.sum([payoff_long_put, payoff_short_call, payoff_stock], axis=0)

plt.figure(figsize=(20,15))
plt.plot(price, payoff_long_put, label = 'Long Put', linestyle='--')
plt.plot(price, payoff_short_call, label = 'Short Call', linestyle='--')
plt.plot(price, payoff_stock, label = 'Underlying Stock', linestyle='--')
plt.plot(price, payoff, label = 'Protective Collar', c='black')
plt.legend(fontsize = 20)
plt.xlabel('Stock Price at Expiry', fontsize = 15)
plt.ylabel('payoff', fontsize = 15)
plt.title('Protective Collar Strategy - Payoff', fontsize = 20)
plt.grid(True)

According to the payoff plot, the maximum profit is the strike price of the short call minus the purchase price of the underlying asset plus the net credit from the premiums.
It occurs when the stock price is beyond the strike price of the short call option. The maximum loss is the purchase price of the underlying asset minus the strike price of the long put minus the net credit from the premiums. It occurs when the stock price is below the strike price of the long put. It is a strategy with limited risk and limited profit.

Implementation

Step 1: Initialize your algorithm. This involves setting the start date and the end date, setting the cash, and implementing the coarse selection of option contracts.

def Initialize(self):
    self.SetStartDate(2017, 4, 1)
    self.SetEndDate(2017, 5, 30)
    self.SetCash(1000000)
    equity = self.AddEquity("GOOG", Resolution.Minute)
    option = self.AddOption("GOOG", Resolution.Minute)
    self.symbol = option.Symbol
    # set our strike/expiry filter for this option chain
    option.SetFilter(-10, +10, timedelta(0), timedelta(30))
    # use the underlying equity as the benchmark
    self.SetBenchmark(equity.Symbol)

Step 2: Choose the expiration date for the options traded and break the options into the call and put contracts. The choice of expiration date depends on the holding period of stocks in your portfolio.

def TradeOptions(self, optionchain):
    for i in optionchain:
        if i.Key != self.symbol: continue
        chain = i.Value
        # choose the furthest expiration date within 30 days from now on
        expiry = sorted(chain, key = lambda x: x.Expiry)[-1]
        # filter the call options contracts
        call = [x for x in chain if x.Right == 0 and x.Expiry == expiry]
        # filter the put options contracts
        put = [x for x in chain if x.Right == 1 and x.Expiry == expiry]

Step 3: Choose the furthest out-of-the-money call and put options in the list (the code below picks the highest-strike call and the lowest-strike put), then sell the call option and buy the put option.
        self.otm_call = sorted(call, key = lambda x: x.Strike)[-1]
        self.otm_put = sorted(put, key = lambda x: x.Strike)[0]
        if (self.otm_call is None) or (self.otm_put is None): continue
        self.Sell(self.otm_call.Symbol, 1)  # sell the OTM call
        self.Buy(self.otm_put.Symbol, 1)    # buy the OTM put

Step 4: In OnData, if there are no assets in the portfolio, we buy the underlying stocks. After that, we trade the options in an amount equivalent to our stock holding (one option contract equals 100 underlying shares).

def OnData(self, slice):
    optionchain = slice.OptionChains
    for i in slice.OptionChains:
        if i.Key != self.symbol: continue
        chains = i.Value
        contract_list = [x for x in chains]
        if (slice.OptionChains.Count == 0) or (len(contract_list) == 0): return
    # if you don't hold options and stocks, buy the stocks and trade the options
    if not self.Portfolio.Invested:
        self.Buy("GOOG", 100)              # buy 100 shares of the underlying stock
        self.TradeOptions(optionchain)     # sell OTM call and buy OTM put

Summary

In this algorithm, at the beginning 01/04/2016, we purchased 100 GOOG shares. At the same time, we purchased a $715 put at $6 and sold a $772.5 call at $2.45. The share price of GOOG is $739.32, which is between the strike prices of the two out-of-the-money options. At the expiry 01/15/2016, the share price of GOOG drops to $714.32. The call option expires worthless but the put option is exercised. Then we sell 100 GOOG shares at $715. After that we hold neither option positions nor stock positions.
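As a quick sanity check of the payoff formulas stated after the plot, the example numbers at the top (purchase price 830, put strike 800, call strike 860, premiums 6 and 2) give a maximum profit of 860 - 830 + (2 - 6) = 26 and a maximum loss of 830 - 800 - (2 - 6) = 34 (a payoff of -34). The following re-computation with plain Python lists is illustrative only and not part of the QuantConnect algorithm:

```python
# Re-compute the collar payoff with the example parameters, without numpy.
s0 = 830                       # purchase price of the underlying
k_put, k_call = 800, 860       # strikes of the long put and short call
prem_put, prem_call = 6, 2     # premiums paid / received

payoff = []
for p in range(700, 950):
    long_put = max(-prem_put, k_put - p - prem_put)
    short_call = min(prem_call, -(p - k_call - prem_call))
    stock = p - s0
    payoff.append(long_put + short_call + stock)

max_profit = max(payoff)   # capped once the price is beyond the call strike
max_loss = min(payoff)     # floored once the price is below the put strike
```

Here `max_profit` comes out as 26 and `max_loss` as -34, matching the formulas above.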
https://www.quantconnect.com/tutorials/applied-options/protective-collar
Frozen Messages

(New in 1.2.)

Since Mido messages are mutable (can change) they can not be hashed or put in dictionaries. This makes it hard to use them for things like Markov chains. In these situations you can use frozen messages:

from mido.frozen import FrozenMessage

msg = FrozenMessage('note_on')
d = {msg: 'interesting'}

Frozen messages are used and behave in exactly the same way as normal messages with one exception: attributes are not settable.

There are also variants for meta messages (FrozenMetaMessage and FrozenUnknownMetaMessage).

You can freeze and thaw messages with:

from mido.frozen import freeze_message, thaw_message

frozen = freeze_message(msg)
thawed = thaw_message(frozen)

thaw_message() will always return a copy. Passing a frozen message to freeze_message() will return the original message. Both functions return None if you pass None, which is handy for things like:

msg = freeze_message(port.receive())

# Python 3 only:
for msg in map(freeze_message, port):
    ...

# Python 2 and 3:
for msg in (freeze_message(msg) for msg in port):
    ...

To check if a message is frozen:

from mido.frozen import is_frozen

if is_frozen(msg):
    ...
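The reason mutable messages cannot be dictionary keys is a general Python rule: a class that defines equality without a hash is unhashable. The standalone sketch below mirrors the idea (the class names echo mido's but this is not mido's actual implementation):

```python
# A minimal sketch of why mutable, equality-comparable objects cannot be
# dict keys, and how a "frozen" subclass restores hashability.

class Message:
    def __init__(self, type, note=60):
        self.type = type
        self.note = note

    def __eq__(self, other):
        # Defining __eq__ without __hash__ makes instances unhashable.
        return (self.type, self.note) == (other.type, other.note)

class FrozenMessage(Message):
    def __hash__(self):
        return hash((self.type, self.note))

d = {FrozenMessage('note_on'): 'interesting'}  # works: frozen is hashable
try:
    d[Message('note_on')] = 'boom'             # fails: mutable is unhashable
    mutable_is_hashable = True
except TypeError:
    mutable_is_hashable = False
```

A real frozen message would also have to prevent attribute assignment, which this sketch omits for brevity.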
https://mido.readthedocs.io/en/latest/frozen_messages.html
http://www.roseindia.net/tutorialhelp/comment/53743
Brendan has been describing his recent evaluation experiences with ReSharper, CodeRush, CodeSmart, and ReFactory, and his latest conclusion that he is now a "ReSharper Convert". Having also evaluated all of these tools myself, I'm starting to feel that ReSharper is probably the pick of the litter at this time and is overwhelmingly deserving of the small registration fee they ask.

One of the features of ReSharper that I find particularly useful is the ability to have ReSharper automatically insert the "using" statements for any code that I am typing, on the fly. For example, if I am creating a SqlConnection object, I need to import the System.Data.SqlClient namespace to access the SqlConnection class. With ReSharper, it is able to look up and insert namespaces for code you are writing as you go. Simply by hitting the Alt-Enter key combination, ReSharper will add the "using System.Data.SqlClient" instruction to your list of using statements. Very cool!

Another related feature is the ability to "optimize" your using statements. ReSharper will detect any unused using statements and prompt you to let it clean up any that it detects you are no longer using. If you're like me, you like having your source files as uncluttered as possible, and this takes care of something you probably wouldn't have worried about otherwise. (This unused code detection also applies to fields, variables, etc. and gives you the option to remove those as well.)

If you haven't had the chance to take ReSharper for a spin yet, I highly encourage you to do so. A registration fee of $99 per user makes this product affordable for everyone.
http://codebetter.com/blogs/paul.laudeman/archive/2004/09/18/26028.aspx
_IsPressed showing key press but no keys being pressed, why?
By iAmNewbe, in AutoIt General Help and Support

Similar Content

- By Miliardsto
Hello I want to know which one from arrow key was pressed the last. <^V> - need only which arrow. However I want to call a function some seconds after it was pressed, so it must be stored somewhere, not simply checked with _IsPressed. How to achieve that?

- By TheOnlyOne
Hi, I am trying to make some stuff easier for me, so when I click on the buttons on the side of my mouse it does some function. I have 4 extra buttons on my mouse and would like to utilize all of them. I have looked at _IsPressed but that only seems to have two:
05 X1 mouse button
06 X2 mouse button
Those two work fine, I just want to know if there is a way to check if the other two buttons on my mouse are clicked?

- By timmy2
A graphic is already on screen showing the meanings of the following keys: j, k, l. At the point when that graphic appears I want my script to then wait for the user to press any of those keys. My script will respond appropriately. I found an old post by zackrspv that led me to the following script. My question is: how can I limit keyboard input to this GUI? I don't want the user's keystrokes to be seen by any other program that might be running. I cannot figure out how to incorporate the BlockInputEX UDF into my script, plus I'm not sure my approach is even the best solution.

#include <GUIConstantsEx.au3>
#include <WindowsConstants.au3>

$form = GUICreate("test", 500, 500, BitOR($WS_SYSMENU,$WS_POPUP))
$SBack = GUICtrlCreateDummy()
$SPause = GUICtrlCreateDummy()
$SForward = GUICtrlCreateDummy()
Dim $AccelKeys[3][2] = [["j", $SBack],["k", $SPause],["l", $SForward]]
GUISetAccelerators($AccelKeys)
GUISetState()

While 1
    $msg = GUIGetMsg()
    Switch $msg
        Case $GUI_EVENT_CLOSE
            Exit
        Case $SBack
            ConsoleWrite("back" & @CRLF)
        Case $SPause
            ConsoleWrite("pause" & @CRLF)
        Case $SForward
            ConsoleWrite("forward" & @CRLF)
    EndSwitch
WEnd

- By johnmcloud
Hi guys, I have this script:

#include <GUIConstantsEx.au3>
#include <WindowsConstants.au3>
#include <StaticConstants.au3>
#include <Misc.au3>

Opt("GUICloseOnESC", 1)
Global $GUI_1
$GUIHeight = 25
$GUIWidth = 50
$GUI_1 = GUICreate("A", $GUIWidth, $GUIHeight, 0, 0, $WS_POPUP, $WS_EX_LAYERED + $WS_EX_TOOLWINDOW)
$Focus_GUI_1 = GUICtrlCreateLabel("", 0, 0, $GUIWidth, $GUIHeight)
GUISetState()
Global_Func()

Func Global_Func()
    While 1
        If _IsPressed(01) Then
            If WinGetTitle("A") = True Then
                $GUI_Pos_1 = GUIGetCursorInfo($GUI_1)
                If IsArray($GUI_Pos_1) Then
                    If $GUI_Pos_1[4] = $Focus_GUI_1 Then
                        MsgBox(0, 0, "click")
                    EndIf
                EndIf
            EndIf
        EndIf
    WEnd
EndFunc   ;==>Global_Func

It works, but it works every time, also if a window/software/whatever is on top of it, so it's a problem for me. How to make _IsPressed work only if the GUI is active or focused? Thanks for help
https://www.autoitscript.com/forum/topic/199639-_ispressed-showing-key-press-but-no-keys-being-pressed-why/
28 August 2013 20:12 [Source: ICIS news]

HOUSTON (ICIS)--Petrochemical makers worried about when red hot

"We poll our dealers about the 20th every month and what we're hearing is that sales are holding," said one

"August is looking good," said another source.

US automakers are expected to produce 16m units this year, up by 10% from 14.5m a year ago. That's good news for US petrochemical producers who supply acrylonitrile, acrylonitrile-butadiene-styrene and polycarbonate to the auto industry for plastic body panels and interior trim.

Petchem buyers and sellers have said they will watch auto sales closely over the next few months. Some feel that the pent-up demand from the 2008 recession that is driving US car sales has mostly run its course. Once that demand dwindles, petchem buyers and sellers worry that the underlying macroeconomic fundamentals -- high unemployment, declining wages, lower consumer confidence -- will not support robust auto sales.

US automakers are sticking by their forecast of full-year car sales of 16m units, which would be the best year for the industry since 2007. Automakers are currently ramping up production for more than 20 new models that they will introduce over the next three months. To switch over plants to new models, automakers reduced the normal summer shutdown at plants, added shifts and hired more employees.

Automakers are expected to report August sales on
http://www.icis.com/Articles/2013/08/28/9701195/august-auto-sales-seen-at-targeted-16myear-unit-pace-sources.html
iQuest Struct Reference

A quest instance.

#include <tools/questmanager.h>

Detailed Description

A quest instance. This is created (by the quest manager) from a quest factory using the trigger and reward factories.

Definition at line 50 of file questmanager.h.

Member Function Documentation

Activate this quest. This means it will process events again. Quests are activated by default.

Deactivate this quest. This means that events will no longer be processed.

Find a sequence.

Get current state name of this quest.

Return true if this quest was modified after the baseline.

Mark the baseline for this quest. This means that the status of this quest as it is now doesn't have to be saved. Only changes to the quest that happen after this baseline have to be modified. A quest doesn't actually have to do this in a granular way. It can simply say that it saves itself completely as soon as it has been modified after the baseline.

Call this function if the quest' set.

Switch this quest to some specific state. Returns false if state doesn't exist (nothing happens then).

The documentation for this struct was generated from the following file:
- tools/questmanager.h

Generated for CEL: Crystal Entity Layer 2.1 by doxygen 1.6.1
http://crystalspace3d.org/cel/docs/online/api/structiQuest.html
One of the most frequent questions I receive from my blog readers is how to use css and javascript files in an application with Spring MVC. So it's a good opportunity to write an article about the usage of resources in Spring MVC. As usual I will use the java based configuration approach.

Nowadays it's hard to imagine a web-application which doesn't have css and javascript files. How can Spring MVC deal with them? Where should these files be placed in a dynamic web project? How do you get access to static resources? I will try to explain all this in a few minutes.

Firstly you need to have some css or javascript file which you want to plug into the project. In my example I'm going to use a main.css file:

Look carefully where I have placed the file (src\main\webapp\resources\css\main.css). After this I can continue with the java configuration file.

@Configuration
@EnableWebMvc
...
public class WebAppConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/resources/**").addResourceLocations("/resources/");
    }
    ...

Above you can see the code snippet which shows how to modify a configuration class to make resources available. In our case the role of resources is played by just one main.css file. Let's consider what's going on in the following code:

...
@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
    registry.addResourceHandler("/resources/**").addResourceLocations("/resources/");
}
...

Here I declare a mapping of the src\main\webapp\resources folder and all its content to the resource location value /resources/. After these manipulations you can access css and javascript files in Spring MVC. Here are some examples of usage:

If I want to apply main.css to url:, I need to use the following construction in HTML code: < link

If I want to apply main.css to url:, I need to use the following construction in HTML code: < link

These two examples are clear enough to understand how to work with Spring MVC resources. In the following posts I will demonstrate resource usage in a Spring MVC project.

Comments:

thanq .. this article helps me a lot :)

Thanks but your following sentence is very vague: "After this I can continue with java configuration file." Which configuration file? Where is it located? What does it configure?

not working
http://www.javacodegeeks.com/2013/08/spring-mvc-resources.html
Observed: The code below does not demean the data.
Expected: Printout of approx. [0 0]

#include "Eigen/Core"
#include <iostream>

using namespace Eigen;

typedef Matrix<float,Dynamic,Dynamic> MatrixType;

int main(int argc, char* argv[])
{
  MatrixXd m = MatrixXd::Random(5,2);
  m -= m.colwise().mean().replicate(m.rows(),1);
  std::cout << m.colwise().mean() << std::endl;

  m = MatrixXd::Random(5,2);
  m.colwise() -= m.colwise().mean();
  std::cout << m.colwise().mean() << std::endl;
}

Can we detect this and automatically compute a minimal temporary for the RHS? In these cases, I think one really wants a temporary.

It should be: m.rowwise() -= m.colwise().mean();

btw, the second version is not correct, it should be a .rowwise() on the left hand side: m.rowwise() -= m.colwise().mean().eval();

(In reply to comment #2)
>.
> m.rowwise() -= m.colwise().mean().eval();

That was a typo, which I fixed in the second comment. I also thought about the speed implications. Fixing them would automatically resolve the aliasing issue. If I am not wrong, a column-wise -= is implemented like this (VectorwiseOp.h, ll.438ff):

template<typename OtherDerived>
ExpressionType& operator-=(const DenseBase<OtherDerived>& other)
{
  EIGEN_STATIC_ASSERT_VECTOR_ONLY(OtherDerived)
  for(Index j=0; j<subVectors(); ++j)
    subVector(j) -= other.derived();
  return const_cast<ExpressionType&>(m_matrix);
}

Here, I think that it might also be a good idea to evaluate "other.derived()" into a temporary. Do you agree? Evaluating column-wise and replicate arguments into temporaries will resolve both, aliasing as well as speed issues.
- Hauke

yes, that's what I meant. This is just a matter of using nested<>.

(In reply to comment #4)
> yes, that's what I meant. This is just a matter of using nested<>.

I have prepared a patch. It contains some code duplication but I wanted to check back before cleaning up. I think it is not just a matter of using nested<>. In fact, our replicate expression is using nested.
I think we need exactly the opposite of nested<>. If I am not completely wrong, we need to nest every expression by value and every plain object by reference. So right now, I added something like this

typedef typename internal::eval<MatrixType>::type PlainObjectType;
typedef typename internal::conditional<
    internal::is_same<PlainObjectType,MatrixType>::value,
    PlainObjectType&,
    PlainObjectType
>::type NestedType;
const NestedType m_matrix;

to the replicate expressions and something similar to += and -= of VectorwiseOp. My patch is attached and some feedback would be great. - Hauke

Created attachment 167 [details]
Nest expressions by value and everything else by reference.

(In reply to comment #5)
> I think it is not just a matter of using nested<>. In fact, our replicate
> expression is using nested. I think we need exactly the opposite of nested<>.
> If I am not completely wrong, we need to nest every expression by value and
> every plain object by reference.

I'm confused how this is the opposite of nested<>. I thought that nested (with a high enough value for n) does precisely that: expressions are evaluated and nested by value and plain objects are nested by reference. As you say, Replicate already uses nested<> but with the default value of n = 1. Perhaps simply changing this will solve the issue? That is, change Replicate.h:51 to

/* untested code */
enum {
    ColTimesRowFactor = (ColFactor == Dynamic || RowFactor == Dynamic)
                      ? Dynamic : ColFactor * RowFactor
};
typedef typename nested<MatrixType,ColTimesRowFactor>::type MatrixTypeNested;

> So right now, I added something like this
>
> typedef typename internal::eval<MatrixType>::type PlainObjectType;
> typedef typename internal::conditional<
>     internal::is_same<PlainObjectType,MatrixType>::value,
>     PlainObjectType&,
>     PlainObjectType
> >::type NestedType;

If T is Matrix or Array, then eval<T>::type already is a reference to the Matrix or Array. So why do you need the conditional? Thanks for the feedback Jitse.
It's probably been too long ago since I worked on the Eigen internals. You are of course right; eval<T>::type alone is enough and the conditional is not required. Regarding nested<> I wrote "the opposite" since I thought I remembered that in general nested<> does nest expressions by value and not by PlainObjectType. I want all expressions to be nested as PlainObjectType - no matter how small they are. So in case I am not wrong regarding nested, nesting with eval<T>::type seems to be enough. Regards, - Hauke

Created attachment 168 [details]
Nest expressions as PlainObjects and everything else by reference.

I updated the patch and the description. To be more clear, expressions shall be nested as PlainObjects (Matrix or Array, by value).

Created attachment 169 [details]
Fixed a typo. Changed MatrixType to OtherDerived.

I found something related to this, but with /=... I was asked to post the example in here:

// Observed: [2.00342 3.70924 1.16536 3.35758 -1.38856 -2.90473]
// Expected: [1 1 1 1 1 1]
#include "Eigen/Core"
#include <iostream>
using namespace Eigen;
int main(int argc, char* argv[]) {
    MatrixXd m = MatrixXd::Random(5,6);
    // normalize the columns:
    m.array() /= m.colwise().sum().replicate(m.rows(), 1).array();
    // after this, all sum(over column) should evaluate to 1
    std::cout << m.colwise().sum() << std::endl;
}

Proof of the last comment:

MatrixXd m = MatrixXd::Random(5,6);
// normalize the columns:
MatrixXd tmp = m.colwise().sum();
m.array() /= tmp.replicate(m.rows(), 1).array();
// after this, all sum(over column) should evaluate to 1
std::cout << m.colwise().sum() << std::endl;

Does do the right thing...

-- GitLab Migration Automatic Message --
This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance:.
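As an aside, the semantics the reporter expected can be sketched in NumPy (an analogue, not Eigen itself): NumPy fully evaluates the column reduction into a temporary array before the in-place update begins, so it sidesteps the aliasing trap the thread is about.

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.random((5, 2))

# Demean each column. m.mean(axis=0) is evaluated into a temporary
# before the in-place subtraction starts, so there is no aliasing issue.
m -= m.mean(axis=0)
print(m.mean(axis=0))  # approximately [0. 0.]

# The /= variant from the later comments: column-wise normalization,
# so every column afterwards sums to 1.
m2 = rng.random((5, 6))
m2 /= m2.sum(axis=0)
print(m2.sum(axis=0))  # approximately [1. 1. 1. 1. 1. 1.]
```

This mirrors what the explicit `MatrixXd tmp = m.colwise().sum();` workaround above achieves in Eigen.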
https://eigen.tuxfamily.org/bz/show_bug.cgi?id=259
CC-MAIN-2020-45
refinedweb
919
50.94
The class Random is a random number generator. It generates uniformly distributed random bools, ints and doubles. It can be used as the random number generating function object in the STL algorithm random_shuffle. Instances of Random can be seen as input streams. Different streams are independent of each other, i.e. the sequence of numbers from one stream does not depend upon how many numbers were extracted from the other streams. It can be very useful, e.g. for debugging, to reproduce a sequence of random numbers. This can be done by either initialising deterministically or using the state functions as described below.

#include <CGAL/Random.h>

We use the C library function erand48 to generate the random numbers, i.e., the sequence of numbers depends on the implementation of erand48 on your specific platform.

CGAL::default_random
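The same three ideas — independent streams, deterministic seeding, and reproducing a sequence via state functions — can be illustrated with Python's `random.Random` (a rough analogue of CGAL::Random, not the CGAL API itself):

```python
import random

# Two generator instances behave like independent streams: drawing from
# one does not affect the sequence produced by the other.
a = random.Random(42)
b = random.Random(42)
a.random()  # advance stream a only
assert b.random() == random.Random(42).random()  # b is unaffected

# Reproducing a sequence with the state functions, analogous to
# "initialising deterministically or using the state functions" above.
g = random.Random()
state = g.getstate()
first = [g.random() for _ in range(3)]
g.setstate(state)
again = [g.random() for _ in range(3)]
assert first == again

# A seeded generator used to drive a shuffle, much like passing the
# generator as the function object to random_shuffle.
seq = list(range(10))
random.Random(7).shuffle(seq)
```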
http://www.cgal.org/Manual/3.3/doc_html/cgal_manual/Generator_ref/Class_Random.html
crawl-001
refinedweb
136
51.04
Retrieving Tables from a Database Retrieving Tables from a Database  ... for retrieving tables from a specific database through an example. In relational... the connection between the MySQL database and Java file. We will retrieve Retrieving to database!"); try { DatabaseMetaData dbm = con.getMetaData... name in Database!"); System.out.println("Welcome"); try (); Connection conn = DriverManager.getConnection("jdbc:mysql://localhost...jdbc how to display database contents? import java.sql....(); ResultSet rs=st.executeQuery("select * from data"); while(rs.next jdbc Types of locks in JDBC: Row and Key Locks:: It is useful when... or deletes rows or keys. The database server locks the entire page that contains the row. The lock is made only once by database server, even more rows are updated ResultSetMetaData - JDBC in Database!"); Connection con = null; String url = "jdbc:mysql...; Hi, JDBC provides four interfaces that deal with database metadata... of a database and its tables, views, and stored procedures. ResultSetMetaData Training, Learn JDBC yourself methods. Retrieving Tables from a Database This section provides you a facility for retrieving tables from a specific database through... with MySQL JDBC MySQL Tutorial JDBC Tutorials with MySQL Database =...("com.mysql.jdbc.Driver"); Connection con = DriverManager.getConnection("jdbc:mysql...=con.createStatement(); ResultSet rs=st.executeQuery("select * from employee | Creating a MySQL Database Table to store Java Types | Deleting a Table from Database | Retrieving Tables from a Database | Inserting values in MySQL database table | Retrieving All Rows from a Database Table | Getting JDBC - JDBC Connect Example."); Connection conn = null; String url = "jdbc:mysql...("Disconnected from database"); } catch (Exception e...JDBC i am goint to work on JDBC and i knew oracle but very poor jdbc - JDBC jdbc how to fetch the database tables in a textfiles,by using... 
getting the connection from databaseconnection class through dbconnection method... information. Join tables in the specific database and tables in a database. Now to retrieve a particular row from a table... Join tables in the specific database  ... two or more tables in a specific database. For this you need to have - Java Database Connectivity Tutorial of some specified methods. Retrieving Tables from a Database This section provides you a facility for retrieving tables from a specific... in the MySQL database table. We know that tables store data in rows and column format Retrieving JTree structure from database Retrieving JTree structure from database This example shows how to retrieving data from... the steps required to create tree retrieving the data from the database. Here JDBC Meta Data Get tables JDBC Meta Data Get tables  ... with ExampleThe Tutorial helps you to know understand an example from JDBC... String connection = "jdbc:mysql://localhost:3306/komal"; static public Mysql & java - JDBC ; String url = "jdbc:mysql://localhost:3306/"; String dbName...(); System.out.println("Disconnected from database"); } catch... on JDBC visit to : First Step towards JDBC! of some specified methods. Retrieving Tables from a Database This section provides you a facility for retrieving tables from a specific... JDBC Tutorials with MySQL Database. MySQL is one of the widely used database Comparing tables Comparing tables How to compare two or more tables in the Mysql database using jdbc JDBC tutorial with MySQL JDBC Examples with MySQL In this section we are giving many examples of accessing MySQL database from Java program. Examples discussed here will help...; Create JDBC Create Database JDBC Create Table JDBC Create Tables... and retrieve results and updation to the database. The JDBC API is part of the Java JDBC Example with MySQL . 
Retrieving Tables from a Database This section provides you a facility for retrieving tables from a specific database through an example. You have to know about... establishing the connection with MySQL database by using the JDBC driver, you JDBC ").newInstance(); Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root" ); String sql = "Select * from data"; Statement stmt...JDBC write a JDBC program to display the result of any query jdbc define lock escalation define lock escalation A lock escalation occurs when the number of locks held on rows and tables in the database equals the percentage of the lock list specified by the maxlocks database ; Concurrency Database concurrency controls ensure that the transactions... In database, a lock is used to access a database concurrently for multiple users. This prevents data from being corrupted or invalidated when multiple users try database connectivity - JDBC database connectivity example java code for connecting Mysql database using java Hi friend, Code for connecting Mysql database using...."); Connection conn = null; String url = "jdbc:mysql://localhost:3306 JDBC Steps ? Basic steps in writing a JDBC Application connected to the MySQL database and retrieved the employee names from the database... Steps for making connection to the database and retrieving the employee from... the employee data from database and displays on the console: /* Import JDBC ) { System.out.println("Inserting values in Mysql database table!"); Connection con = null; String url = "jdbc:mysql://localhost:3306/"; String db... information on JDBC-Mysql visit to : jdbc question - JDBC a database connection for each user. 
In JDBC connection pool, a pool of Connection...(),"jdbc:mysql://localhost/commons",pros); KeyedObjectPoolFactory kopf =new...jdbc question Up to now i am using just connection object for jdbc JDBC - JDBC vendors are adding JDBC technology-based drivers to their existing database...explanation of JDBC drivers Need tutorial on JDBC driversThanks! Hello,There are four types of JDBC drivers. There are mainly four type JDBC vs ORM statements against the JDBC complaint database. JDBC allows the programmers to quickly... and result is retrieved from the database, programmer can read the data programmatically from the result set object. Here is the simple example of JDBC example database table!"); Connection con = null; String url = "jdbc:mysql...JDBC In process to access database we create a connection the syntax... implementing class. Hi friend, Example of JDBC Connection with Statement how to do two database tables in one page? how to do two database tables in one page? dear all: i want to show these two database tables in one page. one table on the left (dbtable.jsp... = DriverManager.getConnection("jdbc:mysql://localhost:3306/test","root","root"); Statement st jdbc - JDBC from different threads. The JDBC-ODBC Bridge uses synchronized methods... concurrent access from different threads. The JDBC-ODBC Bridge uses... drivers for concurrent access? Question: Is the JDBC-ODBC Bridge "); con = DriverManager.getConnection("jdbc:mysql://192.168.10.211...(); ResultSet res = st.executeQuery("SELECT COUNT(*) FROM empdetail...:// jdbc - JDBC jdbc kindly give the example program for connecting oracle dase...*; import oracle.jdbc.driver.*; import oracle.sql.*; 2) Load and Register the JDBC..."); 3) Connect to database:*********** a) If you are using oracle oci driver Data retrieve from mysql database Data retrieve from mysql database Hi sir, please give some example of jsp code for retrieving mysql database values in multiple dropdown list... 
from the dropdown, related data will get displayed on the textboxes. Here we have ("com.mysql.jdbc.Driver"); Connection con=DriverManager.getConnection("jdbc:mysql://localhost... con=DriverManager.getConnection("jdbc:mysql://localhost:3306/ram","root","root... MySql visit to : Thanks java - JDBC java how to get connectoin to database server from mysql through java programme Hi Friend, Please visit the following link for more detailed information Help on JSP and JDBC - JDBC Help on JSP and JDBC Retrieve data from Database in JSP and JDBC...;% Connection con = null; String url = "jdbc:mysql://localhost:3306/"...;title>Retrive value from database</title></head><body>< mysql jdbc connectivity mysql jdbc connectivity i want to connect retrieve data from mysqlTree - JDBC to retrieve JTree Structure from the database? Retrieving JTree Structure from the Database Go through the JTree tutorial that is containing a example code for retrieving Jtree Structure from database. http store and retrive image from database - JDBC url = "jdbc:mysql://localhost:3306/"; String dbName = "databasename...()); } } } For retrieve image from database visit to : http...store and retrive image from database how to store and retrive retrieving image from mysql db with standard height and width retrieving image from mysql db with standard height and width Hi . Here is my code to retrieve an image from mysql db. Its working properly. But i... = "jdbc:mysql://localhost:3306/"; String dbName = "db"; String userName j2ee - JDBC and then use JDBC api to connect to MySQL database. Following two tutorials shows how to connect to MySQL database:... for asking question. I will tell you how you can connection to MySQL from JSP page Retrieving the Image from a database Table Retrieving the Image from a database Table Consider a case where we want... to retrieve the image from the database table. You can do it very easily after... 
from the database table our java program need to make a connection odbc jdbc odbc i have two tables in database 1table's attribute... from two tables SELECT * FROM emp e,dept d WHERE e.DEPT_NO = d.DEPT_NO; where emp and dept are the database tables. CREATE TABLE `emp JTree - JDBC JTree how to retrieve data from database into JTrees? JTree - Retrieve data from database Find out your answer from above Mysql List Tables Mysql List Tables The Tutorial illustrate an example from 'Mysql List Tables... in database name 'Table_in_girish'. The Syntax used to display the list of tables what is the jsp coding to insert a data in database tables , I Want to know the coding for insert the data from jsp to oracle database.. my... that insert the form values to MySQL database. 1)register.jsp: <html> <form..."); Connection con = DriverManager.getConnection("jdbc:mysql://localhost graph generation using jfreechart and retrieving values from the database graph generation using jfreechart and retrieving values from the database I have made a database containing 4 subject marks and name and roll... the implementation over a database JDBC result set. Its constructor consists of url, driver JDBC Versions to a database tables using methods in the Java programming language 3). We can use... JDBC Versions 1). The JDBC 1.0 API. 2). The JDBC 1.2 API Database Connection - JDBC Database Connection In java How will be connect Database through JDBC? Hi Friend, Please visit the following link: Thanks error - JDBC to the database"); conn.close(); System.out.println("Disconnected from database"); } catch (Exception e) { e.printStackTrace... conn = null; String url = "jdbc:oracle:thin:@localhost:1521:xe"; String creating jdbc sql statements - JDBC ."); Connection con = null; String url = "jdbc:mysql://192.168.10.211..."); con.close(); System.out.println("Disconnected from database...creating jdbc sql statements I had written the following program hi... 
help me to retrieve the image from the database please... = DriverManager.getConnection( "jdbc:mysql://localhost:3306/test", "root", "root...(); f.setTitle("Display Image From database"); Image image = f.getToolkit
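The JDBC snippets above are all truncated, but the recurring pattern — ask the database's metadata for its list of tables (DatabaseMetaData.getTables in JDBC) — can be shown self-contained with Python's built-in sqlite3 module as a stand-in for the MySQL setups:

```python
import sqlite3

# In-memory database with two tables, standing in for the MySQL
# databases used in the JDBC examples above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT)")

# Where JDBC asks DatabaseMetaData for the table list, sqlite3 exposes
# the same information through the sqlite_master catalog table.
rows = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
tables = [name for (name,) in rows]
print(tables)  # ['department', 'employee']

conn.close()
```

The query and iteration play the role of createStatement()/executeQuery()/ResultSet in the Java versions.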
http://www.roseindia.net/tutorialhelp/comment/81654
CC-MAIN-2013-20
refinedweb
1,801
50.94
Closed Bug 841041 Opened 10 years ago Closed 10 years ago

[B2G][OTA] Ridiculously janky / unresponsive behavior after OTA update (making it hard to even unlock the phone)

Categories (Firefox OS Graveyard :: General, defect)
Tracking (blocking-b2g:tef+, firefox19 wontfix, firefox20 wontfix, firefox21 fixed, b2g18 fixed, b2g18-v1.0.0 wontfix, b2g18-v1.0.1 fixed)
B2G C4 (2jan on)
People (Reporter: nkot, Assigned: dhylands)
References Details (Keywords: smoketest)
Attachments (5 files, 1 obsolete file)

Description:
Homescreen does not display after OTA update without having to restart device

Repro Steps:
1) Go to Settings => Device information => Software updates => Check now
2) Install new System Update 2013-02-13-071150
3) Wait for the homescreen to appear or press power button if display went off

Expected:
*Locked Homescreen displays, user is able to unlock and use device

Actual:
*Homescreen appears black - see screenshot
*Homescreen displays successfully after restart

Repro frequency: 100% 3/3 devices
*screenshot attached
*log file attached

Notes:
Updated from:
Gecko:70c8f2cf813626e8c7b0f89676e1a62fe4ddfcae
Gaia:ecca2ee860825547d5e1109436b50b74dfe9261e
Build ID:20130212070205

That sounds bad. Can you consistently reproduce?
blocking-b2g: --- → tef?
Component: Gaia → Gaia::Homescreen

Something most definitely blew up here. Weird logcat errors:

02-13 08:46:05.212: I/Gecko(108): ###!!! [Parent][AsyncChannel] Error: Channel error: cannot send/recv
02-13 08:46:11.508: E/GeckoConsole(20200): [JavaScript Error: "formatURLPref: Couldn't get pref: app.update.url.details" {file: "jar:" line: 126}]

Marshall - Any ideas?
Flags: needinfo?(marshall)
blocking-b2g: tef? → ---

I ran into this today. :overholt did too. Actually maybe I ran into a related but different issue. In my case the screen stayed entirely blank, then I got the boot image for quite a while, then back to entirely blank. I ran into this yesterday and today.
I have to pull the battery to get anything other than the "hardware" buttons to light up.
blocking-b2g: --- → tef?
Component: Gaia::Homescreen → General

Gah, should probably be shira?
blocking-b2g: tef? → shira?

(In reply to Andrew Overholt [:overholt] from comment #8)
> Gah, should probably be shira?

Why shira? Isn't this going to impact tef?

I've applied the update locally, and I also see "app.update.url.details" errors, but I'm able to get into the phone -- albeit _very_ slowly. When the screen is off, pressing the power button to turn it back on takes on the order of ~15-20 seconds on my device, and then almost immediately goes back off. If you tap the screen some while after you press the power button, you can avoid the timeout, but then you have to patiently drag the lock drawer up, and press the unlock button while the device remains extremely unresponsive. Doing a quick top, I noticed that the b2g process is eating between 97-98%!

2577 0 98% S 36 163016K 53780K fg root /system/b2g/b2g

Looking into why that might be..
Flags: needinfo?(marshall)

(In reply to Jason Smith [:jsmith] from comment #9)
> (In reply to Andrew Overholt [:overholt] from comment #8)
> > Gah, should probably be shira?
> Why shira? Isn't this going to impact tef?

Agreed
Assignee: nobody → marshall
blocking-b2g: shira? → tef?

(See also bug 841517, which might be a dupe of this bug.)

So this has basically bricked the device for me. Restarting doesn't help, I just get a black screen. I can barely get the lockscreen to display.

What does: top -t -m 5 show? (from an adb shell)

It showed /system/b2g/b2g at 96%. I ended up killing that process and after it restarted the phone is responsive again. I'm not sure why yanking the battery out previously didn't fix it, unless I just gave it more time to finish whatever it was trying to do.

I ran into this at first, I think it was due to the device being so behind/bogged down that the daemon wasn't running (yet).
(In reply to Lucas Adamski from comment #16)
> It showed /system/b2g/b2g at 96%.

I was hoping to see the whole line, so we could tell which thread was consuming the CPU (top -t shows individual threads).

adb might be disabled. It is by default for dogfooding. You can enable it by enabling: Settings->Device Information->More Information->Developer->Remote Debugging

(In reply to Dave Hylands [:dhylands] from comment #18)
> I was hoping to see the whole line, so we could tell which thread was
> consuming the CPU (top -t shows individual threads).

Sorry, I'd closed the window by the time I saw your question. :( Spoke too soon. Checked for updates, applied it, now stuck again:

User 83%, System 16%, IOW 0%, IRQ 0%
User 272 + Nice 0 + Sys 54 + Idle 0 + IOW 0 + IRQ 0 + SIRQ 0 = 326

PID  TID  PR  CPU%  S  VSS      RSS     PCY  UID   Thread      Proc
526  550  0   24%   R  157880K  52444K  fg   root  DOM Worker  /system/b2g
526  549  0   24%   R  157880K  52444K  fg   root  DOM Worker  /system/b2g
526  555  0   24%   R  157880K  52444K  fg   root  DOM Worker  /system/b2g
526  546  0   24%   R  157880K  52444K  fg   root  DOM Worker  /system/b2g
617  617  0   1%    R  1088K    444K    fg   root  top         top

So yeah dholbert is seeing the same thing. 4 DOM Workers consuming most of the CPU. And they're all in the main process. I don't know what the DOM Workers do.

I was able to reproduce by flashing my unagi with this image: and then perform an OTA update, which updated to: I tried flashing and then OTA updating to my local built version and it didn't reproduce (my locally built version is a v1-train). I reflashed and OTA updated as in comment 23 and it reproduced (so not a one off). I was able to get into gdb and get some back traces. I'm not sure of the validity of the symbols since the image that was being used wasn't from the tree I was in.
[clarifying summary]
Summary: [B2G][OTA] Homescreen fails to display after OTA update → [B2G][OTA] Ridiculously janky / unresponsive behavior after OTA update (making it hard to even unlock the phone)

(In reply to Dave Hylands [:dhylands] from comment #25)
> I'm not sure of the validity of the symbols since the image that was being
> wasn't from the tree I was in.

Yeah... The traces don't look right to me.

Do you guys need more information to debug this issue?

i just reproduced these symptoms myself when updating to:
Gecko
Gaia 6544fdb8dddc56f1aefe94482402488c89eeec49
BuildID 20130214070203
Version 18.0
For what its worth, if i pull battery and reboot, I can recover the device back into a usable state. Just fodder for triage drivers under consideration.
blocking-b2g: tef? → tef+

I was able to reproduce using a local build. STR:
1 - Modify gecko/toolkit/content/UpdateChannel.sh near the end to override the channel:
    channel = "foobar";
    return channel;
2 - build.
3 - Create an update
    ./build.sh gecko-update-full
4 - Setup the phone to use the update
    tools/update-tools/test-update.py ${GECKO_OBJDIR}/dist/b2g-update/b2g-gecko-update.mar
5 - Do a Check Now and then do the update

This message:
AUS:SVC UpdateManager:get activeUpdate - channel has changed, reloading default preferences to workaround bug 802022
from here: seems to be the key. I'm hypothesising that the reload-default-prefs is the actual trigger.
Attachment #714262 - Attachment is obsolete: true

I let the process run for a bit longer and grabbed another set of backtraces

I picked one of the DOM Workers and just hit n over and over in gdb. It wound up doing these 4 lines continuously:

3428    #endif
(gdb)
3408    WorkerRunnable* event;
(gdb)
3410    MutexAutoLock lock(mMutex);
(gdb)
3412    while (!mControlQueue.Pop(event) && !syncQueue->mQueue.Pop(event)) {

(In reply to Tony Chung [:tchung] from comment #29)
> Do you guys need more information to debug this issue?
I can reproduce at will now, so now it's mostly just trying to figure out what's going on.
Assignee: marshall → anygregor
Attachment #714586 - Flags: review?(bent.mozilla)

thx jdm! dhylands mentions that the bug is not completely gone.
Assignee: anygregor → dhylands
Whiteboard: leave-open

I filed bug 841962 to followup on this problem and removed the leave-open on this bug. At least with the patch applied in this bug, the phone now just takes a lot longer to boot up after an update-channel change, but it seems to perform ok.
Whiteboard: leave-open
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
status-b2g18: --- → fixed
status-b2g18-v1.0.0: --- → wontfix
status-b2g18-v1.0.1: --- → fixed
status-firefox19: --- → wontfix
status-firefox20: --- → wontfix
status-firefox21: --- → fixed
Target Milestone: --- → B2G C4 (2jan on)

Verifying fix on V1-train branch - OTA goes smoothly tested with the following:
1. manually flashed to Unagi build 2013-03-20-070206
   Gecko
   Gaia 6c3767c2dea43b5e9aff7d156d36d69649005621
2. revertNightly
3. OTA to Build 2013-03-21-070203
   Gecko
   Gaia 7af427d35c4d557c75b2060022815f07851acc28

Issue seems still to be occurring when OTA from V.1.0.1 builds but there are other bugs to cover that, please refer to bugs 847511 and 842932 (note that switching update channels takes place here)
Status: RESOLVED → VERIFIED
https://bugzilla.mozilla.org/show_bug.cgi?id=841041
CC-MAIN-2022-40
refinedweb
1,527
62.17
I have a few custom components that I implemented for the 4.0 version. I want to implement something that is the same as a reusable configuration item that comes with the 4.1 version. But I do not know what to do. In version 4.0, to implement a custom Coveo component I inherited my property template from Base Id Template and implemented a C# class that inherits BaseModel and sets values in a property through ParametersHelper, and then I made a model item and a rendering item. In version 4.1 I suppose my template item should inherit Base UI Component and I should make a C# class something like this (as in the example). Can anyone help with it and describe what I should do step by step? And I want to add a custom property to the Coveo Search Interface component and the Facet component - how can I do it?

Answer by François Lachance-Guillemette · Nov 29, 2017 at 01:22 PM

You have a great start there! I have to be honest with you, the documentation is lacking about this for now, so let me recap some of the stuff you used and see if you figured it out right :)

SitecoreProperty is used to Read Properties from Sitecore. It has to match the template name in Sitecore.
SearchUiProperty is used to Write Properties for the Coveo JavaScript Search Framework.
DefaultCaption up there will output the value coming from the DefaultCaption template into the data-default-caption attribute.

I think that your big missing piece is the Model. Here is a sample of what you need:

public class FacetDropDownModel : BaseModelWithProperties<FacetDropDownProperties> { }

Inheriting this model provides some properties that you can use:
The Properties object which contains all the values read from the data source.
The RawProperties dictionary which contains key-value pairs for all the attributes required for the Coveo JavaScript Search Framework.

You must create a reference to this model in Sitecore, and use it in your custom rendering.
You can pretty much copy-paste the Coveo Facet component and change some variables to match your custom model, and it should work. Hope this helps! Don't hesitate if you have more questions, and documentation is on the way for this :)

How can I add a preview animation for a custom component, like in the facet for example?

If you take a look at the Facet Code, the waiting animation is handled by the .coveo-facet-header-wait-animation class. I have not tried it, but I guess you could override it with your own styling.

I mean this. How can I disable it for a specific facet, or create it for a custom component?

How can I read Rendering Parameters in the properties class? Maybe I should define a property in the properties class

[SearchUiProperty] public string DefaultValue { get; set; }

and then set the value in the model (override OnPropertiesInitialized for this)?

The framework uses Data Sources for its properties. What is your use case? Any specific reason you want to get rendering parameters?

I want to override, for example, the Default Value on a specific page via a rendering parameter instead of duplicating a datasource item.
https://answers.coveo.com/questions/13722/coveo-hive-and-a-custom-component.html
CC-MAIN-2018-26
refinedweb
525
62.68
28191/why-do-we-use-return-statement-in-python In python, we start defining a function with "def" and end the function with "return". A function of variable x is denoted as f(x). What this function does? Suppose, this function adds 2 to x. So, f(x)=x+2 Now, the code of this function will be: def A_function (x): return x + 2 After defining the function, you can use that for any variable and get result. Such as: print A_function (2) >>> 4 We could just write the code slightly differently, such as: def A_function (x): y = x + 2 return y print A_function (2) That would also give "4". Now, we can even use this code: def A_function (x): x = x + 2 return x print A_function (2) That would also give 4. See, that the "x" beside return actually means (x+2), not x of "A_function(x)". The print() function is use to write ...READ MORE The working of nested if-else is similar ...READ MORE For Python 3, try doing this: import urllib.request, ...READ MORE Hello No, we don't have any kind of ...READ MORE suppose you have a string with a ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE can you give an example using a ...READ MORE There is no do...while loop because there ...READ MORE In Python programming, pass is a null statement. The ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/28191/why-do-we-use-return-statement-in-python
CC-MAIN-2020-34
refinedweb
255
85.49
See also: IRC log, agenda/TOC, Mon 11 Dec, Wed 13 Dec DO: In Vancouver, lots of discussion about terminology ... Accept sets, defined sets, etc. ... I could try another pass at the terminology section, or I could split it up, I said then ... Struggling with how to (re)structure this, but no progress since Vancouver, since no clear sense of how to proceed ... I have written up an article about this stuff just from the partial understanding/accept/defined sets and versioning ... but it's not finished TBL: What's interesting in what you just said is "The accept set is bigger than you thought" DO: The larger you make the accept text set, e.g. by not using a strict schema, even if you only understand a small part of the accepted text, you're in good shape to version going forward NM: Another possible direction for discussion: 1) Could we net out where we ended up in Vancouver: concentric circles, Henry's stuff, . . . <DanC_lap> this one, dorchard ? <dorchard> no, that's different but it might be useful... <dorchard> NM: 2) Given an instance and some kind of language definition, what can we say? Well, add into the mix a piece of software, written with some take on the language definition in view ... It has its own quirks, e.g. it can't handle really long names ... DO's document goes in the direction of saying "OK, there's another language there, the language your app processes well, namely OriginalLang-longnames" ... [shift example to deeply-nested tables crash the app, so OriginalLang-TablesNestedDeeperThan3] ... Not sure I'm comfortable with this approach TBL: We've been using the idea of different languages to manage our discussion, what's making you uncomfortable? NM: Well, there are going to be a lot of them, because there are lots of slightly-broken apps ... and it's not clear how to define the corresponding languages ... Maybe we shouldn't try to cover this in the spec., but concentrate on the in-principle shape of the language and its versions ... 
Core story is about language specs, instances, and information -- try to find the mathematical relations, e.g. HST's stuff in Vancouver ... I'm happy with that. ... But there was also stuff about particular apps whose defects we could model using that same approach ... I don't think we should go there -- it just confuses users ... And I don't think it will work very well, either ... At best, we could add something which explains that we're not tackling that TBL: So just leave things as they are? NM: No, although we haven't looked at those bits lately, I think there's stuff that needs to come out. ER: DO, are you getting your message across? ... Are you saying what you want to about versioning DO: No, we seem to have gotten diverted onto compatibility and terminology , with the problem that if we got that right versioning would fall out ... hasn't happened yet NM: Version skew, most of the parts I was concerned with have gone in the latest version DO: We got stuck in the weeds a bit wrt the diagrams -- if we back up to ToC, can we see a way forward? <DanC_lap> (i think re the line from Language to Syntax, we agreed that it's not 1-1, and then talked about whether it's worth having a line at all that didn't converge) <DanC_lap> ("3rd party"? who are the 1st and 2nd parties here? hmm.) DO: A helpful part was section 8 -- nets it out in terms of namespace strategies for XML languages ... So there's language versioning, XML Language versioning, and versioning with W3C XML Schema as three levels of story [general discussion about 1 vs. 2 XML parts] TBL: Tag soup discussion has brought up the XHTML modularisation spec ... It's long, I haven't read it, but I gather it's about XML modularisation in general ... Anyone use this? HST: I used it for RDDL document at XML Schema namespace URI ... 
[Summarises the Schema and Schema 1.1 situation, extensibility via substitution groups and wildcards] NW: [Summarises the NVDL story] TBL: Right, so for NVDL you have to have an NVDL thing which defines how things go together ... It's more web-like to make it work if each namespace owner just does their own thing NM: Schema does need you to put in the hooks HST: True for wildcards, but not for substitution groups TBL: I can see the value of NVDL when you want to continue to control how things go together, say if you're a publisher working with Docbook ... for the Web it's interesting to be able to be looser NW: I haven't looked, it may be that there's a way to write an NVDL story which says generically "do the right thing with every namespace" TBL: I want to just drop a bit of RDF-A in and have it be allowed HST: Difference between one use of NVDL and subst groups is that NVDL allows you to insert stuff invisibly, as it were, where subst groups only let you replace things DO: Can we bring this back to the finding? NW: The Schema part of the finding should talk about XHTML modularisation ... I'll take an action to look at NVDL <scribe> ACTION: NW to produce some information about NVDL for the finding, maybe [recorded in] <dorchard> DO: Sections 8 and 9 would both go into an XML-specific part ... Extension vs version is section 10, not sure about that <DanC_lap> (yeah, owners version, 3rd parties extend. that appeals to me.) <Zakim> DanC_lap, you wanted to suggest that these XHTML modularization requirements apply not just to XML Schema 1.1 but also to CDF (and/or HTML) and to note that the practical DC: Is the CDF WG up to speed with the subst groups story?
NM: I think they do better with wildcards DC: Subst groups make more sense to me <DanC_lap> (ht, please minute that "strategy 7" reference reasonably carefully; that seems like a gem, if only historically) TBL: We have the XHTML Modularisation design, and the subst group stuff DC: We're in a very different place wrt those two HST: I forgot to say about substgroups, there is a constraint that the type of the substituting element has to be derived from what it's replacing DO: So we should look at saying something about subst group story in the new part 2 TBL: I'd be happy with moving those sections into an XML part, and including a discussion about XHTML Mod. there, and something about subst group technique, etc. ... There are lots of tools out there: XHTML, subst groups, NVDL DC: I can see that there are alternatives, which could be described in articles by individuals, but I don't think there's consensus on them in the community Schema-based XHTML mod: [See end of discussion in the afternoon] Philippe le Hegaret joins the meeting PlH: Although mainstream WS is done over http, large companies use bindings to many other protocols: UDP, MQ series, JMS, etc. ... So there's an argument to be able to do a GET independently of what the protocol is underneath ... Yes, that means that you can do a GET, on top of SOAP bound to HTTP, and it won't turn into an HTTP GET. . . ... This is because the SOAP via GET is in the REC but not widely implemented ... Most people use WS and SOAP via toolkits, and those toolkits don't use HTTP GET much at all ... In particular when security is involved ... So the argument for WS Transfer is that it's providing GET functionality in the WS world, i.e. independently of transport ... [something about SOAP 1.1. vs. 1.2, scribe didn't get -- NM fill in?] ... Furthermore there are other specs coming along which are building on top of WS Transfer, e.g. metadata transfer ... so to request info about an EPR, e.g. 
WSDL or ??, you request metadata via WS Transfer NW: But if you just gave the thing a URI, you could just do a GET DO: [Example scribe didn't get] NM: The layering makes the story more complex ... First step, get rid of the EPR/URI problem ... Second step, arrange that if you're on the Web, be sure that transfer requests do actually turn into GET PlH: Once you commit to using HTTP GET, you're outside the WS Stack, you can't use WS Security or Reliable Messaging DO: Not quite true, WS Security does provide for using SSL PlH: That applies to Signature, but yes, encryption can be handled via SSL NM: I wouldn't try to automatically map the full WS security stack onto SSL, but more clarity about what HTTPS/SSL can give you would be a good thing ... But you don't get non-repudiation, I understand PlH: Another thing you don't get is timestamping, which you can't get with SSL DC: HTTP requests are all date-stamped ... Secure time service? PlH: I think 'yes' ... If we started work on WS Transfer, how would the TAG react? HST: I think we can't say 'no' -- the stack is there, we can't deny its use for transfer NM: I would be much happier if we can do a better job of getting the community that's using EPRs, etc. to take to heart the value of integration with the Web. I feel good, on the whole, about our recent interactions with the WSA group that led to the note being included in the WSA core; I'm disappointed that. NM: Having said that, I think the WS Transfer work can go ahead TBL: So we're agreeing that WS architecture is separate from Web Architecture ... It's not one information space, to some extent the two information spaces compete TBL: There's no point in trying to force our view of WebArch onto their information space. As David Baron pointed out at the recent AC meeting in Tokyo, forcing two incompatible goals into a single WG can't work.
<Zakim> DanC_lap, you wanted to note that the practical requirements on RDFa are (1) that validator.w3.org gives it a thumbs up [which currently requires DTDs] and (2) that HTML authors DC: Should we then consider not hosting the WS work at W3C, given that it's not compatible with Web architecture PlH: But WS is not entirely against REST, it's just that the toolkits don't typically exploit a RESTful foundation DO: But adding WS Transfer in a way would enable more RESTful WS -- after all, REST is not dependent on http, you can have a RESTful use of SOAP over UDP NM: My employers look at this continually, and there is more recognition of the value of the Web, and how things are going to play out is just not simple DO: What about the perspective that W3C is about the foundation, not the higher levels? The toolkits and stacks don't make that distinction PlH: H. Frystyk Nielsen, for example, is using SOAP w/o WSDL or anything else from the WS Stack <plh> (see) <dorchard> I say again my point, which is the large majority of toolkits and services use it all <dorchard> right now, it's soap + wsdl. <dorchard> Sometime in the future, it will be soap+ws-address+ws-rm+ws-sec, which is the ws-i rsp profile. NM: [extended example of timestamped signed stockquote] ... [and reference back to yesterday's printer/diskdrive example] NM: Net-net: integration of SOAP/EPR-based and HTTP/REST style can be done, but it's not easy PlH: It certainly can be done -- Amazon and Yahoo are doing it NM: The crucial point is that you use the same URI either way, once the EPR/URI problem is solved PlH: The way EPRs are being exchanged, the example in the WS Transfer spec is misleading -- usually you will start with a URI, and only after you get to the dynamic interaction you start seeing EPRs with identifying information in ...
as reference parameters DO: The classic case is a session ID -- you could use a customer ID for that, but you don't have to PlH: The fact that the WS Transfer example breaks the Addressing agreement can be fixed TBL: "A man convinced against his will is not convinced at all" <DanC_lap> "A man convinced against his will is not convinced at all" --TBL. indeed. "paper consensus" is another term that comes along. <Norm> quote attribution: Laurence J. Peter NM: If and when the WS community were convinced of the values of URIs, the conversation would go in a different direction TBL: So again, maybe they are just focussed on the stack and what it can do for them, and the URI issue just doesn't arise ... and we can leave them to it NM: But it's just not that simple, they came to me at one point and said "So what's the story about this REST stuff?" DO: Less is more -- a single soap: URI for a whole collection of services TBL: Parallel with phone system -- bigCorp has one 800 number for 100000 employees. . . NM: The only way to safely do this is not to use EPRs, but maybe the marketplace will discover that EPRs w/o identifying parameters have more value TBL: The WS space is still in a developing/exploratory phase, where people are experimenting with different approaches ... and they may stabilise on something which will transliterate into URLs without much difficulty ... So what should the TAG say about WS Transfer DC: We reopened issue 7 PlH: Would the TAG oppose the creation of a WG to do WS Transfer? DO: That wasn't the original question ... Regardless of whether it's done here or elsewhere, what is the relationship of WS Transfer to our position on "When to use GET" ... We could say "This is really harmful to the Web" ... or we could say "These services should be on the Web, let's find a way" ... 
or we could say "Fine, sure, whatever" VQ: Let's get opinions around the table DO: I think the community is missing some technical pieces which would allow people minting identifying EPRs to mint URIs instead ... particularly QName-to-URI ... Also, just because the toolmakers of the WS stack don't buy into WebArch, doesn't mean that their customers wouldn't like some of it ... There are people out there with a more wholistic view, and we should try to help them TBL: We owe the world a statement about the loss of network effects from having a parallel web ... People have the right to define independent information spaces, we can't stop them ... We have some ideas about possible routes towards convergence, but the TAG can't make that happen ... It's not a topic on which the TAG itself should spend much effort <Zakim> DanC_lap, you wanted to say, re whenToUseGet, read/query methods with a few scalar parameters should use GET. I encourage work on WADL and the like that improve the integration DC: Value WS delivers to programmers is ease of use, and the lack of a DL means that even when they've just got a few read/query parameters, they end up using POST via the WS Stack <DanC_lap> (hmm... I did say "lack of DL"; I suppose WSDL 2.0 can express what I'm talking about, but toolkits don't support the WSDL2/GET stuff.) <dorchard> (also Dan, I don't think very many REST folks will end up using WSDL 2.0 because of the complexity that they don't get value from) <DanC_lap> (right; the SPARQL WG went to the trouble to use WSDL 2.0 for our protocol, which is restful, and it was clear that we were the only ones doing it.) NM: 1) I think it's appropriate to continue to host WS* activities at the W3C NM:. NM:. NM: 4) In the end, the adoption or lack thereof of technologies like URIs vs. EPRs with identifying refparms will and should be driven by users who do or don't value the things they can achieve with one approach or another. 
I agree with Tim that I don't see much more that the TAG should be doing in the short run to promote such implementation, not because it isn't important, but because we aren't the group to do it. VQ: I don't think we should encourage WS to go elsewhere, so we should work with them VQ: We should try to reiterate the values of WebArch, and try to convince the WS folks of those points ... Look closely at the charter, and at their work as it goes along HST: Agree with most of what's been said ... One pain point that would make a difference to fix is to have a RESTful stack to make the crossover integration easier HST: Not really the TAG's role, however, but we could brainstorm in that area, perhaps ER: My team use WS all the time, I don't think the situation is particularly broken ER: Right tool for the job ... is not always one tool NW: exploit it. . <Ed> Also as a user of web services, I have several services which I would prefer to 'not' have a URI reference for and certainly not pass all parameters in a URL to a service. not to mention security, partial packet encryption etc. PlH: W3C's decision to do WS Transfer will depend on getting all the right people involved ... I agree the TAG shouldn't spend a lot of time trying to convince WS folks of the value of WebArch, but ... spending some time is good, e.g. finding and asking for a fix for the WS Transfer example <DanC_lap> (does the ws-transfer spec solicit comments? I can't find a 'please send comments/feedback to XYZ' address in ) PlH: But what is important is, for example in the Web of Services workshop, to give some answers to the kinds of questions we get, e.g. PlH: "So you say I should give all my services a name, how do I do that?" PlH: There is real value in having the HTTP binding in WSDL2.0, to support SOAP and http access to the same service, but ...
WADL is also important, because it's resource-orientated, as opposed to service-orientated PlH: Trying to focus the workshop on use cases/requirements <Noah> The key is that the resource be identified using the same URI for all of these purposes. That URI can be in an EPR, but as we've said before, any refparms must not contribute to the identification of the resource. PlH: People are being proposed a technology w/o giving them use cases which motivate its use PlH: If the use cases are drawn out, in some cases the answer can be "http will give you all you need" NM: What are the success criteria for the workshop? PlH: To classify use cases -- WebArch, WS, nearly-WebArch-but-lacking-.... ... I'm more interested in the latter, but there will be people who will want more focus on WS <dorchard> (There is not a solicitation for comments. One reason given is IPR concerns) TBL: ER, your choice to use WS across-the-board, or based on particular issues? ER: Sometimes yes, sometimes no TBL: Why the preference for not naming with URIs? ER: I have services on my server whose job is to gate-keep across the firewall, and I don't want public use. NM: We have said in the past that naming and access control should be kept independent ER: We find maintenance and validation is much easier with WS than with REST-based APIs DO: This brings us back to the discussion about WADL and strong typing ER: I find Java folks tend to go for REST and MS shops use WS, because the tooling goes that way [TV Raman joins via Zakim] <dorchard> Which is why I argued why a Web DL would help REST a lot, so they could get client validation of inputs <Norm> I handled the "validate a complex set of parameters" with an XSLT template: TVR: There are very stable RESTful APIs on the Web -- google and in particular yahoo maps DC: So how do you know how to use that API? 
TVR: You look at the HTML form and you see what it's doing HST: QED: You need a human wizard who can do that, whereas ER's point wrt WS is he just takes the WSDL and drops it onto the hotspot of the tooling and out comes the interface PlH: Server configuration is not a solved problem -- cache management is a black art also ... The point is that not only on the client side but also on the server side, for the average developer it's much easier to use WS DO: What's ironic is that the RESTful app could be simpler to develop with the right tooling, but that tooling doesn't exist NM: The center of gravity for WS is heavily stateful and the tooling takes care of that ... And that's not where the center of gravity of REST is DO: People addressed the "Should the TAG do something" question, not the "Is the competition there, and if so is it good or bad?" NM: If the authors of WS Transfer wanted to be as compatible with WebArch as they could, the spec. would look very different TBL: But since they don't want to do that, papering over the cracks is not a good idea DC, PlH: Should W3C be on both sides at once -- not good NM: Still interested in the intersection, where the potential for synergy is being missed ER: We do publish some WS services, for enterprise partners TBL: Available to me as a consumer? ER: Not intentionally, but if you tried you might succeed TBL: Google maps is query-only, on a huge scale ... there's no authentication ... TAG could say -- two well-defined spaces, one where WS is right, one where REST/AJAX is right, and only if you fall into the middle is there a real problem ER: That's what I was saying as well TVR: I think the query vs. update dimension is by far the most important discriminator ... Consider Google Data, which has update functionality, they're not particularly RESTful <DanC_lap> (hmm... AtomPub is pretty RESTful, no?) DO: Async vs. sync.
is the other really important dimension NM: EBay is another example, and they offered both RESTful and SOAP APIs, despite being both query and update ... It was tooling based, primarily ... This info comes from a presentation several years ago PlH: I don't think the APIs give the same functionality ... With Amazon, and I could be way out of date on that, you can do everything but checkout <DanC_lap> (WEBDAV doesn't use GET for query operations; it's less RESTful than AtomPub) <Noah> I think Henry may be right. The following regarding eBay REST APIs is at <Noah> Currently, only the GetSearchResults call is supported via REST. <Noah> That suggests that the full eBay auction capability isn't available via REST. I'm surprised, but that's what it appears to say. Thanks. VQ: That's the end of the discussion of this topic <Noah> But, eBay also seems to have a so-called XML API at that claims to allow: <Noah> # Submit items for listing on eBay <Noah> # Get the current list of eBay categories <Noah> # View information about items listed on eBay <Noah> # Get high bidder information for items you are selling <Noah> # Retrieve lists of items a particular user is currently selling through eBay <Noah> # Retrieve lists of items a particular user has bid on <Noah> # Display eBay listings on other sites <Noah> # Leave feedback about other users at the conclusion of a commerce transaction <Noah> Not sure how the XML API is different from the REST API VQ: Suspended until 1330 ... 1,2 March is now at risk ... NW to scribe tomorrow a.m. ... Stuart Williams to be invited to join us by 'phone Wed. a.m. to discuss f2f scheduling ... DC to scribe Tuesday p.m. ========================= TBL: [Picture on whiteboard]

<---Map---------Blog------Ebay-------Back end--->
Public                          Partners
Query                           Query+Update+....
                                Authenticate, Session
V. high rate                    relatively low
Synchronous                     Async

HT displays an HTML test document... roughly...

<H1>test</H1>
<SCRIPT>document.write("<P>")</SCRIPT>
...
HT: despite the appearance of mismatched tags, this document is DTD-valid. ... note section 18.2.4 of the HTML 4 spec: "... 3. The generated CDATA is re-evaluated" Noah: remind me? a <p> implies the end of the preceding p element? HT: yes ... "HTML documents are constrained to conform to the HTML DTD both before and after processing any script elements" TVR: I doubt the implementations follow that part of the spec HT: quite. TVR: evaluation is multi-pass. e.g. document.write("<scr"); document.write("ipt>") HT: this iterative evaluation is fairly widely understood and used, which makes life more complicated than I had thought. TVR: I think some folks have noticed it, but probably not that many [?] NM: what's the corresponding story in XHTML? HT and Norm disagree about the answer to NM's question. HT, Norm, and NDW basically agree that XHTML is somewhat underspecified in the area of <script> TVR: one goal is to make this sort of mess fade away by making it less necessary HT: but one of the main uses of it is google ads TVR: yes, it's there for compatibility with older browsers ... 2 to 7 years old. ... these ugly idioms persist because, when faced with deploying nice code on 10% of browsers or something ugly on 95%, it's cost-effective to deploy the ugly stuff. TimBL: one of the horrible things about the state-of-the-art is the lack of import/include... ... each of google maps and yahoo's js lib has an import idiom, but they're incompatible. DanC: I gather the javascript designers are working on that TVR: but it's not clear that it will ever be deployable DaveO: so... what's the half-life of a browser? how long do these things take to roll out? TVR: well, it was [5? 8?] years ago when W3C said "let's move to XHTML" and we're not there HT: have people looked at the HTML 5 spec? recently? 
(a few hands go up) <ht> HT: note the lack of a formal grammar DanC: well, it's only a perl script away HT: while this spec doesn't give a grammar for the input, there's fairly clearly a structure to the output; DOM trees. It seems valuable to write a grammar for those. DanC: for example, can this algorithm ever produce a BLOCKQUOTE inside a P? HT: seems clear author's intent that no, it never does. ... recall that some start tags are implicit ... so just <p>....</p> is an HTML 4 doc DanC: you need a title. we could fix that, so that the concatenation of two HTML documents is an HTML document. TimBL: and you could fix it so that ordinary plain text is HTML HT: John Cowan's tag-soup is like Hixie's algorithm, but it's table-driven, which seems more appealing [from a QA perspective] ... so it seems interesting to see if there's anything in Hixie's approach that can't be captured in tables with Cowan's engine. TVR: note also 3 other relevant implementations: Opera, Mozilla, IE HT: the tables fit better with the Principle of Least Power than C code TVR: working toward a cleaned up XHTML markup made sense, but the complexity of HTML 5 puts it beyond what I can see with computer science techniques TBL: I'm interested in factoring the bytes->Dom transformation into parts that have interesting properties like commutativity... ... consider <div><crypt key=5>...</crypt></div> ... which might, after decryption, become <div><script>doc.write(...)</script></div> ... (whiteboard discussion...) HT: consider <div><crypt .../>text</div>... ... then <div><script .../>text</div>... ... then <div><p>text</div>, which means <div><p>text</p></div> TVR: consider the way php emerged after people got tired of code full of print statements... ... I expect we'll see a parallel evolution on the client, with templates <Zakim> Norm, you wanted to ask if we're interested in general XML on the web or just HTML NDW: can we stem the tide of tag soup at HTML?
or should I expect all XML languages to get corrupted likewise? HT: people copy and paste HTML... TVR: exactly; the tag soup corrupts SMIL and RSS presently DO: we have the same issue in [some portal ?] TimBL: I'm not giving up on moving gently toward clean XML HT: I'd like to have a situation where only ?HTML is allowed to be soupy, so at worst we have islands of uncertainty in the midst of otherwise well-formed XML, to which the fixup applies. The problem is telling where the _end_ of the island is NM: I'm not sure how to make a market for clean XML <DanC_> )) <Zakim> DanC_lap, you wanted to note every XML dialect that gets popular seems to get messy; it's evidently cost-effective to have the readers clean up after the authors and to note the security motivation; e.g. code that scrubs HTML in blog comments <Zakim> Noah, you wanted to ask about extensibility of TAG soup NM: I can see the appeal of TAG soup and why it's the pragmatic direction given existing content conventions and tooling. NM: That said, I'm surprised that the tag soup guys aren't telling a story about extensibility. While namespaces are clumsy for users, they do provide for much more distributed evolution of markup conventions. Without namespaces, there's a risk that you can't do CDF-like things, and that standardization of all sorts of new markup in HTML will have to be centralized. NDW: "div and span and class, CSS, and that's all you need" <Zakim> raman, you wanted to add that quoting attrs etc is cosmetic. the content splicing e.g. rss, smil and the mismatched tree is the real issue NM: I'm less concerned than I was that XAML will in the next year or so be a threat in the marketplace to HTML as we know it. I'd like to think HTML can evolve to address some of those needs over time ... does the tag soup community accept that HTML 4 or 5 is as far as it gets? they're not interested in evolution by components? 
HT: I think folks like Dean Jackson are absolutely in this because they care about composability, SVG, and XForms, and he doesn't want to get too far from the tag soup community TVR: I wonder if that's really Dean's position <Zakim> DanC_lap, you wanted to say something about how I have made peace with standards stuff being perennially a minority of what's going on, always dominated by pop culture <Noah> I'm a bit afraid that in trying to build something that is more practical for current content authors and tools, I'm concerned that they are leaving behind the whole goal of a compositional framework with distributed extensibility. <Noah> I'm not trying to position XAML or Second Life so much as threats. More, they and Flash etc, are indicators that content will increasingly move to frameworks that can grow with technology. TBL: when I talked to Jun Murai who runs a large internet exchange, he said most of the bytes are ripped DVDs; illegal. <ht> s/[John ? ht help?]/Marai (sp?) at W3C10 Asia]/ <dorchard> <dorchard> <raman> I'm heading for lunch DaveO: I took a crack at moving XML stuff from part 1 to part 2 NM: some XML stuff is still in 3.1 DaveO: well, this was a quick edit; is the TOC close? And there are schema design issues beyond XML NM: what's the split between "Questions" and "Decisions"? hmm... DaveO explains... NM: so maybe "Questions about goals" and "design options"... DaveO: or "requirements" and "design" NM: ... schema validation, text set, defined set... ... should stuff that's outside the defined set be valid? [?] ... some users say "for stuff in the accept set but outside the defined set, yes, the schema language should help me handle that" ... and other users want everything outside the defined set to be invalid. ... that's a useful point to make. DaveO: yeah; where/how? NM: well, that's why I'm trying to understand the split between section 2 and section 3?
DaveO: I can see it as a requirement DanC: that appeals to me <Zakim> DanC_lap, you wanted to look at boxed items NM: perhaps "decide carefully whether your language needs to be extensible" after pointing out that extensibility has a cost DO: how about "you should design your _xml_ language for extensibility"? DC: yes, that appeals; there's not much reason to do the redundancy of XML if you're not interested in extensibility NM: as long as we don't imply that 100% of all XML languages should be extensible ... but I'd buy around 99% ... whereas for languages in general it's more like 70% [not taking the numbers too seriously] DO: more and more, the stuff I'm interested in is in the XML-specific part ER: that reminds me of what I want: minor versions are compatible, incompatible versions are major versions TBL: [missed] <Ed> A.B.C ; A=major version, if it's no longer backwards compatible you change the major version. NM: that major/minor concept applies reasonably well when control of changes to the language is centralized. But XML allows much more de-centralized evolution <Ed> B = minor version, if it's backwards compatible but you've added/changed functionality you change the minor version <Ed> c = build, functionality is largely the same, clarifications or bug fixes. NM: I don't think version numbers cover much of the interesting cases. ... and the finding discusses a lot of the richness DO: yes; section 8 ( ) discusses extensibility and versioning; in the name example, there are no version numbers discussion of ... a SOAP must-understand example... [which is tricky to follow, as it's hard to tell how many languages are being discussed, which is part of the point] NM: perhaps, Ed, using version numbers is just one way of doing self-describing messages.
<dorchard> <Zakim> DanC_lap, you wanted to suggest that using major/minor version numbers as an example to illustrate the "compatibility" term seems worth the screen-space <dorchard> DO: major/minor and version numbers are discussed in "7 Versioning" "Version identification has traditionally been done with a decimal separating the major versions from the minor versions"... Ed: so let's advocate that as the standard way to do versioning DanC: the document only implicitly acknowledges that this is a normal way to do versioning. it mostly says "watch out!" Norm: yes, the choice of 1.1 for XML 1.1 was because we knew 2.0 would scare people away ... and people did, in the end, stay away. DanC: the XML 1.1 case argues that Ed's advice is valuable; if the XML Core WG had been constrained to make 1.1 compatible or fess up and call it 2.0, life might have been better DaveO: [missed; noah, help?] ... people go right to version numbers and don't use namespaces. I'd like more people to know the options with namespaces.] NDW: major.minor is sufficient for DocBook. We do have a namespace, but just one. NM: qnames can be minted completely independently, unlike version numbers <dorchard> [I lost track somehow] 2.3 Version Strategy: all new components in new or existing namespace(s) for each compatible version (#3) <raman> suspect there isn't much point in dialing in this late? <DanC_> hard to say, raman; we're in the middle of versioning, scheduled to stop in 37 minutes <raman> will beg off for the day. <Vincent> bye, TV DO: one of the main reasons people don't use namespaces to express version changes is XPath. NM: yeah; I wrote something about that in 1999 [pointer?] [... missed some...] DO: yes, I'd run 2 services in parallel for transition between [versions?] ... UBL changes the namespace for every version HT: yes, they use XPath with types. NDW: how did they do that before XPath 2? HT: they don't. ... indeed, that would be painful NM: so they keep the type names still?
HT: yes NDW: I'm not sure ... I think they use custom software and change their stylesheets <Zakim> DanC_lap, you wanted to say yes, XPath was why I accepted that HTML should have one namespace rather than 3; I was reluctant at the time, but I see it as synergistic with "don't make aliases" now. And to think about spending more screenspace on simpler strategies <ht> the Gutentag and Gregory paper about UBL versioning DO: so what advice to give? DanC: I'm not sure; I'm content to just tell stories, maybe NM: yeah; helping people understand why various choices were made is one useful contribution. Sometimes we can generalize and give advice, but sometimes not. round-the-table to conclude this discussion of versioning... DanC: I think this "book" is coming along reasonably well; I'd like to think this discussion gives Dave O. some inspiration to write some more. TimBL: feels like 3 parts, to me: (1) versioning in languages in general, which I think can be crisp and mathematical... ... (2) XML. while version numbers are one pattern, I don't think we can pick one best strategy ... (3) something about bottom-up web-like component design ... [OWL? missed?] DaveO: I want to continue on the refactoring; yes, I got enough input to do some more editing/writing ... I see some answer-collections to give... ... part 1 will be mostly terminology and motivate some issues ... part 2 has XML schema 1.0 stuff; maybe 1.1? [?] NDW: I think this is coming along well; we've got terminology that we're using effectively in conversation DO: so is finishing part 1 in sight? NDW/DC: yes, in a few months NM: let's try to _use_ the terminology from part 1 in part 2 before "freezing" part 1 Ed: without solving the problem and coming to clear conclusions, it's not a finding. Tell me "do version numbers" or "don't do version numbers," and why, [or don't bother.] DO: how about do one example 2 ways, one with vers# and one without, then say pick one? <ht> Ed: I'd be fine with that. That's a conclusion.
HT: I think it's productive to [give terminology about language versions?]. It's going reasonably well.
VQ: I'm glad we seem to have gotten unstuck.
NM: I agree with much of what Dan C. said. I disagree with Ed; worse than giving no advice is giving advice that's artificially crisp [?].
... the terminology is working; I say "the difference between the defined and the accept set" often enough that I want a name for it
... more editor signals about which sections are how baked would be helpful.
http://www.w3.org/2001/tag/2006/12/12-tagmem-minutes
Using (return val of) member function as default parameter of member function 11 November, 2009

C++ allows the programmer to do some really cool things. When writing code I try to follow the Google C++ Style Guide, so I haven't gotten much experience with the fringe areas of default parameters. A question was recently asked on the comp.lang.c++.moderated usenet group where the OP wanted to place a member function as the default argument, a la:

class foo {
  int getInteger();
  void doSomething(int i = getInteger()) { }
};

Many of the responses said that he should overload doSomething as a nullary function and call the original doSomething with the return value of the member function. Within most of these comments was one from Neil Butterworth, who mentioned that the reason this isn't possible is because the this pointer is not available until the body of the function. He offered that the OP could make getInteger a static function.

class foo {
  static int getInteger();
  void doSomething(int i = getInteger()) { }
};

And if you don't believe him, you can use the Comeau C/C++ Online Compiler to see for yourselves. I thought this was really cool. While I may not have a use for it at the moment, it is questions like these that are great conversation starters. About a month ago I subscribed to the clcm mailing list and I now recommend it to others as a way to learn different uses of C++ and interesting conversations about the language.
By far, below is the most common use of templates within C++ code:

namespace std {
  template <typename T>
  class vector { /* ... */ };
}

std::vector<int> primeNumbers;

But what if we wanted to derive our own type, with a templated class being the base type? See this example:

template <typename T>
class WindowImpl {
public:
  int id_of_windowImpl;

  static LPCTSTR GetWindowCaption() { return NULL; }

  void DrawWindow() {
    LPCTSTR windowTitle = T::GetWindowCaption();
    /* Draw window and place title at top of UI */
  }
};

Did you notice that there was no mention of the keyword virtual? This is where our little trick comes in. Let's say we wanted to make our own boring derivation of Window, one that has the title of "Basic Window". We could simply do this:

class BasicWindowImpl : public WindowImpl<BasicWindowImpl> {
public:
  int id_of_basicWindowImpl;

  static LPCTSTR GetWindowCaption() { return _T("Basic Window"); }
};

BasicWindowImpl basicWindow;

When the code is compiled, the static method GetWindowCaption will be called on the derived type. If the method isn't implemented in the derived type, then the C++ lookup rules state that it will look further up the class hierarchy and find the base implementation that returns NULL. Also, this whole practice is made possible since immediately following the colon, the type has been declared and the typename can be used as a template parameter.

What does this mean in terms of efficiency/footprint? First a short high-level overview of how virtual functions are implemented. When a method inside of a class is declared virtual, a special data type is added to the class. This data type stores function offsets from the base of the class. When a call is made using a base type, this virtual function table (vftbl or vtable for short) is referenced and the memory address of the function is retrieved and executed. Each instance of a class will hold a pointer to this table (often called a vpointer).
This pointer is usually initialized at the end of the constructor by some hidden code that your C++ compiler generates.

On a 32-bit machine with 32-bit ints and no virtual functions: basicWindow is 8 bytes (4 bytes for each int that it holds).

On a 32-bit machine with 32-bit ints and virtual functions: basicWindow is 12 bytes (the same 8 bytes plus 4 bytes for the vpointer). Also, there is another 4 bytes used for the vtable and its one function pointer that is stored and pointing to BasicWindowImpl::GetWindowCaption.

In summary, by using compile-time virtual function calls, the sample code drops the 4-byte-per-instance vpointer overhead entirely. The vtable pointer does not consume much memory, but if there were 1,024 BasicWindowImpl instances created, then an extra 4 KB of data would be unnecessarily used. Also, there is no need to look up the function address in the table due to the static linking of the overridden function at compile-time. This brings you a smaller and faster executable by making your compiler work just a little harder.
https://msujaws.wordpress.com/tag/c-plus-plus/
NAME

XML::Parser::EasyTree - Easier tree style for XML::Parser

SYNOPSIS

use XML::Parser;
use XML::Parser::EasyTree;
$XML::Parser::EasyTree::Noempty=1;
my $p=new XML::Parser(Style=>'EasyTree');
my $tree=$p->parsefile('something.xml');

DESCRIPTION

XML::Parser::EasyTree adds a new "built-in" style called "EasyTree" to XML::Parser. Like XML::Parser's "Tree" style, setting this style causes the parser to build a lightweight tree structure representing the XML document. This structure is, at least in this author's opinion, easier to work with than the one created by the built-in style.

When the parser is invoked with the EasyTree style, it returns a reference to an array of tree nodes, each of which is a hash reference. All nodes have a 'type' key whose value is the type of the node: 'e' for element nodes, 't' for text nodes, and 'p' for processing instruction nodes. All nodes also have a 'content' key whose value is a reference to an array holding the node's children (for element nodes) or a string holding the node's text (for text and processing instruction nodes). EasyTree nodes are ordinary Perl hashes and are not objects.

Contiguous runs of text are always returned in a single node.

The reason the parser returns an array reference rather than the root element's node is that an XML document can legally contain processing instructions outside the root element (the xml-stylesheet PI is commonly used this way).

If the parser's Namespaces option is set, element and attribute names will be prefixed with their (possibly empty) namespace URI enclosed in curly brackets.

SPECIAL VARIABLES

Two package global variables control special behaviors:

- XML::Parser::EasyTree::Latin

If this is set to a nonzero value, all text, names, and values will be returned in ISO-8859-1 (Latin-1) encoding rather than UTF-8.

- XML::Parser::EasyTree::Noempty

If this is set to a nonzero value, text nodes containing nothing but whitespace (such as those generated by line breaks and indentation between tags) will be omitted from the parse tree.
EXAMPLE

Parse a prettyprinted version of the XML shown in the example for the built-in "Tree" style:

#!perl -w
use strict;
use XML::Parser;
use XML::Parser::EasyTree;
use Data::Dumper;

$XML::Parser::EasyTree::Noempty=1;

my $xml=<<'EOF';
<foo>
  <head id="a">Hello <em>there</em>
  </head>
  <bar>Howdy<ref/>
  </bar>
  do
</foo>
EOF

my $p=new XML::Parser(Style=>'EasyTree');
my $tree=$p->parse($xml);
print Dumper($tree);

Returns:

$VAR1 = [
  {
    'name' => 'foo',
    'type' => 'e',
    'content' => [
      {
        'name' => 'head',
        'type' => 'e',
        'content' => [
          {
            'type' => 't',
            'content' => 'Hello '
          },
          {
            'name' => 'em',
            'type' => 'e',
            'content' => [
              {
                'type' => 't',
                'content' => 'there'
              }
            ],
            'attrib' => {}
          }
        ],
        'attrib' => {
          'id' => 'a'
        }
      },
      {
        'name' => 'bar',
        'type' => 'e',
        'content' => [
          {
            'type' => 't',
            'content' => 'Howdy'
          },
          {
            'name' => 'ref',
            'type' => 'e',
            'content' => [],
            'attrib' => {}
          }
        ],
        'attrib' => {}
      },
      {
        'type' => 't',
        'content' => ' do '
      }
    ],
    'attrib' => {}
  }
];

AUTHOR

Eric Bohlman (ebohlman@omsdev.com)

Copyright (c) 2001 Eric Bohlman. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

SEE ALSO

XML::Parser
https://metacpan.org/pod/XML::Parser::EasyTree
This C++ program finds the permutations of a given character string. The program takes in a character string and prints its permutations. The function responsible for printing the permutations is given the string, a starting index, and the terminating index of the string. The permutation function loops over the string, swapping the character at the starting index with each later character and calling itself recursively with the starting index incremented. The terminating condition of this recursive function is reached when the starting index equals the terminating index, at which point the string is printed. To restore the previous state of the string as soon as the recursive call returns, the swapped characters are swapped back. Here is the source code of the C++ program, which is successfully compiled and run on a Linux system. The program output is also shown below.
https://www.sanfoundry.com/cpp-program-find-permutation-of-string-characters/
CC-MAIN-2018-34
refinedweb
309
57.4
Feature #7882 (closed)

Allow rescue/else/ensure in do..end

Description

The keywords rescue, else and ensure can be used when defining methods like so:

def foo
  #
rescue
  #
else
  #
ensure
  #
end

However when using a block delimited by do..end, you must use begin..end as well:

foo do
  begin
    # ...
  rescue
    # ...
    # ...
  end
end

It would be nice to be able to drop the extra begin..end and use rescue, etc. clauses directly:

foo do
  # ...
rescue
  # ...
  # ...
end

I cannot think of any ambiguities this syntax would cause, but please correct me if I am wrong.
end rescue puts 'with_transaction raised outside the yield block' Updated by phluid61 (Matthew Kerwin) almost 9 years ago mame (Yusuke Endoh) wrote: I have suggested the same proposal (in Japanese [ruby-dev:31393]). Matz said in [ruby-dev:31423] that it is not clear (to him) ... Definitely the latter. The rescue statement in the block should only rescue errors that occur inside the block. This is more apparent if you consider that: loop do rescue finally end is equivalent to: x = proc do rescue finally end while true x.call end Similarly replacing ' while' with a method, such as #each; the ' rescue' in the block should not expect to catch exceptions in the implementation of ' each', only the exceptions raised in the body of the block. Updated by hsbt (Hiroshi SHIBATA) over 8 years ago - Target version changed from 2.1.0 to 2.2.0 Updated by shyouhei (Shyouhei Urabe) almost 6 years ago - Has duplicate Feature #11337: Allow rescue without begin inside blocks added Updated by shyouhei (Shyouhei Urabe) almost 6 years ago - Has duplicate Feature #12623: rescue in blocks without begin/end added Updated by nobu (Nobuyoshi Nakada) almost 6 years ago Updated by shyouhei (Shyouhei Urabe) over 5 years ago - Has duplicate Feature #12906: do/end blocks work with ensure/rescue/else added Updated by Nondv (Dmitry Non) over 5 years ago So... Is there any movement? Updated by nobu (Nobuyoshi Nakada) over 5 years ago - Status changed from Assigned to Closed Also available in: Atom PDF
https://bugs.ruby-lang.org/issues/7882
CC-MAIN-2022-27
refinedweb
599
61.87
"using" namespace equivalent in ASP.NET markup

When I'm working with DataBound controls in ASP.NET 2.0 such as a Repeater, I know the fastest way to retrieve a property of a bound object (instead of using Reflection with the Eval() function) is to cast the DataItem object to the type it is and then use that object natively, like the following:

<%#((MyType)Container.DataItem).PropertyOfMyType%>

The problem is, if this type is in a namespace (which is the case 99.99% of the time) then this single statement becomes a lot longer due to the fact that the ASP page has no concept of class scope so all of my types need to be fully qualified:

<%#((RootNamespace.SubNamespace1.SubNamespace2.SubNamespace3.MyType)Container.DataItem).PropertyOfMyType%>

Is there any kind of using directive or some equivalent I could place somewhere in an ASP.NET page so I don't need to use the full namespace every time?
https://code.i-harness.com/en/q/523c
CC-MAIN-2019-39
refinedweb
224
56.45
from lightning import Lightning
from numpy import random

Lightning was designed as an API-based visualization server, to which data is posted, and from which visualizations are returned. However, there are many use cases where operating without a server is desirable. For example, when doing data analysis locally, or when we're using notebooks like Jupyter. For this use case, Lightning offers a "local" mode that doesn't require a server, or even internet access. This is a particularly easy way to get started with Lightning because it only requires a client installation! Once you've installed the Python client with pip, all you need to do is set local mode to true.

lgn = Lightning(ipython=True, local=True)
http://nbviewer.jupyter.org/github/lightning-viz/lightning-example-notebooks/blob/master/misc/severless.ipynb
We've got a lot to cover and not much time to do it so let's dig right in. Here's a game I'm playing online with my friend, Ken. I'm winning by a little bit mostly because I got good letters like the Z and so on but Ken is catching up. Let's dig right in and come up with our concept inventory. What have we got? Now let's talk about how to implement any of these, see if there are any difficulties, any areas that we think might be hard to implement. The board can be some kind of two-dimensional array, maybe a list of lists is one possibility. One thing I'm not quite clear on now is do I need one board or two? It's clear I need one board to hold all the letters, but then there's also the bonus squares. Should that be part of the same board or should that be a separate board and the letters are layered on top of this background of bonus squares? I'm not quite sure yet, but I'm not too worried about it, because I can make either approach work. A letter can be a one-character string. A word can be a string. A hand can also be a string. It could also be a list of letters. Either one would be fine. Any collection of letters would be okay. Note that a set would not work for the hand. The hand can't be a set of letters, because we might have duplicates, and sets don't allow duplicates. Now, for the notion of a legal play, we'll have some function that generates legal plays, given a board position and a hand, and then the plays themselves will need some representation. Maybe they can be something like a tuple of, say, starting position--for example, "RITZY" starts in this location--the direction in which they're going--are they going across or down, the two allowable directions--and the word itself. In this case, RITZY. That seems like a good representation for a legal play. I'm not quite sure yet what the representation of a position or a direction should be, but that's easy enough. A score--we'll have some function to compute the score.
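To make this representation talk concrete, here is a small sketch. The coordinates, the direction name, and the letter values (apart from Z being worth 10, which comes up next) are placeholder assumptions for illustration, not design decisions:

```python
# Illustrative sketch of the representations discussed above.
# Coordinates, the direction name, and the letter values other than Z=10
# are placeholder assumptions, not final choices.
hand = 'LETTERS'                          # a hand: just a string of letters
board = [['.'] * 15 for _ in range(15)]   # a board: a 2-D array (list of lists)
play = ((3, 7), 'ACROSS', 'RITZY')        # a play: (start position, direction, word)

POINTS = {'R': 1, 'I': 1, 'T': 1, 'Z': 10, 'Y': 3}  # placeholder letter values

def naive_score(word):
    "Sum the letter values; ignores bonus squares and the blank complication."
    return sum(POINTS[L] for L in word)
```

With these placeholder values, naive_score('RITZY') comes to 16.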
For letters, we can have a dictionary that says the value of Z is 10. For plays we'll need some function to compute that. For the bonus squares, we'll need some mapping from a position on the board to double word or triple letter or whatever. A dictionary is a set of words. The blank letter--well, we said letters were strings, so that's probably okay. We could use the string space or the string underscore, to represent the blank. Then it's dealing with it that will be an issue later on. Now, I'm a little bit worried about blanks, because in poker Jokers were easy. We just said, replace them by any card and just deal with all the possibilities. Our routines are fast enough that we could probably deal with them all. Here I'm pretty confident we can make it fast enough that that approach will work, but it doesn't quite work because not only do we have to try all possibilities for the letter, but the scoring rules are actually different. When you use a blank instead of a letter, you don't get the letter scores for that blank. We'll have to have scoring know about blanks and not just know about filling things in. That'll be a complication. But overall I went through all the concepts. I've got an implementation for each. Some of them are functions that I don't quite know how to do, but I don't see anything that looks like a showstopper. I think I can go ahead. The difficulty then is not that I have to invent something new in order to solve one of the problems. The difficulty is just that there's so much. When faced with a problem of this size--and problems can be much larger--the notion of pacing is an important one. What do I mean by that? It means I want to attack this, and I know I'm not going to solve it all at once. I'm not just going to sit down for 20 minutes and knock out the whole problem. It's going to be a lot longer than that.
I want to have pacing in that I have intermediate goals along the way where I can say, okay, now I'm going to focus on one part of the problem, and I'm going to get that done. Then when I'm done with that part, then I can move on to the next part. If you don't have that pacing, you can lose your focus. You can get discouraged that there's so much left to do. But if you break it up into bite-sized pieces, then you can say, okay, I'm almost there. I just have to finish a little bit more, and now this piece will be done, and then I can move on to the next piece. The first piece I'm going to look at is finding words from a hand. In other words, I'm going to ignore the whole board. I'm going to say pretend the board isn't there and pretend all we have is the hand, and we have the dictionary, a set of legal words. I want to know out of that hand, what words in the dictionary can I make? Let's get started. The first thing I need is to come up with a dictionary of all the words. Now, we've created a small file with about 4,000 words in it, called "words4k.txt." Let's take that file, read it, convert it to uppercase, because Scrabble and Words with Friends use only uppercase letters, split it into a list of words, assign that to a global variable--we'll call it WORDS and put it in all uppercase, just to make sure that it stands out. Let's make this a set so that access to it is easy. We can figure out very quickly whether a word is in the dictionary. Okay, so now we're done.

WORDS = set(file('words4k.txt').read().upper().split())

We have our words. Then I want to find all the words within a hand. So the hand will be seven letters, and I want to find all the words of seven letters or less that can be made out of those letters. I'm going to start with a very straightforward approach, and then we're going to refine it over time. Here is what I've done:

def find_words(hand):
    "Find all words that can be made from the letters in hand."
    results = set()
    for a in hand:
        if a in WORDS: results.add(a)
        for b in removed(hand, a):
            w = a+b
            if w in WORDS: results.add(w)
            for c in removed(hand, w):
                w = a+b+c
                if w in WORDS: results.add(w)
                for d in removed(hand, w):
                    w = a+b+c+d
                    if w in WORDS: results.add(w)
                    for e in removed(hand, w):
                        w = a+b+c+d+e
                        if w in WORDS: results.add(w)
                        for f in removed(hand, w):
                            w = a+b+c+d+e+f
                            if w in WORDS: results.add(w)
                            for g in removed(hand, w):
                                w = a+b+c+d+e+f+g
                                if w in WORDS: results.add(w)
    return results

I haven't worried about repeating myself and about making the code long. I just wanted to make it straightforward. Then I said, the first letter a can be any letter in the hand. If that's a word, then go ahead and add that to my set of results. I start off with an empty set of results, and I'm going to add as I go. Otherwise, b can be any letter in the result of removing a from the hand. Now the word that I'm building up is a + b--a two-letter word. If that's a word, add it. Otherwise, c can be any letter in the hand without w in it--the remaining letters in the hand. The new word is a + b + c. If that's in WORDS, then add it, and we just keep on going through, adding a letter each time, checking to see if that's in WORDS, adding them up. Here's my definition of removed:

def removed(letters, remove):
    "Return a str of letters, but with each letter in remove removed once."
    for L in remove:
        letters = letters.replace(L, '', 1)
    return letters

It takes a hand or a sequence of letters and then the letter or letters to remove. For each of those letters just replace the letter in the collection of letters with the empty string and do that exactly once, so don't remove all of them. Then return the remaining letters. Does it work? Well, if I find words with this sequence of letters in my hand, it comes back with this set.

>>> find_words('LETTERS')
set(['ERS', 'RES', 'RET', 'ERE', 'STREET', 'ELS', 'REE', 'SET', 'LETTERS', 'SER', 'TEE', 'RE', 'SEE', 'SEL', 'TET', 'EL', 'REST', 'ELSE', 'LETTER', 'ET', 'ES', 'ER', 'LEE', 'EEL', 'TREE', 'TREES', 'LET', 'TEL', 'TEST'])
>>>

That looks pretty good. It's hard for me to verify right now that I found everything that's in my dictionary, but it looks good, and I did a little bit of poking around in the dictionary for likely things, and all the words I could think of that weren't in this set were not in the dictionary. That's why they weren't included. That looks pretty good. I'm going to be doing a lot of work here, and I'm going to be modifying this function and changing it. I'd like to have a better set of tests than just one test. I made up a bigger test. I made up a dictionary of hands that map from a hand to a set of words that I found.
>>> find_words('LETTERS') set(['ERS', 'RES', 'RET', 'ERE', 'STREET', 'ELS', 'REE', 'SET', 'LETTERS', 'SER', 'TEE', 'RE', 'SEE', 'SEL', 'TET', 'EL', 'REST', 'ELSE', 'LETTER', 'ET', 'ES', 'ER', 'LEE', 'EEL', 'TREE', 'TREES', 'LET', 'TEL', 'TEST']) >>> That looks pretty good. It's hard for me to verify right now that I found everything that's in my dictionary, but it looks good, and I did a little bit of poking around in the dictionary for likely things, and all the words I could think of that weren't in this set were not in the dictionary. That's why they weren't included. That's looks pretty good. I'm going to be doing a lot of work here, and I'm going to be modifying this function and changing it. I'd like to have a better set of tests than just one test. I made up a bigger test. I made up a dictionary of hands that map from a hand to a set of words that I found. hands = { ## Regression test 'ABECEDR': set(['BE', 'CARE', 'BAR', 'BA', 'ACE', 'READ', 'CAR', 'DE', 'BED', 'BEE', 'ERE', 'BAD', 'ERA', 'REC', 'DEAR', 'CAB', 'DEB', 'DEE', 'RED', 'CAD', 'CEE', 'DAB', 'REE', 'RE', 'RACE', 'EAR', 'AB', 'AE', 'AD', 'ED', 'RAD', 'BEAR', 'AR', 'REB', 'ER', 'ARB', 'ARC', 'ARE', 'BRA']), 'AEINRST': set(['SIR', 'NAE', 'TIS', 'TIN', 'ANTSIER', 'TIE', 'SIN', 'TAR', 'TAS', 'RAN', 'SIT', 'SAE', 'RIN', 'TAE', 'RAT', 'RAS', 'TAN', 'RIA', 'RISE', 'ANESTRI', 'RATINES', 'NEAR', 'REI', 'NIT', 'NASTIER', 'SEAT', 'RATE', 'RETAINS', 'STAINER', 'TRAIN', 'STIR', 'EN', 'STAIR', 'ENS', 'RAIN', 'ET', 'STAIN', 'ES', 'ER', 'ANE', 'ANI', 'INS', 'ANT', 'SENT', 'TEA', 'ATE', 'RAISE', 'RES', 'RET', 'ETA', 'NET', 'ARTS', 'SET', 'SER', 'TEN', 'RE', 'NA', 'NE', 'SEA', 'SEN', 'EAST', 'SEI', 'SRI', 'RETSINA', 'EARN', 'SI', 'SAT', 'ITS', 'ERS', 'AIT', 'AIS', 'AIR', 'AIN', 'ERA', 'ERN', 'STEARIN', 'TEAR', 'RETINAS', 'TI', 'EAR', 'EAT', 'TA', 'AE', 'AI', 'IS', 'IT', 'REST', 'AN', 'AS', 'AR', 'AT', 'IN', 'IRE', 'ARS', 'ART', 'ARE']), 'DRAMITC': set(['DIM', 'AIT', 'MID', 'AIR', 'AIM', 'CAM', 'ACT', 'DIT', 'AID', 
'MIR', 'TIC', 'AMI', 'RAD', 'TAR', 'DAM', 'RAM', 'TAD', 'RAT', 'RIM', 'TI', 'TAM', 'RID', 'CAD', 'RIA', 'AD', 'AI', 'AM', 'IT', 'AR', 'AT', 'ART', 'CAT', 'ID', 'MAR', 'MA', 'MAT', 'MI', 'CAR', 'MAC', 'ARC', 'MAD', 'TA', 'ARM']), 'ADEINRST': set(['SIR', 'NAE', 'TIS', 'TIN', 'ANTSIER', 'DEAR', 'TIE', 'SIN', 'RAD', 'TAR', 'TAS', 'RAN', 'SIT', 'SAE', 'SAD', 'TAD', 'RE', 'RAT', 'RAS', 'RID', 'RIA', 'ENDS', 'RISE', 'IDEA', 'ANESTRI', 'IRE', 'RATINES', 'SEND', 'NEAR', 'REI', 'DETRAIN', 'DINE', 'ASIDE', 'SEAT', 'RATE', 'STAND', 'DEN', 'TRIED', 'RETAINS', 'RIDE', 'STAINER', 'TRAIN', 'STIR', 'EN', 'END', 'STAIR', 'ED', 'ENS', 'RAIN', 'ET', 'STAIN', 'ES', 'ER', 'AND', 'ANE', 'SAID', 'ANI', 'INS', 'ANT', 'IDEAS', 'NIT', 'TEA', 'ATE', 'RAISE', 'READ', 'RES', 'IDS', 'RET', 'ETA', 'INSTEAD', 'NET', 'RED', 'RIN', 'ARTS', 'SET', 'SER', 'TEN', 'TAE', 'NA', 'TED', 'NE', 'TRADE', 'SEA', 'AIT', 'SEN', 'EAST', 'SEI', 'RAISED', 'SENT', 'ADS', 'SRI', 'NASTIER', 'RETSINA', 'TAN', 'EARN', 'SI', 'SAT', 'ITS', 'DIN', 'ERS', 'DIE', 'DE', 'AIS', 'AIR', 'DATE', 'AIN', 'ERA', 'SIDE', 'DIT', 'AID', 'ERN', 'STEARIN', 'DIS', 'TEAR', 'RETINAS', 'TI', 'EAR', 'EAT', 'TA', 'AE', 'AD', 'AI', 'IS', 'IT', 'REST', 'AN', 'AS', 'AR', 'AT', 'IN', 'ID', 'ARS', 'ART', 'ANTIRED', 'ARE', 'TRAINED', 'RANDIEST', 'STRAINED', 'DETRAINS']), 'ETAOIN': set(['ATE', 'NAE', 'AIT', 'EON', 'TIN', 'OAT', 'TON', 'TIE', 'NET', 'TOE', 'ANT', 'TEN', 'TAE', 'TEA', 'AIN', 'NE', 'ONE', 'TO', 'TI', 'TAN', 'TAO', 'EAT', 'TA', 'EN', 'AE', 'ANE', 'AI', 'INTO', 'IT', 'AN', 'AT', 'IN', 'ET', 'ON', 'OE', 'NO', 'ANI', 'NOTE', 'ETA', 'ION', 'NA', 'NOT', 'NIT']), 'SHRDLU': set(['URD', 'SH', 'UH', 'US']), 'SHROUDT': set(['DO', 'SHORT', 'TOR', 'HO', 'DOR', 'DOS', 'SOUTH', 'HOURS', 'SOD', 'HOUR', 'SORT', 'ODS', 'ROD', 'OUD', 'HUT', 'TO', 'SOU', 'SOT', 'OUR', 'ROT', 'OHS', 'URD', 'HOD', 'SHOT', 'DUO', 'THUS', 'THO', 'UTS', 'HOT', 'TOD', 'DUST', 'DOT', 'OH', 'UT', 'ORT', 'OD', 'ORS', 'US', 'OR', 'SHOUT', 'SH', 'SO', 'UH', 'RHO', 'OUT', 'OS', 'UDO', 
'RUT']), 'TOXENSI': set(['TO', 'STONE', 'ONES', 'SIT', 'SIX', 'EON', 'TIS', 'TIN', 'XI', 'TON', 'ONE', 'TIE', 'NET', 'NEXT', 'SIN', 'TOE', 'SOX', 'SET', 'TEN', 'NO', 'NE', 'SEX', 'ION', 'NOSE', 'TI', 'ONS', 'OSE', 'INTO', 'SEI', 'SOT', 'EN', 'NIT', 'NIX', 'IS', 'IT', 'ENS', 'EX', 'IN', 'ET', 'ES', 'ON', 'OES', 'OS', 'OE', 'INS', 'NOTE', 'EXIST', 'SI', 'XIS', 'SO', 'SON', 'OX', 'NOT', 'SEN', 'ITS', 'SENT', 'NOS'])} The idea here is that this test is not so much proving that I've got the right answer, because I don't know for sure that this is the right answers. Rather, this is what we call a regression test, meaning as we change our program we want to make sure that we haven't broken any of these--that we haven't made changes to our functions. Even if I don't know this is exactly the right set, I want to know when I made a change, have I changed the result here. I'll be able to rerun this and say, have we done exactly the same thing. I'll also be able to time the results of running these various hands and see if we can make our function faster. Here is my list of hands. I've got eight hands. Then I did some further tests here. def test_words(): assert removed('LETTERS', 'L') == 'ETTERS' assert removed('LETTERS', 'T') == 'LETERS' assert removed('LETTERS', 'SET') == 'LTER' assert removed('LETTERS', 'SETTER') == 'L' t, results = timedcall(map, find_words, hands) for ((hand, expected), got) in zip(hands.items(), results): assert got == expected, "For %r: got %s, expected %s (diff %s)" % ( hand, got, expected, expected ^ got) return t timedcall(map, find_words, hands) 0.5527249999 I'm testing removing letters--got all those right. Then I'm going through the hands, and I'm using my timedcall() function that we build last time. That returnsin lapsed time and a set of results. I make sure all the results are what I expected. Then I return the time elapsed for finding all the words in those eight hands. It turns out it takes half a second. That kind of worries me. 
That doesn't sound very good. Sure, if I was playing Scrabble with a friend and they reply in a half second, that'd be pretty good. Much better than me, for example. In this game here it says that I haven't replied to my friend Ken in 22 hours. This is a lot better, but still, if we're going to be doing a lot of work and trying to find the best possible play, half a second to evaluate eight hands-- that doesn't seem fast enough. Why is find_words() so slow? One thing is that it's got a lot of nested loops, and it always does all of them. A lot of that is going to be wasteful. For example, let's say the first two letters in the hand were z and q. At the very start here w is z + q, and now I loop through all the other combinations of all the other letters in the hand trying to find words that start with z + q, but there aren't any words in the dictionary that start with zq. As soon as I got here, I should be able to figure that out and not do all of the rest of these nested loops. What I'm going to do is introduce a new concept that we didn't see before in our initial listing of the concepts, but which is an important one--the notion of a prefix of a word. It's important only for efficiency and not for correctness--that's why it didn't show up the first time. The idea is that given a word there are substrings, which are prefixes of the word. The empty string is such a prefix. Just W is a prefix. W-O is a prefix. W-O-R is a prefix. Now, we always have to decide what we want to do with the endpoints. I think for the way I want to use it I do want to include the empty string as a valid prefix, but I think I don't want to include the entire string W-O-R-D. I'm not going to count that as a prefix of the word. That is the word. I'm going to define this function prefixes(word). It's pretty straightforward. Just iterate through the range, and the prefixes of W-O-R-D are the empty string and these three longer strings. Now here's the first bit that I want you to do for me. 
Reading in our list of words from the dictionary is a little bit complicated in that we want to compute two things--a set of words and a set of prefixes for all the words in the dictionary. The set of prefixes for each word--union all of those together. I'm going to put that together into a function readwordlist(), which takes the file name and returns these two sets. I want you to write the code for that function here. Here's my answer. The wordset is just like before. Read the file, uppercase it, and split it. For the prefixset, we go through each word in the wordset and then each prefix of the word, collect that set p of prefixes, and then return them. Now let's see what these prefixes can do for us. I can define a new version of find_words(), and it looks exactly like the one before except that at each level of the loop we add one statement that says: if the word that we've built up so far is not one of the prefixes of a word in the dictionary, then there's no sense doing any of these nested loops. We can continue on to the next iteration of the current loop, and that's what the continue statement does: it says don't do anything below; rather, go back to the for loop that we're nested in and go on to the next iteration of that for loop. Normally, I don't like the continue statement, and normally, instead of saying if w not in prefixes continue, I would've said if w in prefixes then do this, but that would've introduced another level of indentation for each of these seven levels and I'd be running off the edge of the page, so here I grudgingly accepted the continue statement. The code looks just like before. I've just added seven lines. The exact same line indented at different levels goes all the way through a, b, c, d, e, f, and g. Now if I run the test_words function again, I get not half a second but 0.003 seconds. That's nice and fast. That's 150 times faster than before, 2000 hands per second.
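A sketch of readwordlist along these lines. The course version takes a filename and reads the file; here it takes the raw dictionary text directly so the sketch is self-contained:

```python
def readwordlist(text):
    """From raw dictionary text, return the pair (wordset, prefixset).
    (Assumed variant: takes the text itself rather than a filename.)"""
    wordset = set(text.upper().split())
    # Every proper initial substring of every word, '' included.
    prefixset = set(word[:i] for word in wordset for i in range(len(word)))
    return wordset, prefixset
```

Note that the full word is deliberately not in its own prefix set entry; '' is.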
The function is long and ugly, but it's fast enough. But still I'd like to clean it up. I don't like repeating myself with code like this. I don't like that this only works for exactly seven letters. I know that I may need more than that because there's only seven letters in a hand, but sometimes you combine letters in a hand with letters on the board. This function won't be able to deal with that. In order to improve this function I have to ask myself, "What do each of the loops do?" "And can I implement that in another way other than with nested loops?" The answer seems to be that each of the nested loops is incrementing the word by one letter, from abcd to abcde, and then it's checking to see if we have a new word, and it's checking to see if we should stop if what we have so far is not a prefix of any word in the dictionary. If I don't want to have nested loops, what I want instead is a recursive procedure. I'm going to have the same structure as before. I'm going to start off by initializing the results to be the empty set, and then I'm going to have some loops that add elements to that set, and then I'm going to return the results that I have built up. Then I'm going to start the loops in motion by making a call to this recursive routine. What I want you to do is fill in the code here. Let's go back to our list of concepts and say what have we done so far and what's next? We think we did a good job with the dictionary, and we did a good job with our hands here. In terms of legal play, well, we've got words, so we're sort of part way there, but we haven't hooked up the words to the board. Maybe that's the next thing to do--say, let's do a better job of hooking up the letters in the hand with the words in the dictionary and placing them in the right place on the board. I don't want to have to deal with the whole board. Let's just deal with one letter at a time.
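The recursive version might look like this--a sketch in which an inner helper replaces the seven nested loops (the helper name extend_prefix and the parameterized word sets are my choices, not necessarily the lecture's exact code):

```python
def find_words(hand, wordset, prefixset):
    "All dictionary words that can be made from the letters in hand."
    results = set()
    def extend_prefix(w, letters):
        if w in wordset:
            results.add(w)
        if w in prefixset:                 # prune: only extend viable prefixes
            for L in letters:
                # use L once: remove one occurrence from the remaining letters
                extend_prefix(w + L, letters.replace(L, '', 1))
    extend_prefix('', hand)
    return results
```

Each recursive call does exactly what one level of the old nested loops did, but the depth is now limited only by the hand, not hard-coded at seven.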
Let's say there is one letter on the board, and I have my hand and I want to say I can play W-O-R and make up a word. Just to make it a little bit more satisfying than one letter, let's say that there is a set of possible letters that are already on the board, and we can place our words anywhere, but we're not going to worry about placing some letters here and having them run off the board. We're not going to worry about placing letters here and having them run into another letter. We're just going to say what words can I make out of my hand that connect with either a D or an X or an L. So I need a strategy for that. Let's just consider one letter at a time. What I need to find is all the plays that take letters in my hand--H-A-N-D-S-I-E--let's say those are the seven letters in my hand. Take those letters and combine them with a D and find all the words. What can those words consist of? They can have some prefix here, which can be any prefix in our set of prefixes that comes solely from the letters in my hand. Then the letter D that's already there doesn't have to come from my hand. Then some more letters. I'll think of this as a prefix plus a suffix where I make sure that I know that D is already there. Here is word_plays--it takes a hand and a set of letters that are on the board, and it's going to find all possible words that can be made from that hand, connecting to exactly one of the letters on the board. We're going to break it up into a prefix that comes only from the hand, then the letter from the board, and then the remainder of the suffix that comes from the hand. The same structure as we had before--we start off with an empty set of result words. In the end we're going to return that set of result words.
Then we're going to go through all the possible prefixes that come exclusively from the hand, then the possible letters on the board, and add a suffix to the prefix plus the letter on the board from the letters in the hand, except that we can no longer use the letters in the prefix. Find_prefixes is just like find_words except we're collecting things that are in the prefixes rather than things that are in the list of words. Now I want you to write add_suffixes. Given a hand, a prefix that we found before, and a results set that you want to put things into, find me all the words that can be made by adding letters from the hand onto the prefix. We can write some assertions here. Here we have some letters in my hand, seven letters, and some possible letters on the board, and here's a long list of possibilities for plays I could make. We can already see that this would be useful for cheating--I mean, augmenting or studying your word game play. And to make it even more useful, let's write a function that tells us what the longest possible words are. Given the definition of word_plays, write a definition of longest_words. There we go--we just generate the words from word_plays, and then we sort them by length in reverse order so that the longest are first. Next we need a way to score words. Here's my solution. It's pretty straightforward. We just sum the points of each letter for every letter in the word. Now, I want you to write me a function called topn. Again, it takes a hand and a set of the board letters and a number, which defaults to 10, and gives me the n best words--the highest-scoring words, according to the word_score function. Again, pretty straightforward. We get the word plays, and we sort them in reverse order again so that the biggest are first, this time by word score, and then we just take the first n. By doing the subscripting like that, it works when n is too big. It works when n equals None.
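A sketch of the scoring pieces. The point values below are assumed sample values rather than the real Scrabble table, and topn here takes the word collection directly instead of calling word_plays, to keep the block self-contained:

```python
POINTS = {'A': 1, 'C': 3, 'S': 1, 'T': 1}   # assumed sample letter values

def word_score(word):
    "The sum of the individual letter point scores for this word."
    return sum(POINTS[L] for L in word)

def topn(words, n=10):
    "Return the n highest-scoring words, best first."
    return sorted(words, reverse=True, key=word_score)[:n]
```

Note how the slice [:n] is forgiving: it works when n is larger than the list, and sorted(...)[:None] simply returns everything, which is why n=None behaves gracefully.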
Now, just an aside here, as the great American philosopher, Benjamin Parker once said, "With great power comes great responsibility." We have a great power here to go through all the words in the dictionary and come up with all the best plays. Now, I could read in the official Scrabble dictionary and I could apply the board position that you saw in my game with Ken and I could come up with a bunch of good plays. But that wouldn't be fair to my friend Ken, unless we had previously agreed that it was legal and fair to do so. I'm not going to do that. I got to resist that temptation. And throughout your career as an engineer, these types of temptations or these types of possibilities are going to come up. Having strong ethics is part of learning to be a good software engineer. So now, in terms of our pacing, we've achieved milestone #2. We can stop sprinting again. We can relax. You can have a drink. We can lay down. We can congratulate ourselves or do whatever we want to do. Let's go back to our list of concepts, go back to our diagram of where we were and say, "What should be the next step?" Well, what can we do now? We can take a hand and we can take a single letter on the board and we can say, "Yes, I can pick letters out of the hand and maybe do a S-I-D-E." That would be a good play, except if there was an X here, then it would not be a good play. Similarly, if there were letters in the opposite direction, that could be a bad play. But sometimes, there's letters in the opposite direction and it makes a good play. Where here I'm forming two words at once, and the rules are the play I have to make has to be all in one direction and all adjacent to each other so forming one consecutive word. Then if it incidentally forms some other words in the other directions, that's okay. But I can't put some in this direction and then put some others down in that direction. 
I think my next goal will be to place words on a row while worrying about the crosswords in the opposite direction. Now let's be a little bit more precise about what the rules are and what it means to play a word within a row and how that hooks up to the other columns. Now, the rules say that at least one letter that you play has to be adjacent to an existing letter on the board. We'll mark with red asterisks such squares. We call these anchor squares. These are the squares that we can start from. Then we build out in each direction, forming consecutive letters into a single word. Now, the anchor squares do have to be adjacent to an existing letter, but they don't have to be adjacent all within a row. They can be adjacent in either direction. Let's expand the board beyond a single row and let's populate this with some more letters. Imagine that this board goes on in both directions. There's probably an E here or something like that. If we restrict our attention just to this row, notice that we've now introduced a new anchor point. This square is adjacent to an existing letter, and so that also counts as an anchor. Now we want to find a word, which consists of a prefix plus a suffix. We get to define the game. We can say that for every anchor point, the prefix is going to be zero or more letters to the left of the anchor point, not counting the anchor point itself. Then the suffix will be the anchor point and everything to the right. Of course, we have to arrange so that prefix plus suffix together form a word which is in the dictionary. Now here's a cool play that comes from the dictionary. BACKBENCH is a word, and note that if we just have this rule of word equals prefix plus suffix where the suffix has to start with an anchor, then there'd be four possible ways of specifying this one move. We could anchor it here with no suffix. We could anchor it here with these three letters as a suffix. We could anchor it here with these letters as a suffix. 
Or we could anchor it here with all these as a suffix and just H as the prefix. Now, it seems wasteful to generate the same result four times, so we can arbitrarily and without loss of completeness make up a rule which says there's no anchor within a prefix. We couldn't use this as the anchor, because then there'd be anchors within the prefix. Likewise, we couldn't use this one or this one. We can only use this one as the prefix in order to generate this particular word. The anchor will also come from the hand, and the suffix can be a mix of hand and board. Here, this is the anchor. The prefix is empty. The anchor letter comes from the hand. Then there's a mix of letters for the rest of the word. Now, what are the rules for a prefix? Let's summarize. A prefix is zero or more characters, can't cover up an anchor square, and can only cover empty squares. For example, for this anchor square here, the prefix can go backward, but it can't cover this anchor. So the possible lengths for this prefix are zero to two characters. Any prefix can be zero characters, and here there's room for two, but there's not room for three, because then it would cover up an anchor. In that case, all the letters in the prefix come from the hand, but consider this anchor. For this anchor, we're required to take these two letters as part of the prefix, because this abuts--we can't go without them. These two must be part of the prefix, and this one can't be part of the prefix because it's an anchor. If we wanted that, we'd generate it from this anchor rather than from this one. That means the length of a prefix for this anchor has to be exactly two. Similarly, the length of the prefix for this anchor has to be exactly one--it has to include this character, because if we place a letter here, this is adjacent--it's got to be part of the word--and this is an anchor so we can't cover it.
So we see that in a prefix, either the letters come all from the hand or they come all from the board. What I want you to do is, for the remaining anchors here, tell me what the possible lengths are. Either put a single number like this or a range of numbers--number-number. The answers are: for this anchor the prefix has got to be one character--the A. This anchor--we can't cover another anchor, so it's got to be zero. This anchor--we can include this one if we want, but we can't go on to the other anchor, so it's zero to one. Here we've got to include the D but nothing else, so it's 1. Now, there's one more thing about anchors I want to cover, which is how we deal with the words in the other direction. For these five anchors there are no letters in the other direction. So these are completely unconstrained. We say that any letter can go into those spots. But in these two anchors, there are adjacent letters, and it would be okay. We could form a word going in this direction. But we can do that only if we can also form a word going in this direction. Let's say there are no more. This is either the edge of the board or the next row is all blanks. Then we can say, well, what letters can go here? Only the letters that form a word when the first letter is that letter and the second letter is U. In our dictionary, it turns out that that possibility is the set of letters M, N, and X. MU, NU, and XU are all words in our dictionary, believe it or not. The Scrabble dictionaries are notorious for having two- and three-letter words that you've never heard of. Similarly here--what are two-letter words that end in Y? It's the set M, O, A, B. You've probably heard of most of those. When we go to place words on a particular row, we can pre-compute the crosswords and make that be part of the anchor.
What we're going to do is have a process that goes through, finds all the anchor points, and finds all the sets of letters--whether it's any letter for these five anchors, or whether it's a constrained set of anchor letters for these two anchors. Sounds complicated, but we can make it all work. Let me say that once you've got this concept--the concept of the anchor sets and the cross words--then basically we're done. We've done it all. We can handle a complete board no matter how complicated, and we can get all the plays. It's just a matter of implementing this idea and then fleshing it out. We've congratulated ourselves for getting this far. We've still got a ways to go. Now the question is what do we do next? It may seem a little bit daunting, there's so much to do, and when I get that feeling, I remember the book Bird by Bird by Anne Lamott, a very funny book. In it, she relates the story of how, when she was in elementary school, there was a big book report due where she had to write up descriptions of multiple different birds. And she was behind and it was due soon, and she went to her father and complained, "How am I ever gonna get done? I'm behind," and her father just told her, "Bird by bird." "Just go and take the first bird, write up a report on that, and then take the next bird off the list and keep continuing until you're done." Let's go bird-by-bird and finish this up. What do we have left to do? Well, we've got to figure out how to put letters on one particular row while dealing with the crosswords, then we've got to expand from that to all the rows, and then we've got to do the columns as well, and then we've got to worry about the scoring. There were a couple of minor things we put off, like dealing with the blanks. That's a lot to do, so let's go bird-by-bird. The thing I want to do next is say let's just deal with a single row at a time. Let's not worry about the rest of the board. Let's not worry about going in columns.
Just deal with one row, but also have that row handle the cross letters--the cross words. I'm going to need a representation for a row, and I think I'm going to make that be a list. There are many possible choices, but a list is good. I choose a list rather than a tuple, because I want to be able to change it in place. I want to be able to modify the row as we go, as the game evolves. A row is going to be a list of squares. If the square is a letter, we'll just use that letter as the value of the square. If the square is an empty spot which has nothing in it, I think I'll use a dot just to say nothing is there. A trick that you learn once you've done this kind of thing a lot of times is to say: I'm going to be looking at letters, I'm going to be looking at locations in the row, I'm going to be looking at their adjacent letters, going to the right and going to the left. If I'm trying to fill in from this anchor, I'll move to the left to put in the prefix and I'll move to the right to extend the word. It seems to me like I'm always going to have to be making checks saying what's the next character, is it a letter, is it an anchor, what is it? Also, oops, did I get past the end of the board? It seems like I'd have to write twice as much code to check both the case where I go off the board and the case where I don't. One way to avoid that is to make sure you never go off the board. It cuts the amount of code roughly in half. A way to do that is to put in extra squares--to say here are the squares on the board, but let's make another extra square on each side and just fill that in, saying the value of what's in that square is a border--not a real square that you can play in--but if I'm here and I say what's the value of square number i - 1, I get an answer saying it's a border, rather than getting an error when I go i - 1 from position 0. I think I'll use a vertical bar to indicate a border.
I'll have one there, and at the end of my row, I'll have another border. Now I've sort of got everything. I've got borders, letters, empty squares. The only thing left is anchors. I think what I'll do here is introduce a special type for anchors. I could have used something like a tuple or a set of characters, but I want to make sure I know--and I want to have something in my code that says--if the value of row[i] is an instance of anchor, then I want to do something. So we'll make anchor be a class, and I want it to be a class that contains the set of letters. I can do that in a very easy way. I can use a class statement to say I'm going to define a new class. The class is called anchor, and the class is a subclass of the set class. Then I don't need anything else for the definition of the class. All I have to know is that anchors are a type of set, but they're a particular type of set. They're a set of anchor letters. Here's the code for that. I define a class anchor. I have all of my allowable letters. Then I say ANY is an anchor, which allows you to put any letter onto that anchor spot. Now I want to represent this row where here are the borders, here are the empty spots, and here are the particular letters, and this representation--the schematic representation as a string--does not account for the fact that after the A we're going to have two restricted anchors that have to contain these characters. So we'll define them--use the names mnx and moab to be the two anchors that are restricted to have only those letters. Now, in our row, the border square is element number 0. Then the A is element number 1. Then we have these two restricted anchors, two more empty spots, another anchor where anything can go--the B and the E, and so on. There's our whole row, and while I'm at it I might as well define a hand.
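In code, the class definition and the row representation might look like this. The exact contents of the example row are my reconstruction of the lecture's board, so treat the specific squares as an assumption:

```python
class anchor(set):
    "A square where a new word can be anchored; the set holds the allowable letters."

LETTERS = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
ANY = anchor(LETTERS)          # an anchor any letter can fill
mnx = anchor('MNX')            # restricted by the cross word ending in U
moab = anchor('MOAB')          # restricted by the cross word ending in Y

# '|' is a border, '.' an empty square; plain letters are tiles already played.
a_row = ['|', 'A', mnx, moab, '.', '.', ANY, 'B', 'E',
         moab, '.', '.', ANY, 'D', ANY, '.', '|']
a_hand = 'ABCEHKN'             # assumed sample hand
```

The payoff of subclassing set is that an anchor behaves like a set of letters (membership tests, iteration) while still being distinguishable with isinstance(sq, anchor).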
Now my next target, the next bird to cross off the list, is to define a function row_plays, which takes a hand and a row in this format and returns a set of legal plays from the row. Now, rather than just return legal words, I'm using this notion of a play, where a play is a pair of the location within the row and the word that we want to play. You can imagine it's going to take the same general approach that we've used before: start with an empty set, do something to it, and then return the results that we built up. What is it that we want to do? We want to consider each possible allowable prefix, and to that we want to add all the suffixes, keeping the words. Now, prefixes of what? That's the first thing to figure out. What I'm going to do is enumerate the row--enumerate actually just the good bits, the row from the first position to the last position, and that tells me I don't want the borders. I don't want to consider playing on the borders. I just want to consider playing on the interior of the row. Enumerate that starting from position number 1. One would be where the A is. Now I have an index--a number 1, 2, 3--and I have the square, which is going to be A, and then an anchor and then an anchor and so on. Where do I want to consider my plays? We're going to anchor them on an anchor, so I can ask: is the square an instance of an anchor? If it is an anchor, then there are two possibilities. If it's an anchor like this, there's only one allowable prefix: the prefix which is the letters that are already there just to the left of the anchor. We want to consider just that one prefix and then add all the suffixes. If it's an anchor like this one, then there can be many prefixes. We want all possible prefixes that fit into these spots here, consider each one of those, and for each one of those consider adding on the suffixes. What I'm going to do is define a function, legal_prefix, which gives me a description of the legal prefix that can occur at position i within a row.
There are two possibilities. I could combine the possibilities into one, but I'm going to have a tuple of two values returned. I'm going to have legal_prefix return the actual prefix as a string if there is one, like in this case, and return the maximum size otherwise. For this anchor here, this would be legal_prefix of one, two, three, four, five, six--that's legal_prefix when i = 6. The result would be that there are no characters to the left. It'll be the empty string for the first element of the tuple. The maximum size of the prefix that I'm going to allow is two characters. Now, if I asked here--that's index number one, two, three, four, five, six, seven, eight, nine--when i = 9, the result would be that the prefix is BE, and the maximum size is the same as the minimum size. It's the exact size of 2. I define legal_prefix in order to tell me what to do next based on the two types of anchors. Now, I can go back to row_plays. I can call legal_prefix, get my results, and say if there is a prefix, then I want to add to the letters already on the board. Otherwise, I have empty space to the left, and I want to go through all possible prefixes. Here's what we do if there is a prefix already there. Now we can calculate our start position. Remember, a row play is going to return the starting location of the word. We can figure that out. It's the i position of the anchor minus the length of the prefix. In fact, let me go and change this comment here. i is not very descriptive. Let's just call that start. Now we know what the starting location is for the word. When we find any words we can return that. Then we go ahead and add suffixes. With the suffixes, some of the letters are going to come out of the hand. We're adding suffixes to the prefix that's already there on the board, starting in the start location, going through the row, accumulating the results into the result set--and then I needed this one more argument.
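A self-contained sketch of legal_prefix and its helpers, matching the two cases just described (the anchor class is restated here so the block runs on its own):

```python
class anchor(set):
    "An anchor square; the set holds the letters allowed there."

LETTERS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
ANY = anchor(LETTERS)

def is_letter(sq):
    return isinstance(sq, str) and sq in LETTERS

def is_empty(sq):
    "An empty square is '.', '*', or an unfilled anchor."
    return sq == '.' or sq == '*' or isinstance(sq, anchor)

def legal_prefix(i, row):
    """The legal prefix for an anchor at row[i]: either the letters already
    on the board just to the left (fixed prefix), or '' together with the
    number of empty, non-anchor squares available. Returns (prefix, maxsize)."""
    s = i
    while is_letter(row[s - 1]):
        s -= 1
    if s < i:                    # letters on the board form the whole prefix
        return ''.join(row[s:i]), i - s
    while is_empty(row[s - 1]) and not isinstance(row[s - 1], set):
        s -= 1                   # count empty squares, stopping at an anchor
    return '', i - s
```

The second while loop stops at anchors (which are sets) as well as at borders and letters, which is exactly the "a prefix can't cover an anchor" rule.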
I actually made a mistake and left this out the first time, and it didn't work. We'll see in a bit what that's there for. Now if we have empty space to the left of the anchor, we've got to go through all the possible prefixes, but we already wrote that function--find_prefixes. That's good. Looks like we're converging. We're not writing that much new stuff. Now, out of all the possible prefixes for the hand, we only want to look at the ones that are less than or equal to the maximum size. If the prefix is too big, it won't fit into the empty spot. It will run into another word, and we don't want to allow that. We can calculate the start position again. Then we do the same thing. We add suffixes. What do we add them to? Well, the prefix that we just found from the hand. Since the prefix came from the hand, from the remaining letters left in the hand we have to subtract out those prefix letters. Here we didn't have to subtract them out, because the prefix letters were already on the board. We're adding to the prefix, from the start, from the row; results are accumulated; and we have this anchored equals False again. We're almost there. Just two things left to do--add_suffixes and legal_prefix. Add_suffixes we had before, but it's going to be a little bit more complicated now, because we're dealing with the anchors. Legal_prefix is just a matter of looking to the left and seeing how much space is there. Here is legal_prefix. It's pretty easy... Here are the answers of what they should be for each of these positions within a row. Here's what I did. I introduced two global variables--the previous hand and previous results. I am making a cache, like a memoization cache, but it's only for one hand, because we're only dealing with one hand at a time. Then I say, within find_prefixes, if the hand that you were given is equal to the previous hand, then return the previous results.
I'm only going to update the previous hand and the previous results in the case where the prefix is the empty string, and that's how I know I'm at the top-level call--when the prefix is the empty string. For all the recursive calls, the prefix will be something else. I'm only storing away the results when I'm at the top-level call, and I'm updating previous hand and previous results. With that efficiency improvement to find_prefixes, now when I do timedcalls of row_plays for this fairly complex row, it's only about a thousandth of a second. If I had a complete board that was similarly complex--say fifteen rows or so in the board--then it'd still be around one or two hundredths of a second, and that's pretty good performance. Here's my answer--very simple--iterate over the rows and over the squares in each row, printing each one out. A comma at the end of the print statement says put in a space but not a new line. At the end of each row, that's where I put in a new line. Now let's do a little bit of planning. We did row_plays. What does row_plays return? Well, it's a set of plays where each play is an i-word pair, where i is the index into the row where the word starts. We eventually want to get all plays. Before we can get there, I'm going to introduce another function called horizontal_plays, which does row plays across all the possible rows, but only going in the across direction, not in the down direction. That'll take a hand and a board as input. A board is just a list of rows. It'll return a set of plays where a play, like in a row play, is the position and the word, except now the position is not going to be just i; the position is an i-j pair. It's going to be at this column in this row, along with the word. It's a set of tuples that look like that. Let's define horizontal_plays. Well, you know the drill by now--familiar structure. We start out with an empty set of results. We're going to build them up somehow and then return the results.
Now, how are we going to do that? Let's enumerate over all the rows in the board. We just want the good ones--the ones from 1 to -1. We don't want the rows at the top and the bottom, which are off the board, or the border squares. For each good row, I'm going to write a function called set_anchors which takes the row and mutates that row to have all the anchors in it. Remember, before when I called row_plays I passed in all the anchors manually. Here, I'm going to have the program do it for me. Now, for each row, I want to find all the plays within that row and properly add them into results. I want to do something with the row plays of the hand within that row. And I want you to tell me what code should go here. It could be a single line or it could be a loop over the results that come back from row_plays. Figure out what goes here so that it can return the proper results. Here is my answer: I call row_plays on the row, and that gives me a set of results which are of the form i--index into the row--and a word, and I can't just add that into my results set, because that doesn't tell me what row number I'm in. Instead, I want to add the tuple i and j. I've already got the row number j. Add the tuple of the position i, j along with the word into the results. That's all we have to do. Okay. Back here, check off another bird. Just one left. Well, okay, I lied. It's not quite one left. There is also scoring we'll have to do next, but one left for this part. So all_plays, like horizontal_plays, takes a hand and the board. What should it return? Well, it's going to be a set; the position at which we start the word--well, that can be the same as before; an i-j position is perfectly good. Now, we're also going to have some words going across and some words going down. Now, we want our results to be a three-tuple. It's a set of an i-j position, followed by a direction--across or down--followed by a word; a set of those.
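The shape of horizontal_plays might be sketched like this. To keep the sketch self-contained without the whole program, row_plays and set_anchors are passed in as parameters here; in the full code they would be the functions discussed above:

```python
def horizontal_plays(hand, board, row_plays, set_anchors=None):
    "All horizontal plays, as a set of ((i, j), word) pairs."
    results = set()
    for j, row in enumerate(board[1:-1], start=1):   # skip the border rows
        if set_anchors:
            set_anchors(row, j, board)               # mutate row in place
        for (i, word) in row_plays(hand, row):
            results.add(((i, j), word))
    return results
```

The key step is the re-tagging: each (i, word) pair coming back from row_plays gets wrapped as ((i, j), word) so the result remembers which row it came from.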
Now onto the all_plays function--it takes a hand and the board, and it's going to return all plays in both directions on any square, so a play is a position, direction, and a word, where the position is an i-j pair--j the row number, i the column number. Direction is either across or down, and those will just be global variables. We don't have to decide for now how to represent them. I used a trick here--I said all the horizontal plays, the across plays, we get from calling horizontal_plays directly on the hand and the board. The vertical plays--I didn't have to write a separate function. All I have to do is transpose the board--flip the i and the j--call horizontal_plays on that, and that gives me a set of results, but they're the results in the wrong direction. They're j-i pairs rather than i-j pairs. Now your task is to write the code to put that all together--to take these two sets, one of which is in reversed order, so they have to be swapped around. Neither set has a direction associated with it. Assemble them all into the resulting set that should be returned by all_plays. Here's my answer: I took all the i, j, w triples from the horizontal plays and just reassembled them with the i, j, putting in the indication that they're going in the across direction and keeping the same word. Then I do the same thing for the vertical plays. They came out in the j, i order. I reassembled them back in the proper i, j order with an indication that we are going in the down direction. And then I took these two sets and just unioned them together. Now, I need some definition for across and down. I can do it this way. I could have just used strings or any unique values--I could have used the strings 'across' and 'down'--but I'm going to say that across is equal to incrementing one at a time in the i direction and zero in the j direction. Down is incrementing zero in the i direction and one in the j direction.
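A sketch of the direction constants, the transpose trick, and the reassembly step. Here combine_plays is my name for the body of all_plays--in the real function, the two input sets would come from calling horizontal_plays on the board and on its transpose:

```python
ACROSS, DOWN = (1, 0), (0, 1)     # direction as (delta_i, delta_j)

def transpose(matrix):
    "Swap rows and columns, so vertical plays can reuse the horizontal code."
    return list(map(list, zip(*matrix)))

def combine_plays(hplays, vplays):
    """hplays are ((i, j), word) pairs; vplays come back from the transposed
    board in (j, i) order and must be flipped. Tag each with its direction."""
    return (set(((i, j), ACROSS, w) for ((i, j), w) in hplays) |
            set(((i, j), DOWN, w) for ((j, i), w) in vplays))
```

Notice the tuple unpacking in the second generator: the pattern ((j, i), w) is what flips each vertical position back into i-j order.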
Now all that's left is to set up this bonus matrix to say where the double and triple bonuses are. Here's what I've done. I've just drawn a picture of the bonuses and I called it the bonus_template. But I only drew one quadrant, one quarter of the board, because I noticed that they were all symmetric, and so this function bonus_template takes a quadrant in terms of a string, mirrors each row, and then mirrors the set of rows, where the mirror of a sequence is just the sequence plus the reverse of the sequence except for the last element, so there's a middle piece that we reflect around. I made one template for the Scrabble game and one for the Words With Friends game, and then you choose which bonus you want to use, and then I defined these constants for double words, triple words, double letters, and triple letters, and I wasn't quite sure what to use. I know I didn't want to use letters like d and t because I'd get confused with the letters in the hand. I used 2 and 3 for double and triple words, a colon because it has 2 dots for double letters, and a semicolon because it's a little bit bigger than a colon for triple letters. Even though we're so close to the end, it's still good hygiene to write tests, so I took some time to write some, ran them, and all the tests pass. You can see here's a tiny little bonus template--a quarter of an array, which looks like that. When you apply bonus_template to it, you get this nice symmetric array. Now what I'd like you to do is modify the show(board) function so that it prints out these bonus entries, so that if there's no letter over this 3 in the corner it should print the 3 rather than just printing a dot. Here's my solution--I capture the j and i coordinates by using the enumerate function. I print the square if it's a letter or if it's the border that's off the board. Otherwise, I just print the bonus.
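The mirroring described here is compact in Python. The sketch below is a reconstruction from the description, not the course's verbatim code; the quadrant is given as a whitespace-separated string of rows.

```python
def mirror(sequence):
    "Reflect a sequence around its last element: 'abc' -> 'abcba'."
    return sequence + sequence[-2::-1]

def bonus_template(quadrant):
    "Expand one quarter of the board into the full, symmetric bonus grid."
    return mirror(list(map(mirror, quadrant.split())))
```

Mirroring a tiny two-row quadrant 'ab cd' first widens each row ('aba', 'cdc') and then reflects the list of rows, giving the three-row grid ['aba', 'cdc', 'aba'].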
Now, one thing I like to be able to do is, when I get a play, I want to be able to actually modify the board to indicate what that play is. I want you to write a function to do that for me. It takes a play, but remember a play is a tuple of a score, a start position indicated by i and j, a direction indicated by delta i and delta j, and the actual word, a string. Let's make this look a little bit better by making it be a tuple. Write the code that will modify the board, and in addition to modifying it, let's also return the board. Here's my answer--I just enumerated the letters in the word and the position into the word, updated the board, marching down from the start position j, i, and multiplying n, the position into the word, by the deltas specified by the direction. Now, very exciting. We're at the culmination. One more function to write. That's the best play. Given a hand and a board, return the highest scoring play. If there are no plays at all, just return None. Here's my answer. We've got all the pieces. We call all plays and get back a collection of plays. We sort them and take the last one--that'll be the highest. We don't even have to specify what to sort by, because the score was the first element of the play, so we're automatically sorting by that, and then we return the best one if there are any plays; otherwise, no play. I specified no play here in case I change my mind, but I can say no play equals None. Now, I could write something that plays a complete game, but instead I'm just going to have a simple function show_best, which takes a hand and a board, displays the current board, and then displays the best play. When I type it into the interpreter, this is what I get. It found the backbench that we had sort of laid out there, scored 64 points, and out of all the possible plays, it found the optimal one. So, we did it. We made it all the way through. Congratulations.
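The final two functions can be sketched like this. The play format follows the lecture: (score, (i, j), (delta_i, delta_j), word). all_plays is stubbed with fixed scored plays, since scoring is handled elsewhere in the unit; the stub's plays and scores are made up for illustration.

```python
NOPLAY = None

def all_plays(hand, board):
    # stub: two fixed plays, so sorting by the first element (the score) shows up
    return {(12, (1, 1), (1, 0), 'BE'), (30, (1, 1), (0, 1), 'BEST')}

def make_play(play, board):
    "Write the played word onto the board; return the mutated board."
    (score, (i, j), (di, dj), word) = play
    for (n, L) in enumerate(word):
        board[j + n * dj][i + n * di] = L
    return board

def best_play(hand, board):
    "Return the highest-scoring play, or NOPLAY if there are none."
    plays = all_plays(hand, board)
    return sorted(plays)[-1] if plays else NOPLAY
```

Because the score is the first element of each play tuple, sorted() orders the plays by score with no key argument, exactly as noted above.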
https://www.udacity.com/wiki/cs212/unit-6
I'm getting an error trying to use my Windows 8 Class Library. Here's the actual error: Error 3 error C2871: 'RTLib' : a namespace with this name does not exist C:\Development\Main\PDF viewer\C++\MainPage.xaml.cpp 28 1 PdfShowcase.CPP (Windows 8.1) Here's what I did: then I get the error above. Should I have used a Windows Runtime Library instead of a Class Library? Is that the problem? Thank you. Gene Windows Store Developer Solutions, follow us on Twitter: @WSDevSol || Want more solutions? See our blog
https://social.msdn.microsoft.com/Forums/en-US/cc27114e-dbf8-4894-8ccb-99e7c1b425c7/namespace-does-not-exist-error?forum=winappswithnativecode
Creating applications for Windows Phone is a delightful experience, although the current market status might not ratify such a statement. It's an open opportunity to leverage the knowledge of Microsoft technologies you might have, or an easier learning curve for those who are new to programming in general. With new development platforms, the easiest way to learn about them and start building a solid foundation is finding clear examples and exercises that walk you through them and show you the path to follow. This post will introduce the basic concepts of the platforms that Windows Phone has to offer, and in separate posts an implementation exercise will be presented for each of them. At the end you will have an idea which one might seem more compatible with what you want to achieve. Among all the features brought to life in Windows Phone 8.1, one that benefits developers is the availability of three development platforms: you can choose from Silverlight, Windows Runtime XAML, and JavaScript to create your applications. Silverlight The initial development platform that Windows Phone introduced was based on Silverlight, a technology that came out of Windows Presentation Foundation [WPF] as an alternative for Web applications, providing a plug-in execution experience in the browser similar to local applications running in the system. As mobile development trends grew, the plug-in approach in browsers started to be questioned in favor of HTML5. This technology platform found its path to evolve into Windows Phone 7 as a new way to create fluid application experiences, distancing itself from the previous releases of Windows Mobile. Creating a Silverlight application consists of describing the user interface elements in a definition language named XAML and defining the classes and methods that inject the functionality in code-behind files. XAML uses tags the same way as XML or HTML, making it easier and more familiar for Web developers to make the transition.
A tag defines an instance of an object that might have a visual representation or not. Each tag can be accompanied by a list of attributes that define the instance's properties. These attributes can be expressed in-line with the tag or in separate blocks contained by that instance. The code-behind files can be created using C# or Visual Basic in most cases. They can also be combined with Visual C++ to create what is called a hybrid application interacting with Direct3D and other gaming frameworks which require performance advantages only possible with native code. The code-behind classes inherit from a base class named PhoneApplicationPage that lives in the Microsoft.Phone.Controls namespace. This base class enables fundamental services such as navigation, page orientation, data binding, etc. in every page creating the views of the phone application. But plain classes performing specialized functionality can also exist independently from the XAML UI. As with Silverlight in the browser, many programming techniques can be applied when creating Phone applications, Model-View-ViewModel (MVVM) being one of them. Although you need to be careful not to add too many processing layers into the application, because that could hinder its reliability when running on a low-cost device. The same way as in the browser scenario, a Silverlight application for Phone is built into a .xap file, which is a compressed file containing the application's main assembly (.dll) as well as its assets, metadata and third-party assemblies referenced by it. When deployed, the application runs in-process, hosted in a sandboxed environment offered by a process called TaskHost.exe. In Windows Phone 8.1 a new version of Silverlight was introduced, with wider access to the Windows Runtime API. Applications created on Silverlight 8.1 run in a different execution context or host process called AgHost.exe, allowing access to an extended set of API classes and services from the system.
Windows Runtime XAML Windows Runtime XAML is the platform introduced in Windows 8 to create Modern Apps. With the active efforts of Microsoft to converge the development platforms on Windows, this platform arrived on Windows Phone 8.1, allowing the creation of Universal Applications that can run on Windows and Windows Phone, sharing a significant amount of code (around 90%) with some specific changes to adapt to the different form factors. As the name implies, this platform is also based on XAML, and if not looked at closely it is hard to distinguish its code from Silverlight. One particular difference is the base class used for the code-behind pages; for Windows Runtime it is Page, which lives in the Windows.UI.Xaml.Controls namespace. In general most of the Phone Silverlight classes live in the namespace Microsoft.Phone, while in Windows Runtime they are in the Windows.* namespaces. This reflects the code convergence that exists between the two operating systems. The convergence of the platforms is not yet fully complete. At this point there are things that can only be accomplished in Silverlight and can't be achieved with Windows Runtime XAML, and vice-versa. But towards the future the expansion of the Windows Runtime on the Phone is going to grow. For new projects the safest bet is to develop them using WinRT XAML. But if you have an existing application on Silverlight you need to balance things up to determine the impact of transitioning it. On the execution side the differences are quite significant. Applications created on WinRT XAML are packaged in .appx files. As with Silverlight this package is also a compressed file format that contains the main application assembly (.exe) plus its assets, metadata and third-party referenced assemblies. The application assembly is an out-of-process server that lives on its own but still runs sandboxed, protecting the stability of the operating system from malicious code or application errors.
As with Silverlight, the languages you can use to create applications on this platform are C# and Visual Basic. But now you can use Visual C++ on its own to code native applications combined with XAML. This provides a tremendous advantage for game developers and advanced applications that rely on performance elements like photo and audio manipulation, where C++ is a better fit. JavaScript Also known as WWA (Windows Web Application) or WinJS. Creating applications for Windows Runtime with JavaScript is a lot like creating a web site. The application is composed of html, css, and js files. The WinRT JS supports most of the HTML5 constructs and CSS level 3 features. Running in the context of the Windows Runtime, some additional features are gained, for example: better touch support, more control over the layout of the UI, access to OS services and networking features, extra controls, etc. As with a web page that is executed or interpreted in a browser, WWA apps run hosted by a process named WWAHost.exe, which brings some restrictions in terms of functionality, such as not being able to open pop-ups or message boxes with the alert function, being prevented from resizing windows, and other security restrictions that block code injection and stop applications from executing malicious code. Think of these particular changes as what happens when you execute the same web site in a different browser: each browser has its own level of supported features. WWA apps are packaged in an .appx file the same way as Windows Runtime XAML apps, although the contents are completely different; in this case the package contains the same source files plus some additional configuration files. For this reason the execution context prevents code from being injected from external sources into the application. In future separate posts an example of each of these platforms will be presented to complete the introduction and get you initiated into Windows Phone development.
Undoubtedly, application development on multiple platforms is a convenience for the developer, because the knowledge one already possesses can be used to create applications with minimal effort. As expressed in the beginning, the learning curve is uncomplicated. Excellent article.
https://blogs.msdn.microsoft.com/juanmejia/2014/07/28/hello-windows-phone-developer-platforms/
12 January 2011 10:14 [Source: ICIS news] LONDON (ICIS)--Praxair Rus has signed a contract to supply oxygen, nitrogen and argon to Russian steel company NTMK, the company said on Wednesday. Praxair's Russian subsidiary will build new air separation plants and will supply more than 3,000 tonnes/day of industrial gases to NTMK at its site. The new plants were scheduled to start up in late 2013 and would replace older air separation plants. They would produce liquid products for the local market. No financial details of the deal were disclosed. NTMK's parent company Evraz Group is a large vertically-integrated steel, mining and vanadium business with operations in Russia, Ukraine, Europe, US, Canada and South Africa.
http://www.icis.com/Articles/2011/01/12/9425045/praxair-to-supply-industrial-gases-to-russian-steel.html
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of Open status. Section: 19.9.2 [template.bitset], 28.7.8 [quoted.manip] Status: Open Submitter: Zhihao Yuan Opened: 2013-12-02 Last modified: 2016-02-10 Priority: 3 View all other issues in [template.bitset]. View all issues with Open status. Discussion: Example: char16_t('1') != u'1' is possible. The numeric value of char16_t is defined to be the Unicode code point, which is the same as the ASCII value and UTF-8 for 7-bit chars. However, char is not guaranteed to have an encoding which is compatible with ASCII. For example, '1' in EBCDIC is 241. I found three places in the standard casting narrow char literals: bitset::bitset, bitset::to_string and quoted. PJ confirmed this issue and says he has a solution used in their <filesystem> implementation, and he may want to propose it to the standard. The solution in my mind, for now, is to make those default arguments magical, where the "magic" can be implemented with a C11 _Generic selection (works in clang):

#define _G(T, literal) _Generic(T{}, \
char: literal, \
wchar_t: L ## literal, \
char16_t: u ## literal, \
char32_t: U ## literal)

_G(char16_t, '1') == u'1'

[Lenexa 2015-05-05: Move to Open] Ask for complete PR (need quoted, to_string, et al.) Will then take it up again. Expectation is that this is the correct way to fix this. Proposed resolution: This wording is relative to N3797. [Drafting note: This is a sample wording fixing only one case; I'm just too lazy to copy-paste it before we discussed whether the solution is worthwhile and sufficient (for example, should the other `charT`s like `unsigned char` just not compile without supplying those arguments? I hope so).
— end drafting note]

Modify 19.9.2 [template.bitset] p1, class template bitset synopsis, as indicated:

namespace std { template <size_t N> class bitset { public: […]')); […] }; […] }

Modify 19.9.2.1 [bitset.cons] as indicated: ')); -3- Requires: pos <= str.size(). […] -3- Requires: pos <= str.size(). […]
https://cplusplus.github.io/LWG/issue2348
If you are on a Mac or Linux platform, you already have Python installed. If you are on Windows, you need to install a distribution, such as those found at, if you have not already done so. For more information, see Install Supported Python Implementation. MATLAB has equivalents for much of the Python standard library, but not everything. For example, textwrap is a module for formatting blocks of text with carriage returns and other conveniences. MATLAB also provides a textwrap function, but it only wraps text to fit inside a UI control. Create a paragraph of text to play with. T = '...'; Call the textwrap.wrap function by typing the characters py. in front of the function name. Do not type import textwrap. wrapped = py.textwrap.wrap(T); whos wrapped Name Size Bytes Class Attributes wrapped 1x7 8 py.list wrapped is a Python list, which is a list of Python strings. MATLAB shows this type as py.list. Convert the py.list to a cell array of Python strings. wrapped = cell(wrapped); whos wrapped Name Size Bytes Class Attributes wrapped 1x7 840 cell Although wrapped is a MATLAB cell array, each cell element is a Python string. wrapped{1} ans = Python str with no properties. MATLAB(R) is a high-level language and interactive environment for Convert the Python strings to MATLAB strings using the char function. wrapped = cellfun(@char, wrapped, 'UniformOutput', false); wrapped{1} ans = 'MATLAB(R) is a high-level language and interactive environment for' Now each cell element is a MATLAB string. Customize the output of the paragraph using keyword arguments. The previous code uses the wrap convenience function, but the module provides many more options through the py.textwrap.TextWrapper functionality. To use the options, call py.textwrap.TextWrapper with keyword arguments described at. Create keyword arguments using the MATLAB pyargs function with a comma-separated list of name/value pairs. width formats the text to be 30 characters wide.
The initial_indent and subsequent_indent keywords begin each line with the comment character, %, used by MATLAB. tw = py.textwrap.TextWrapper(pyargs(... 'initial_indent', '% ', ... 'subsequent_indent', '% ', ... 'width', int32(30))); wrapped = wrap(tw,T); Convert the result to a MATLAB cell array of strings and display it. wrapped = cellfun(@char, cell(wrapped), 'UniformOutput', false); fprintf('%s\n', wrapped{:})
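For comparison, here is the same TextWrapper call written in plain Python. The paragraph string below is a short stand-in for the elided MATLAB text; initial_indent, subsequent_indent, and width are the real textwrap keyword arguments.

```python
import textwrap

# short stand-in for the paragraph used in the MATLAB example
T = ('MATLAB(R) is a high-level language and interactive environment for '
     'numerical computation, visualization, and programming.')

# same options as the pyargs call above: prefix every line with the
# MATLAB comment marker and wrap to 30 columns
tw = textwrap.TextWrapper(initial_indent='% ',
                          subsequent_indent='% ',
                          width=30)
for line in tw.wrap(T):
    print(line)
```

Each output line starts with the MATLAB comment marker '% ' and is at most 30 characters wide, matching what the MATLAB-side call produces.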
http://www.mathworks.com/help/matlab/examples/call-python-from-matlab.html?s_tid=gn_loc_drop&requestedDomain=www.mathworks.com&nocookie=true
Jan 24, 2013 11:25 PM I'm doing a project where I need to override most of the events on several controls. I'm creating a class library that I want to drop into projects, change the base page and be done. In this case, I want to edit the onClick event for every button without changing the type. Because this is SharePoint web part development, I am creating the buttons dynamically: Button myButton = new Button(); and would like to keep that instead of having to change all references to a button as: CustomButton myButton = new CustomButton(); I think what I am doing on the button itself is (the code is at work) this: public class CustomButton : Button { public void override onClick { etc } } The syntax is probably off but it's close. My thanks for looking at this issue!
Contributor 5399 Points Jan 25, 2013 01:18 AM Hi, Click is an event, not an overridable method that you can override by inheriting from it. So, what you can do for this is add a Click event handler to the button and put all the code that you wish to override in that handler. E.g. button.Click += new EventHandler(OnClick); For detail, you can refer to Event Handling in C# Jan 25, 2013 07:18 AM OK, I get that, but that will only work on a button-by-button basis. The solution I have come up with, and I'm not sure if this is bad practice... is to create my own control that inherits Button, then create another control that inherits my custom button. Then I strip out "using System.Web.UI.WebControls;" from my page so that the 2nd control I created (which is called Button) is the only one it can reference. So: myButton inherits Button, Button inherits myButton. I can still do Button mb = new Button(); except that now it has my custom code, and I can drop it in a project, erase the using System.Web.UI.WebControls, add using MyCustomButton. I tested just now with this and it doesn't error out. Tomorrow at work I will test it out in a real application on dev to see what happens.
http://forums.asp.net/p/1877728/5281450.aspx/1?Re+Overriding+control+events
After being separated for 40 years, the Indonesian children adopted by the Dutch successfully met their mother in Pringsewu, Lampung ... 😭😭 After 40 years apart, Indonesian children adopted by Dutch citizens meet biological mother - BBC News Indonesia BBC Indonesia 04 May 2018 Andre Kuik could not help crying when he first met his mother after 40 years; he was adopted by a Dutch citizen at the age of five months and grew up in the Land of the Windmills. The fatigue of traveling from the Netherlands to Pringsewu, Lampung, disappeared immediately when he met his mother, brother and sister for the first time. BBC Indonesia followed Andre's trip from Holland to Lampung. For Andre Kuik and his partner, Marjolein Wissink, the trip to Lampung in mid-April was their third. But unlike before, this time he would meet his biological mother, Kartini (65), and his siblings. His feelings were uncertain. After arriving in Jakarta, following a flight of about 15 hours from the Netherlands, Andre could not sleep that night. The next day, he and Marjolein rushed to Lampung on a morning flight. "Very happy, nervous, and I really feel they are very close," said Andre when he arrived in Pringsewu, a few kilometers from his mother's house. Anxiety was visible on Andre's face, and his eyes searched every corner through the car window as we got closer to his mother's house. From behind the car window, dozens of villagers could be seen crowding around, curious about the arrival of the 'lost child'. Andre quickened his pace when he saw the figure of a small, hooded woman standing in front of the house welcoming him. The two hugged each other tightly and cried; the whole family and his mother's neighbors surrounded them and wept. "I feel this is not real," said Andre. "Very happy, the missing child iso meet meneh (can meet again), iso behind meneh (can come back again), lanang can return (my son can return)," said Kartini in Indonesian and Javanese.
Separated from the age of four days Kartini only had time to carry and nurse Andre from his birth until the age of four days in February 1978. Andre's father, Theo Kohler, who is thought to have had mixed Javanese and European blood, urged Kartini to leave her third son at Panti Secanti hospital, Gisting, Lampung. Kartini later returned to the hospital with her two children Wely and Untung, but could not see her son. "They said I could not see him. When I got home I talked to my husband--how could a mother not be angry at being unable to meet her son--but my husband stayed silent," said Kartini. After that she never heard news of her baby, who had not even been given a name. "I wanted to search, but where? I fell sick thinking about my missing child," said Kartini. She kept asking her husband about Andre's whereabouts, but never got an answer. When she was pregnant with their fourth child, Theo left Kartini and has not been heard from since. At the age of a little over four months, Andre was adopted by Dutch citizens Jan Kuik and Mieke Kuik. According to the adoption documents and notarial deeds, Andre's adoptive parents got their adopted children from the Pangkuan si Cilik Foundation in Jakarta, led by Lies Darmadji, on June 23, 1976. It is unclear how Andre could have ended up at the Foundation as a baby. From Jakarta, the Kuiks brought Andre to Den Ham in the Netherlands. There Andre was raised with a foster brother and sister from Thailand and an adoptive sister from Indonesia. "At home we talked openly about adoption; my parents always said, if you want to go back to your homeland, we will support you," explained Andre. But as a child, Andre admitted, he was never too concerned about his status as an adopted child. "I was always happy and did not mind about the adoption, but I was curious about where I came from, whose face mine resembles--my father's or my mother's--and whether I have brothers and sisters," Andre said. Today Andre has learned that he has two older brothers, Wely and Untung, and a younger sister, Dewi Agustina. One of his older siblings, Untung, had died as a child because of illness.
"His face resembles his father's," Kartini said as she stared into the face of her third child. A smile spread across her face. Andre admitted he was relieved to find out that Kartini had never intended to give him up and had breastfed him for four days. "I know she had no intention of handing me over," Andre said. During his one-week visit, Andre wanted to get to know his family better, through their food, habits and work, among other things going to the fields and watching bricks being made, the daily work of his brother and sister. "I will learn Indonesian, so I can communicate directly when I return here next year," said Andre. A search without success In 2013, Andre and Marjolein visited Indonesia and he made his own way to Lampung. That first visit to his home country left a deep impression. "I felt I was in my own community--my skin color is the same, the friendliness--and it touched me deeply," said Andre. A year later, Andre and Marjolein tried to find his parents through the sisters at Panti Secanti Hospital, where he was born. Although he met someone who had known his father, he could not find his family. "A sister at the clinic where I was born volunteered to join in; there happened to be an acquaintance of my parents in Gisting, Lampung, who could tell a little about my parents," explained Andre. But meeting his father's acquaintance from his youth did not give him any clue to finding his parents. "In addition, we got in touch with some other people to help look, but because we did not get a clear lead, we stopped looking," Marjolein said. Even so, Andre still kept the desire to meet his biological parents, especially after the birth of his son, who is now 1.5 years old. The search for children adopted by the Dutch At the end of 2017, Andre heard news of colleagues in the Netherlands who had managed to meet their biological parents in Indonesia.
That news prompted Andre to resume the search, this time with the help of the Mijn Roots Foundation. "I am 40 years old and I think people here do not live long; if I do not find them now, then when?" explained Andre. Armed with his adoptive parents' adoption documents, the search for his biological family began. "Even if the documents are not so clear, as long as we can get information from people who were living with the parents at the time, we feel confident of finding them," explains Eko Murwantoro, of the biological-parents search team at the Mijn Roots Foundation. To make sure Kartini was Andre's mother, the Mijn Roots Foundation did a DNA test, and the results were positive. Andre is one of 24 children adopted by Dutch citizens who have managed to get back to their families through the help of Yayasan Mijn Roots. "Some were too late to find their parents but managed to meet a brother or sister; there are still many who have not succeeded," said Eko. The Mijn Roots Foundation was founded by Christine Verhaagen and Ana van Keulen three years ago to help adopted children find their biological parents. Several years ago Lucy Hommels joined the foundation. Photo & Video Source: BBC Indonesia
https://steemit.com/esteem/@maulid/after-being-separated-for-40-years-the-indonesian-children-adopted-by-the-dutch-successfully-met-their-mother-in-pringsewu-423a4add04725
I've seen a hell of a lot of people wondering why Theresa May's response wasn't harsher. In truth, she was as harsh as she could afford to be. Before condemning Theresa May, remember that she has worked to remove British citizenship from ISIS fighters who went to Syria, and it was the #Labour MPs who opposed this, most notably Jeremy Corbyn, Diane Abbott and John McDonnell. The Manchester attack wouldn't have happened if the attacker hadn't been allowed back into the UK, courtesy of Labour. You want to know who killed your children? They did. In the aughts, Tony Blair allowed in 2.2 million migrants. Less than a million of them were migrants from the Eastern European countries newly welcomed into the EU. The rest were from the third world, most notably Pakistanis and Africans. What is the difference, you ask? Eastern Europeans work. They are more likely to have at least trade skills compared to the local population, due to the good communist education system, but also significantly more likely to be in work. The average share of adults in employment in the general population is 34%. The share of Eastern European adult immigrants likely to be in work in the UK is over 60%. When was the last time you saw a Romanian trying to blow up an innocent group of people? We might want to--I had the urge at times myself when seeing the misery and corruption of the West--but we don't. Neither do the Poles, Serbs, Hungarians, Czechs and Slovakians. Why do I say Eastern European specifically? Because a lot of the alleged Western European migrants are in fact non-European migrants who, after obtaining European passports from another country, come to the UK for the high benefits. However, we are not the issue here. The issue is the over 1.2 million non-European migrants Tony Blair brought into the UK.
The kids you see nowadays stabbing people uncontrollably, driving into passers-by, attacking police, going off to ISIS? They are second-generation migrants. They might be born in Britain but they aren't British--they are raised in Muslim enclaves almost totally separated from the British mainstream, and you have Tony Blair to thank for that. This is a tactic successfully employed by the European Left: import third-world welfare migrants, get them settled in, and tax the locals to pay for their benefits. As they are unlikely to ever try to advance, you have a captive welfare voting bloc that will keep you in power. As of now there are some 6 million Muslims in Britain. We know their attitudes thanks to Pew Research, and their values aren't ours. As less than 20% of them work full time, the rest of us are paying them to sit at home and make more children, who will grow up to hate and outvote us. Do you think Theresa May doesn't want to tell the world "Islam is the problem"? She does. She came as close to it as she could, but before an election where there are 2-3 million Muslim votes to sway? She can't afford it. Look at the woman's actions: she got through the measure to strip British citizenship from Islamist terrorists. She spoke at the UN about ending the refugee influx into Britain. She passed and implemented previous Tory laws that will stop Muslim immigrants living on welfare from importing wives from abroad. She pushed through David Cameron's brilliant scheme that limits to two the number of children that benefits will be paid for--effectively ending "pay to fuck". Sadly this doesn't affect children already born. These are things Theresa May can do LEGALLY. What she is doing is inching towards getting out from under the ECHR--and that can't be done until the laws have been reviewed and put in place to allow for the removal of suspects and convicted terrorists. Behind the scenes, the police are armed and Stop and Search is keeping you safe.
But the more draconian laws needed would push Britain to being a police state, and the British public, not the brightest, would disagree. Close the mosques? Easier said than done, but you have umpteen courts who'd stop you. Stop inbound immigration? The best way of doing that is making Britain inhospitable to migrants, which is what the benefit changes are aiming to do.

Because islamization is on its way, my friends. It's been allowed in and fostered by Labour. And now we have to step carefully, because they're already here and no government will go to the steps that cause riots. Remember Children Of Men? A lot of people got very emotional at that movie's treatment of refugees. What nobody asked was how many terrorist attacks it took for the Brits to be willing to go there.

The problem is Islam. There is no denying that. But let's be very honest: the problem is the British left, especially Labour, who let them in, who allowed them to set up a separate law system to the point Muslims in Europe think they have the right to kill people for eating on the street during Ramadan - a Shariah Law offense, as Gates of Vienna pointed out. The problem is the Left - because they need the welfare voters to outvote the locals. They don't care about the working class concerns - they have had 50 years to brainwash people, so much that parents whose children have been murdered by terrorists are now saying they forgave said terrorists. How insane, how animalized do you have to be to not go against the person who killed your child? Animals are better than us - animals even protect the young of other species. The left has brainwashed humans into letting their children die and be raped for virtue points. Rotherham? The Labour council stopped the police from investigating, and they are doing it again in nearby Keighley. No prime minister would implement a measure that brings millions of Muslims into the streets together with their leftie friends.
So they have to work stealthily and hope for the best. Making Britain inhospitable to welfare migrants. Monitoring the internet - this is a direct threat to the big social media corporations that foster ISIS propaganda, more than it is to your porn. But you won't see it. The government is cornered. WE are conquered. Rise or be killed, your choice. But don't blame a government for refusing to start riots.

If you enjoy our work, please share it on your medium of choice. While we are a free site and make no money from traffic, more visitors mean a larger number of people who get to see an alternative view. Thank you
http://politicalpragmatism.com/viewpost.php?id=160
CC-MAIN-2020-24
refinedweb
1,163
72.56
10 May 2012 14:17 [Source: ICIS news]

LONDON (ICIS)--Germany's chemical production rose by 1.5% in the first quarter of 2012 from the 2011 fourth quarter, industry association VCI said on Thursday.

However, Frankfurt-based Verband der Chemischen Industrie (VCI) said first-quarter chemical production was 4% below the 2011 first quarter, and the expansion in production will likely only be moderate in coming months. VCI left its previous forecast of zero growth unchanged.

"Order books in the industry are filling up again, and economic indicators are encouraging," said Klaus Engel, the president of VCI and CEO of Germany-based specialty chemicals major Evonik. However, Engel warned not to attach too much importance to the latest numbers, as the eurozone sovereign debt crisis still dampens demand in the EU, which is by far the most important export market for Germany's chemical producers.

The sequential growth in the first quarter from the 2011 fourth quarter included all major segments of the industry, with the exception of pharmaceuticals, in which production was down by 2.2%. Prices rose by 0.6% compared with the 2011 fourth quarter, reflecting higher raw material costs. Compared with the 2011 first quarter, prices were up by 3.0% year on year.

Sales rose by 3.5% from the fourth quarter to €43.1bn ($56.0bn), with domestic sales up by 5.0% and export sales up by 2.5%. VCI noted supportive demand from emerging markets in Asia. Compared with the 2011 first quarter, chemical industry sales were down by 1.5% year on year. Chemical industry plant capacity utilisation averaged 84.1% during the quarter, compared with 81.7% in the 2011 fourth quarter.
http://www.icis.com/Articles/2012/05/10/9558543/germany-q1-2012-chemical-production-rises-1.5-from-q4-2011-vci.html
Lab 6: Nonlocal & Mutability
Due at 11:59pm on Friday, 7/19.

- Complete questions 1 and 2 and submit through Ok.
- The remaining questions are optional. It is recommended that you complete these problems in your own time.

Topics
Consult this section if you need a refresher on the material for this lab. It's okay to skip directly to the questions and refer back here should you get stuck.

Nonlocal
We say that a variable defined in a frame is local to that frame. A variable is nonlocal to a frame if it is defined in the environment that the frame belongs to but not the frame itself, i.e. in its parent or ancestor frame. So far, we know that we can access variables in parent frames:

def make_adder(x):
    """Returns a one-argument function that returns
    the result of adding x and its argument."""
    def adder(y):
        return x + y
    return adder

Here, when we call make_adder, we create a function adder that is able to look up the name x in make_adder's frame and use its value. However, we haven't been able to modify variables in parent frames. Consider the following function:

def make_withdraw(balance):
    """Returns a function which can withdraw
    some amount from balance.

    >>> withdraw = make_withdraw(50)
    >>> withdraw(25)
    25
    >>> withdraw(25)
    0
    """
    def withdraw(amount):
        if amount > balance:
            return "Insufficient funds"
        balance = balance - amount
        return balance
    return withdraw

The inner function withdraw attempts to update the variable balance in its parent frame. Running this function's doctests, we find that it causes the following error:

UnboundLocalError: local variable 'balance' referenced before assignment

Why does this happen? When we execute an assignment statement, remember that we are either creating a new binding in our current frame or we are updating an old one in the current frame. For example, the line balance = ... in withdraw is creating the local variable balance inside withdraw's frame.
This assignment statement tells Python to expect a variable called balance inside withdraw's frame, so Python will not look in parent frames for this variable. However, notice that we tried to compute balance - amount before the local variable was created! That's why we get the UnboundLocalError.

To avoid this problem, we introduce the nonlocal keyword. It allows us to update a variable in a parent frame! Some important things to keep in mind when using nonlocal:

- nonlocal cannot be used with global variables (names defined in the global frame).
- If no nonlocal variable is found with the given name, a SyntaxError is raised.
- A name that is already local to a frame cannot be declared as nonlocal.

Consider this improved example:

def make_withdraw(balance):
    """Returns a function which can withdraw
    some amount from balance.

    >>> withdraw = make_withdraw(50)
    >>> withdraw(25)
    25
    >>> withdraw(25)
    0
    """
    def withdraw(amount):
        nonlocal balance
        if amount > balance:
            return "Insufficient funds"
        balance = balance - amount
        return balance
    return withdraw

The line nonlocal balance tells Python that balance will not be local to this frame, so it will look for it in parent frames. Now we can update balance without running into problems.

Required Questions

Nonlocal Codewriting
For the following question, write your code in lab06.py.

Q1: Make Adder Increasing
Write a function which takes in an integer n and returns a one-argument function. This function should take in some value x and return n + x the first time it is called, similar to make_adder. The second time it is called, however, it should return n + x + 1, then n + x + 2 the third time, and so on.
def make_adder_inc(n):
    """
    >>> adder1 = make_adder_inc(5)
    >>> adder2 = make_adder_inc(6)
    >>> adder1(2)
    7
    >>> adder1(2) # 5 + 2 + 1
    8
    >>> adder1(10) # 5 + 10 + 2
    17
    >>> [adder1(x) for x in [1, 2, 3]]
    [9, 11, 13]
    >>> adder2(5)
    11
    """
    "*** YOUR CODE HERE ***"
    def adder(x):
        nonlocal n
        value = n + x
        n = n + 1
        return value
    return adder

Use Ok to test your code: python3 ok -q make_adder_inc

Q2: Map
Write a function that maps fn onto a list using mutation.

def map(fn, lst):
    """Maps fn onto lst using mutation.

    >>> original_list = [5, -1, 2, 0]
    >>> map(lambda x: x * x, original_list)
    >>> original_list
    [25, 1, 4, 0]
    """
    "*** YOUR CODE HERE ***"
    # Iterative solution
    for i in range(len(lst)):
        lst[i] = fn(lst[i])

# Recursive solution
def map(fn, lst):
    """Maps fn onto lst using mutation.

    >>> original_list = [5, -1, 2, 0]
    >>> map(lambda x: x * x, original_list)
    >>> original_list
    [25, 1, 4, 0]
    """
    if lst: # True when lst != []
        temp = lst.pop(0)
        map(fn, lst)
        lst.insert(0, fn(temp))

Use Ok to test your code: python3 ok -q map

Optional Questions
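Outside of Ok, the behavior of nonlocal closures can be sanity-checked directly in a Python interpreter. The sketch below re-implements the lab's make_withdraw (the corrected version with the nonlocal declaration) and additionally shows that each call to make_withdraw produces a closure with its own independent balance:

```python
def make_withdraw(balance):
    """Return a withdraw function that draws down a nonlocal balance."""
    def withdraw(amount):
        nonlocal balance  # rebind balance in make_withdraw's frame, not withdraw's
        if amount > balance:
            return "Insufficient funds"
        balance = balance - amount
        return balance
    return withdraw

w = make_withdraw(50)
print(w(25))   # 25
print(w(30))   # Insufficient funds -- the remaining balance of 25 is untouched
print(w(25))   # 0

w2 = make_withdraw(10)  # a second closure with its own independent balance
print(w2(5))   # 5
```

Note that the failed withdrawal of 30 leaves the balance unchanged, and that drawing from w2 never affects w: each call to make_withdraw creates a fresh frame, and each inner withdraw rebinds only the balance in its own parent frame.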
https://inst.eecs.berkeley.edu/~cs61a/su19/lab/lab06/
In lesson 6.3 -- Local variables, we said, "An identifier's linkage determines whether other declarations of that name refer to the same object or not", and we discussed how local variables have no linkage.

Global variable and function identifiers can have either internal linkage or external linkage. We'll cover the internal linkage case in this lesson, and the external linkage case in lesson 6.7 -- External linkage.

An identifier with internal linkage can be seen and used within a single file, but it is not accessible from other files (that is, it is not exposed to the linker). This means that if two files have identically named identifiers with internal linkage, those identifiers will be treated as independent.

Global variables with internal linkage

Global variables with internal linkage are sometimes called internal variables. To make a non-constant global variable internal, we use the static keyword. Const and constexpr global variables have internal linkage by default (and thus don't need the static keyword -- if it is used, it will be ignored).

Here's an example of multiple files using internal variables:

a.cpp:

constexpr int g_x { 2 }; // this internal g_x is only accessible within a.cpp

main.cpp:

#include <iostream>

static int g_x { 3 }; // this separate internal g_x is only accessible within main.cpp

int main()
{
    std::cout << g_x << '\n'; // uses main.cpp's g_x, prints 3

    return 0;
}

This program prints:

3

Because g_x is internal to each file, main.cpp has no idea that a.cpp also has a variable named g_x (and vice versa).

For advanced readers

The use of the static keyword above is an example of a storage class specifier, which sets both the name's linkage and its storage duration (but not its scope). The most commonly used storage class specifiers are static, extern, and mutable. The term storage class specifier is mostly used in technical documentations.
The one-definition rule and internal linkage

In lesson 2.6 -- Forward declarations and definitions, we noted that the one-definition rule says that an object or function can't have more than one definition, either within a file or a program. However, it's worth noting that internal objects (and functions) that are defined in different files are considered to be independent entities (even if their names and types are identical), so there is no violation of the one-definition rule. Each internal object only has one definition.

Functions with internal linkage

Because linkage is a property of an identifier (not of a variable), function identifiers have the same linkage property that variable identifiers do. Functions default to external linkage (which we'll cover in the next lesson), but can be set to internal linkage via the static keyword:

add.cpp:

// This function is declared as static, and can now only be used within this file.
// Attempts to access it from another file via a forward declaration will fail.
static int add(int x, int y)
{
    return x + y;
}

main.cpp:

#include <iostream>

int add(int x, int y); // forward declaration for function add

int main()
{
    std::cout << add(3, 4) << '\n';

    return 0;
}

This program won't link, because function add is not accessible outside of add.cpp.

Quick Summary

We provide a comprehensive summary in lesson 6.11 -- Scope, duration, and linkage summary.

Why does this not compile? main.cpp does not see the variable in a.cpp.

a.cpp

int g_x{ 2 };

main.cpp

#include <iostream>

int main()
{
    std::cout << g_x << '\n';
    return 0;
}

cpp files can't see each other. The next lesson shows how your files can access each other's entities.

Clarification: if I don't use static on a global non-constant variable, can it be referenced in another file? If I have int g_x{5}; in one file and std::cout << g_x; in main.cpp, can I see the variable in main.cpp?

QUESTION: if we wanted a constant variable that is global with external linkage, how do we do it? Since all constant variables are internal.

Keep reading. This is answered in a couple of lessons.

Why do const objects have internal linkage by default?

Mainly so you can define them in a header file and #include them where you need them without running into problems with the one definition rule.

Ah, ok thanks.
Hello, should we favor the use of the static keyword for functions within a file as much as possible for a bigger program? Is it considered good practice?

Hi! Yes, if you don't intend on using a function outside of the current file, the function should be `static`.

Where should the `static` keyword go - header file, .cpp file or both? I ask this because I noticed that the compiler (i.e. Visual Studio) doesn't seem to care.

If I've understood correctly, you should use the static keyword both for the definition in the .cpp file as well as the declaration in the .h file. This should ensure that if you #include your .h file, the linker doesn't complain about getting a mismatch in declaration and definition when you run your program. I am no expert though, so it's probably best to just try it out!

You don't separate `static` variables or functions into header and source files. `static` is used to give these entities internal linkage, i.e. make them accessible only in the current _source file_. If you want something to be accessible across files, it must have external linkage (and not be `static`) or be `inline` and have internal linkage.

Ah, I see. So, if you don't want to use a symbol beyond a given translation unit, there is no point declaring it in a header file.

Correct.

Lesson updated, thanks for the suggestion!

Thanks for this tutorial, very well explained! Regarding "The one-definition rule and internal linkage" part, specifically this statement - "we noted that the one-definition rule says that an object or function can't have more than one definition, either within a file or a program." But what about "variable shadowing"? Isn't that more than one definition in a single file (program)?

I think in the situation of "variable shadowing", the two variables are not the same object, thus it does not break the one-definition rule.

Under the section titled Functions with Internal Linkage, the example given confuses me a bit.
Shouldn't the code in main.cpp include the header file add.h? Otherwise, how does main.cpp potentially see anything in that header file? I know the point of the exercise is to make function add() static (internal) instead of the default external. Is there another way that main.cpp was "seeing" the add.cpp file without #including the code? I hope I'm making sense here....

You don't _need_ headers. There's no connection between a header and its source file. All a header does is provide forward declarations for the definitions in the source file. In the example you're referring to, we manually forward declared `add` in main.cpp. A header wouldn't have done anything different.

I saw "static" written after the "const" keyword. Like that: Has this the same meaning as "static" written before "const"? Like that: ps: I know that a const variable has internal linkage by default, and no need for static anyway.

Yes, it's the same. C++ isn't very strict about where the `const` needs to go.

I am just trying to understand the difference between "int x" and "static int x", so I wrote the following code:

// Const.cpp
int x = 8;

// Internal.cpp
#include <iostream>
#include "Const.cpp"

int main()
{
    std::cout << x << "\n";
}

In this case I get the following error:

Error LNK1169: one or more multiply defined symbols found

But if I write the following:

static int x = 8;

Then it compiles. This violates "Internal global variables definitions". Can you please help me with this?

Never include .cpp files. `x` is defined once in Const.cpp and then again in main.cpp.

Hello! I am loving these tutorials probably way more than I should lol XD. No seriously, they helped me understand c++ better than I ever imagined. Thank you Alex and Nascardriver for the effort you put in these tutorials.
I wanted to ask if anyone knows a website for teaching HTML/CSS or python. I mean a website or an app or a youtube channel (it doesn't matter) that teaches those languages not just the syntax but also how and why those languages are the way they are, and if possible also shows you the best practices and whatnot. If there isn't, though, how can I learn those things on my own? I mean the best practices and what to do and what not to do... It's honestly sad I see no tutorials as good as you guys' that really focus on the reader understanding the fundamentals and following best practices :( Anyway, enough with my rambling, please help if you know anything :)

For python youtube channels, I would recommend Sentdex and TechWithTim. For html/css, I'd recommend FreeCodeCamp.

Hi, I have two questions:

1) "The most commonly used storage class specifier values are static, extern, and mutable." I am not an advanced learner, but I am curious to know about it: does that sentence mean that if we use any of those keywords with identifiers (static, or extern or mutable), we refer to them as "storage class specifiers"?

2) Is this use of 'static' here different from 'static' used for instance variables of a class in Object Oriented?

The meaning of `static` depends on the context. It can be used as a storage specifier, to cause internal linkage, and to declare instance-less members. You've seen the first 2 already; we show the third version later.

"Because linkage is a property of an identifier (not of a variable), functions have the same linkage property that variables do." Should this be rephrased to: "Because linkage is a property of an identifier (not of a variable), functions have the same linkage property that identifiers do."

Updated. Thanks!

How does this interact with #include? For example, would mps and mpss each utilize their own everlasting copy of g, so that the purpose of having that g centralised is defeated because it's stored twice?
(I'm guessing that the better solution would be to have it in a constants namespace).

Each gets its own `g`. Making `g` `inline` solves this.

It's though technically different than having one g with external linkage, right? The effect of using inline is that occurrences of g in mps and mpss are treated at compile time as if they were the literal "9.8". So in that case there's never even allocated memory for the double g.

Nope, "inline" is a misleading name. It doesn't mean that `inline` functions/variables get inlined (though, they might). `inline` variables don't have to be `const` either, you can reassign values during run-time. `constexpr` variables are closer to what you've explained. Their value can be computed at compile-time, so it's likely they disappear during compilation.

Thanks for the clarification. I looked up the inline specifier and it looks like this: if you declare something `inline constexpr`, it's given external linkage and the permission to have multiple definitions. Thus it may or may not be optimized away due to either of `inline` or `constexpr`, but if it is not optimized away then at least it's only allocated once. Pre-C++17 you could either (as said in the lecture) declare it as const and external and define it once (where it most surely wouldn't get optimized away and would be allocated once) or define it multiple times as internal and constexpr (where it most surely would be optimized away but allocated multiple times if the optimisation failed). I was only unsure if it's given external linkage in the case of an `inline constexpr` declaration by default, and indeed: "Inline const variables at namespace scope have external linkage by default (unlike the non-inline non-volatile const-qualified variables)", says the reference. Quite possible you said that in the lecture already and I missed it.

I was thinking till now that warnings are actually legit errors in the code that the program would ignore and compile anyway if "Treat warnings as errors" was disabled.
Thanks for clearing my doubt @nascardriver

Hello, where did you find that? I don't see that option when I press properties > general. Please help :(

Under "Functions with internal linkage", if I add

int add(int x, int y)
{
    return x - y;
}

to main.cpp and remove

int add(int x, int y);

then I get this error:

Source.cpp(1,12): error C2220: the following warning is treated as an error
Source.cpp(1,12): warning C4505: 'add': unreferenced local function has been removed

Why is that? Since add.cpp has a static function, the program should auto ignore the function in add.cpp and compile without any problem, right? But it behaves differently here.

You're not getting an error as a violation of a language rule. You're getting an error because you got a warning and you're treating warnings as errors. You're getting a warning as a friendly reminder from the compiler that you wrote a function but you're never using it (`add` in "add.cpp"). The code is fine, but you made a mistake, and you told your compiler not to let you make mistakes. If you want to compile the code anyway, you can temporarily disable "Treat warnings as errors" (or similar) in your project settings.
https://www.learncpp.com/cpp-tutorial/internal-linkage/
Please see the code below:

public class Customer
{
    private readonly IList<Order> _orders = new List<Order>();

    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Province { get; set; }

    public IEnumerable<Order> Orders
    {
        get { return _orders; }
    }

    //This line
    internal void AddOrder(Order order)
    {
        _orders.Add(order);
    }
}

This is what I would expect to see. I am looking at some code, however the line I have identified with a comment is replaced with this:

public OrderCollection Orders { get; } = new OrderCollection();

What is the benefit of encapsulating the IEnumerable in a class like this? I am talking from the perspective of DDD - an area I am trying to sharpen up!
https://extraproxies.com/what-is-the-benefit-of-encapsulating-a-collection-inside-a-class/
Route Blob storage events to a custom web endpoint (preview)

In this article, you use the Azure CLI to subscribe to Blob storage events and send them to an open source, third-party tool called RequestBin.

Note: RequestBin is an open source tool that is not intended for high throughput usage.

There are two ways to launch the Cloud Shell. If you choose to install and use the CLI locally, this article requires that you are running the latest version of Azure CLI (2.0.14 or later).

Blob storage account
To use Azure Storage, you need a storage account. Event Grid is currently in preview, and available only for storage accounts in the westcentralus and westus2 regions.

Create a message endpoint
Before subscribing to events from the Blob storage account, let's create the endpoint for the event message. Rather than write code to respond to the event, we will create an endpoint that collects the messages so you can view them. RequestBin is an open source, third-party tool that enables you to create an endpoint, and view requests that are sent to it. Go to RequestBin, and click Create a RequestBin. Copy the bin URL, because you need it when subscribing to the topic.

Subscribe to your storage account
You subscribe to a topic to tell Event Grid which events you want to track. The following example subscribes to the Blob storage account you created, and passes the URL from RequestBin as the endpoint for event notification. Replace <event_subscription_name> with a unique name for your event subscription, and <URL_from_RequestBin> with the value from the preceding section. By specifying an endpoint when subscribing, Event Grid handles the routing of events to that endpoint. For <resource_group_name> and <storage_account_name>, use the values you created earlier.
az eventgrid resource event-subscription create \
  --endpoint <URL_from_RequestBin> \
  --name <event_subscription_name> \
  --provider-namespace Microsoft.Storage \
  --resource-type storageAccounts \
  --resource-group <resource_group_name> \
  --resource-name <storage_account_name>

Trigger an event from Blob storage
Now, let's trigger an event to see how Event Grid distributes the message to your endpoint. First, let's configure the name and key for the storage account, then we'll upload a file to trigger the event. Then, browse to the RequestBin URL that you created earlier, or click refresh in your open RequestBin.
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-event-quickstart
Creating a new Django project on PythonAnywhere

So you want to create a web application, but you don't really want to do all the faffing around that is involved in setting up and configuring web servers?

Note: This tutorial is for Django 1.3 on PythonAnywhere. If you use a different version of Django, you will get weird and unhelpful errors. If you just want to follow the official django tutorial on PythonAnywhere, check out FollowingTheDjangoTutorial instead.

Well, that's one of the reasons we created PythonAnywhere. This tutorial will take you through the process of creating a working Django site with an admin interface and a front page that tells you the time. At the end of the tutorial, there's also an overview of options you can use if you already know Django, and you have already coded up a web app which you want to use on PythonAnywhere.

To follow along with this tutorial you will need a PythonAnywhere account. Go and sign up if you don't already have one, then come back here.

Contents
- Quickstarting a new project
- Creating an app inside the Django project
- Configuring the database and enabling the admin interface
- Defining your urls
- Creating a template
- Writing the first view
- Serving static files (css, images etc)
- Existing apps / manual config

Quickstarting a new project

Log into PythonAnywhere, go to the Web tab, and click on the Add a new web app link. This will pop up a dialog, whose first page asks you to enter your own domain or gives you the option of using <username>.pythonanywhere.com. Just select that for now and click next.

Now you will be presented with a list of the various Web frameworks you can choose from. Select Django. This will bring up some options - feel free to change the Project Name to something more descriptive. The default directory is fine, or if you prefer you can put the app into your Dropbox - but you'll need to have set up a shared Dropbox folder already.
There's more info on that in the Files tab on your dashboard.

Click the Next button, and after a few seconds, the wizard dialog will disappear and a very basic app will be up and running! Just click the URL at the top of the screen to go see it - you should see the default "Welcome to Django" page.

Creating an app inside the Django project

Django suggests that you structure your sites as projects which contain one or more apps - the idea is that you can re-use an app in different projects. Let's create our first app inside your project. In order to do this you start a Bash Console -- you can do this from the Consoles tab on the dashboard. In the console, enter the following commands:

cd mysite
python ./manage.py startapp myapp

Replace mysite with the name of your project, if you chose a different one. If you do an ls, you'll see that Django has created a new folder called myapp inside your project. We'll need this console again later, so why not keep it open for now, and proceed with the rest of the tutorial in a different tab.

Configuring the database and enabling the admin interface

Django needs a database connection to do pretty much anything. PythonAnywhere provides support for MySQL databases, but for now we will just use sqlite3, a file-based database that does not require setting up a database account and password. The file that contains all the settings information for Django is called, naturally enough, settings.py. By default it lives in your Django project directory. Go to the Files tab on your dashboard, find your project folder, and then find settings.py inside it. We will need to change the DATABASES and INSTALLED_APPS sections so that they look like the code below, with your username and project name replaced on the line beginning with NAME. Do not change any other bits of it at this time.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': '/home/<my name>/<my_test_project>/db.sqlite',
        'USER': '',
        'PASSWORD': '',
        'HOST': '',
        'PORT': '',
    }
}

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.admin',
    'mysite.myapp',
)

Again, replace mysite.myapp with your-project-name.your-app-name if necessary.

Now that you have told Django what database to use, you have to run a management command in order for it to create the initial tables and the first admin user. Go back to your Bash console and enter the following command:

python ./manage.py syncdb

You will be asked a series of questions. You should enter a username, email address, and password for the first admin user. You will need this information to log in to Django's admin interface, so make sure that you remember the details somehow. The last message that Django will print out is No fixtures found -- once it's printed that and returned to the Bash prompt, you're ready to continue.

Defining your urls

The next step is defining the urls for your application. This means you are starting to tell Django what to do when a user visits a certain location in your web site. The file to edit is called urls.py and it should be in the same directory as your settings.py file. Open it up and make it look like the file below - you'll need to uncomment 3 lines to do with admin and fix the first url line. Remember that you have to delete the leading spaces as well as the # marks to uncomment. Remember, as always, that each time you see mysite or myapp, you should replace them with your project name or app name, if necessary.

from django.conf.urls.defaults import patterns, include, url

# Uncomment the next two lines to enable the admin:
from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
    # This is going to be our home view.
    # We'll uncomment it later
    # url(r'^$', 'mysite.myapp.views.home', name='home'),

    # Uncomment the admin/doc line below to enable admin documentation:
    # url(r'^admin/doc/', include('django.contrib.admindocs.urls')),

    # Uncomment the next line to enable the admin:
    url(r'^admin/', include(admin.site.urls)),
)

In this file you are defining two url patterns. The first one matches a blank string, which is what happens when a user visits http://<your username>.pythonanywhere.com. The second one matches "admin/", which is Django's default admin interface, which should now be working. You can check that now if you like. First go to the Web tab and click the reload button to activate your changes. Then, visit:

http://<your username>.pythonanywhere.com/admin/

(don't forget the /admin/ at the end of the URL!)

You should be able to log in with the username and password that you provided earlier. When you are done, come back here to continue. If you get an error rather than the admin interface, then go back through each of these steps and check that everything is exactly like the examples. You also might want to check the error logs, available from the Web tab, and see if they can give you additional clues about where the mistake might be.

Creating a template

Now that you have a working admin interface it is time to create a template. First you need to create a directory for your template inside your app folder:

- Go to the Files tab.
- Go to your mysite directory.
- Then go down to myapp.
- Enter templates into the "Enter a new directory name" field at the top of the page and click the "New" button next to it.

Now create a new file inside the templates folder called home.html, and make it look like this.
<html>
  <head>
    <title>My Python Anywhere hosted Django app</title>
  </head>
  <body>
    <h1>My Python Anywhere hosted Django app</h1>
    <p>Well, since it's already {{ right_now.minute }} past {{ right_now.hour }} UTC,
       that is as far as we are going to take you in this tutorial.</p>
    <p>What you do next is up to you...</p>
  </body>
</html>

The values inside the {{ }} are going to be replaced by dynamic content when we complete our final task, which is writing a view.

Writing the first view

Views are Django functions which take a request and return a response. We are going to write a very simple view called home which uses the home.html template and the datetime module to tell us what the time is whenever the page is refreshed. The file we need to edit is called views.py and it is inside mysite/myapp/. Copy the code below into it and save the file.

from datetime import datetime
from django.shortcuts import render

def home(request):
    return render(request, 'home.html', {'right_now': datetime.utcnow()})

Now that we've defined the view, we can uncomment this line in urls.py:

# url(r'^$', 'mysite.myapp.views.home', name='home'),

The last step is reloading your web app so that the changes are noticed by the web server. Go and do that now, back in the Web tab with the big reload button. If you have followed along with this tutorial you should now have a working, dynamic page at http://<your username>.pythonanywhere.com/. You can continue to experiment by changing the view and the template, and progress through the full Django tutorial to see what else is possible. But for now, that's the end of the tutorial. Happy coding!

Serving static files (css, images etc)

Static files are an important part of what makes a web page look right. When you quickstart a Django app on PythonAnywhere, two locations are automatically created for static files. Any files placed in the static or media folders inside your project will be available at the corresponding URLs on your site.
You can change this by editing the static file entries on the Web tab for your web app.

Existing apps / manual config

If you already have a web app, the idea is that it should be just as easy to host your project on PythonAnywhere as it is to host it on your own PC using the Django dev server. There are just a couple of subtleties:

- adding the right path to sys.path in wsgi.py
- setting up your database in settings.py -- you'll need the full path for sqlite
- setting up your static files

Note down the path to your project's parent folder and the project name

There are several ways you might have got a Django project onto PythonAnywhere. Maybe you started one from scratch using django-admin.py startproject. Maybe you pulled it in from GitHub or another code sharing site using git or a similar VCS tool. Maybe it's in your Dropbox! Either way, the thing to do is make a note of the path to the parent folder of the project root. The project root is the folder which contains settings.py; for example, let's say it's:

/home/my_username/projects/my_project/

In this case, you want to make a note of the path to the project's parent folder:

/home/my_username/projects

You also need to make a note of the name of the project root folder, in this case:

my_project

Those two together should add up to the full path to the project root. Crystal clear? If you have problems with import errors, check out our Guide to sys.path and import errors.

Edit the wsgi file

Go to your PythonAnywhere Dashboard and click on the Web tab, then click the Add a new web app button. In the first step, just enter the domain where you want to host your site; in the second, instead of selecting "Django", select Manual configuration. This will set up a web app but won't try to create a new Django project for you.
Once you've clicked next on the next page, you'll see near the top of the page some text saying something like:

It is configured via a WSGI file stored at: /var/www/something_com_wsgi.py

The filename is a link, and if you click it you'll be taken to an editor displaying a WSGI configuration file that tells PythonAnywhere how to manage your site. If you're a WSGI wizard then you can probably work out what to do from here. If not, just follow the steps below.

Delete the contents and replace them with the code below, replacing /home/my_username/projects with the path to the parent folder of your project, which you noted down earlier, and my_project with your project name.

# +++++++++++ DJANGO +++++++++++
import os
import sys

# assuming your Django settings file is at
# '/home/my_username/projects/my_project/settings.py'
path = '/home/my_username/projects'
if path not in sys.path:
    sys.path.append(path)

os.environ['DJANGO_SETTINGS_MODULE'] = 'my_project.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

If you're using a version of Django later than 1.6 (which you really shouldn't be with this guide, which is for 1.3), you'll need to replace the last two lines with this:

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

Your Django app should now work, and you can visit it at your domain. Again, if you have any problems, check out the guide to sys.path and import errors.

Setup the database in settings.py, and syncdb

You need to make sure of three things:

- if using sqlite, you must have the full path to your database
- if using MySQL, you'll need the database name, password, and host (yourusername.mysql.pythonanywhere-services.com if you're using our MySQL service)
- finally, make sure all your apps are in INSTALLED_APPS

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': '/home/my_username/my_test_project/db.sqlite',  # absolute location is required
        'USER': '',
        'PASSWORD': '',
        'HOST': '',
        'PORT': '',
    }
}

INSTALLED_APPS = (
    #[...]
    'django.contrib.admin',
    'my_test_project.my_app',
)

Now open up a Bash console to perform the initial database creation:

cd <your project name>
./manage.py syncdb

Follow the usual prompts to create an admin user and password.

Static files

Reload the web server and enjoy!

Now you just need to reload the web server so that it notices the changes you have made. Visit the Web tab on the PythonAnywhere dashboard and click the "Reload web app" button. That's it. At this stage you have a working admin interface, which you can visit at your domain's /admin/ URL.
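The WSGI contract that these configuration files rely on is small, and it can help to see it in isolation. Here is a minimal, Django-independent sketch using only the Python standard library; the response body and header values are illustrative, not part of the tutorial:

```python
from wsgiref.util import setup_testing_defaults

# A minimal WSGI application callable -- the same shape that
# django.core.handlers.wsgi.WSGIHandler() exposes to the web server:
# it receives the request environ and a start_response callback,
# and returns an iterable of bytes.
def application(environ, start_response):
    body = b"Hello from a WSGI app"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response("200 OK", headers)
    return [body]

if __name__ == "__main__":
    # Exercise the callable without a real server.
    environ = {}
    setup_testing_defaults(environ)
    captured = {}

    def start_response(status, response_headers):
        captured["status"] = status

    chunks = application(environ, start_response)
    print(captured["status"])
    print(b"".join(chunks).decode())
```

Running it prints the status line and body, which is all a server like PythonAnywhere's needs from the `application` object in your wsgi.py.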
https://help.pythonanywhere.com/pages/DjangoTutorial
ODBC Installation and Validation on UNIX® Systems

This chapter provides detailed information about ODBC installation and validation on UNIX® and related operating systems. It discusses the following topics:

- Troubleshooting for Shared Object Dependencies — how to validate dependencies on shared objects.
- Performing a Stand-alone Installation — installing the InterSystems ODBC client driver and supported driver manager on UNIX®.
- Custom Installation and Configuration for iODBC — installing and configuring the iODBC driver manager, and configuring PHP for iODBC.
- Key File Names — specific file names of some of the important installed components.

Performing a Stand-alone Installation

By default, a full ODBC installation is performed with a standard InterSystems installation. If you perform a custom installation (as described in the Installation Guide), you can select the "SQL client only" option to install only the client access components (ODBC client driver). In addition, however, a stand-alone installer is provided for InterSystems ODBC. To use this installer:

1. Create the directory where you wish to install the client, such as /usr/irisodbc/.

2. Copy the appropriate zipped tar file into the directory that you just created. The ./dist/ODBC/ directory contains zipped tar files with names like the following:

   ODBC-release-code-platform.tar.Z

   where release-code is a release-specific code (that varies among InterSystems versions and releases) and platform specifies the operating system that the ODBC client runs on.

3. Go to the directory you created and manually unpack the .tar file, as follows:

   # gunzip ODBC-release-code-platform.tar.Z
   # tar xvf ODBC-release-code-platform.tar

   This creates bin and dev directories and installs a set of files.

4. Run the ODBCInstall program, which will be in the directory that you created. This program creates several sample scripts and configures irisodbc.ini under the mgr directory.
For example:

   # pwd
   /usr/irisodbc
   # ./ODBCInstall

In some releases, the ./dist/ODBC/ directory contains the following command to display the platform name that identifies the file you need:

   # ./cplatname identify

This command is not present in releases where it is not required.

SQL Gateway Drivers for UNIX® and Related Platforms

The <install-dir>/bin/ directory contains the following versions of the shared object used by the SQL Gateway. This enables you to connect from InterSystems IRIS to other ODBC client drivers. These files are not installed if you perform a stand-alone installation.

Linked against iODBC:

- cgate.so — supports 8-bit ODBC.
- cgateiw.so — supports Unicode ODBC.

Linked against unixODBC:

- cgateu.so — supports 8-bit ODBC.
- cgateur64.so — supports 8-bit ODBC for 64-bit unixODBC.

For more information, see "Using an InterSystems Database as an ODBC Data Source on UNIX®".

When using third-party shared libraries on a UNIX® system, LD_LIBRARY_PATH must be defined by setting the InterSystems IRIS LibPath parameter (see "LibPath" in the Configuration Parameter File Reference). This is a security measure to prevent unprivileged users from changing the path.

Custom Installation and Configuration for iODBC

If you want to build your own iODBC driver manager to operate under custom conditions, you can do so. The iODBC executable and include files are in the directory install-dir/dev/odbc/redist/iodbc/. You need to set LD_LIBRARY_PATH (LIBPATH on AIX®) and the include path in order to use these directories to build your applications.

If you want to customize the iODBC driver manager, you can also do that. Download the source from the iODBC web site and follow the instructions.

Configuring PHP with iODBC

You can use InterSystems ODBC functionality in conjunction with PHP (PHP: Hypertext Preprocessor, a recursive acronym). PHP is a scripting language that allows developers to create dynamically generated pages.
The process is as follows:

1. Get or have root privileges on the machine where you are performing the installation.
2. Install the iODBC driver manager. To do this:
   - Download the kit.
   - Perform a standard installation and configuration, as described earlier in this chapter.
   - Configure the driver manager for use with PHP as described in the iODBC+PHP HOWTO document on the iODBC web site. Note that LD_LIBRARY_PATH (LIBPATH on AIX®) in the iODBC PHP example does not get set, due to security protections in the default PHP configuration. Also, copy libiodbc.so to /usr/lib and run ldconfig to register it without using LD_LIBRARY_PATH.
3. Download the PHP source kit and un-tar it.
4. Download the Apache HTTP server source kit and un-tar it.
5. Build PHP and install it.
6. Build the Apache HTTP server, install it, and start it.
7. Test PHP and the web server using info.php in the Apache root directory, as specified in the Apache configuration file (often httpd.conf).
8. Copy the InterSystems-specific initialization file, irisodbc.ini, to /etc/odbc.ini, because this location functions better with the Apache web server if the $HOME environment variable is not defined.
9. Configure and test the libirisodbc.so client driver file.
10. Copy the sample.php file from the InterSystems ODBC kit to the Apache root directory (that is, the directory where info.php is located), and tailor it to your machine for the location of your InterSystems installation directory. You can then run the sample.php program, which uses the SAMPLES namespace, by pointing your browser at it.

Key File Names

Depending on your configuration needs, it may be useful to know the specific file names of some of the installed components. In the following lists, install-dir is the InterSystems installation directory (the path that $SYSTEM.Util.InstallDirectory() returns on your system).
The install-dir/bin/ directory contains the following driver managers:

- libiodbc.so — the iODBC driver manager, which supports both 8-bit and Unicode ODBC APIs.
- libodbc.so — the unixODBC driver manager, for use with the 8-bit ODBC API.

Between releases of the ODBC specification, various data types such as SQLLEN and SQLULEN changed from being 32-bit values to 64-bit values. While these values have always been 64-bit in iODBC, they changed from 32-bit to 64-bit in unixODBC. As of unixODBC version 2.2.14, the default build uses 64-bit integer values. InterSystems drivers are available for both 32-bit and 64-bit versions of unixODBC.

InterSystems ODBC client drivers are provided for both ODBC 2.5 and ODBC 3.5. The ODBC 3.5 versions convert 3.5 requests to the older 2.5 automatically, so in most cases either driver can be used. The install-dir/bin/ directory contains the following versions (*.so or *.sl):

- libirisodbc — default driver for 8-bit ODBC 2.5
- libirisodbc35 — supports 8-bit ODBC 3.5
- libirisodbciw — supports Unicode ODBC 2.5
- libirisodbciw35 — supports Unicode ODBC 3.5
- libirisodbciw.dylib — supports Unicode ODBC for macOS
- libirisodbcu — default driver for 8-bit ODBC 2.5
- libirisodbcu35 — supports 8-bit ODBC 3.5
- libirisodbcur64 — supports 8-bit ODBC 2.5 for 64-bit unixODBC
- libirisodbcur6435 — supports 8-bit ODBC 3.5 for 64-bit unixODBC

The install-dir/mgr/irisodbc.ini file is a sample ODBC initialization file. The files for the test programs are discussed in "Testing the InterSystems ODBC Configuration".
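For orientation, a data source entry in odbc.ini that points at one of these driver files might look like the sketch below. The DSN name, host, port and namespace here are illustrative assumptions, not values taken from this chapter; check the sample irisodbc.ini and your server's settings for the real ones:

```ini
[MyIrisDSN]
; Illustrative DSN pointing at the 64-bit unixODBC, ODBC 3.5 driver
Driver      = /usr/irisodbc/bin/libirisodbcur6435.so
Description = InterSystems IRIS via unixODBC (assumed paths/values)
Host        = localhost
Port        = 1972
Namespace   = USER
UID         = _SYSTEM
```

An entry like this is what tools layered on the driver manager (isql, PHP's ODBC functions, and so on) resolve a DSN name against.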
https://docs.intersystems.com/irisforhealthlatest/csp/docbook/Doc.View.cls?KEY=BNETODBC_unixinst
Help with queue program

Jackii Jonsen (Greenhorn) posted Jan 12, 2005 18:47:00

Hello... is there anyone who can help me with completing this problem? I'm learning about queues and this program was assigned to me as the last program of this semester. I want to get it working... but I have no idea how to deal with this program. So... is there anyone who can complete this program so it works? Its due date is tomorrow and... I hope someone can spend some time on it. I know it's actually too much to ask someone to spend his own time on my work... but please consider it as rescuing someone from the bottom of hell. I will fail if it is not finished by tomorrow... Please do me this biggest favor of my whole life...

This is the direction of the program:

The Cafe would like to computerize its seating and waiting procedures. The dining room has 1 table for 8 customers, 2 tables for 6 customers and 4 tables for 4 customers. The waiting area can hold up to 20 people. Assuming that the parties exit in the same order that they enter, write a program that will allow the Cafe to keep track of the people in the eating area and the people in the waiting area. You may not separate any party so that they can be seated at different tables. Your output should show the results of each party entering the restaurant. If the waiting area is full, or would exceed the allowable number of people, the party is not to be allowed in. (Note: the number of customers should match the spaces at the table. E.g. a party of 4 only sits at a table for 4 customers.) (Note: if more than 20 people would be in the waiting area, print something like "Sorry, we don't have a place for you".)

The output should look similar to the following:

1. Isle--party of 4 is seated
2.
Hood--party of 7 is seated
3. Time--party of 8 is assigned to the waiting area
4. Gun--party of 12 is not allowed in the restaurant
5. Isle--party of 4 leaves the eating area
6. Hood--party of 7 leaves the eating area
   Time--party of 8 is seated
7. Lee--party of 6 is seated

Use the following data for your program.

Final Data:
1. Dean Isle--party of 6 arrives
2. Justin Time--party of 4 arrives
3. Ray Gun--party of 7 arrives
4. Cal Ander--party of 4 arrives
5. Frank Lee--party of 3 arrives
6. Tim Buhr--party of 5 arrives
7. Eve Ning--party of 4 arrives
8. Pam Flet--party of 4 arrives
9. Lee Way--party of 8 arrives
10. Dan DeLion--party of 4 arrives
11. Carol Ling--party of 5 arrives
12. Dean Isle--leaves
13. Dan Druff--party of 4 arrives
14. Justin Time--leaves
15. Ray Gun--leaves
16. Barb Ell--party of 3 arrives
17. Bill Board--party of 8 arrives
18. Don Key--party of 6 arrives
19. Chuck Wagon--party of 4 arrives
20. Cal Ander--leaves
21. Frank Lee--leaves
22. Tim Buhr--leaves
23. Joe Kuhr--party of 6 arrives
24. Eve Ning--leaves
25. Will Power--party of 10 arrives.
And the program my friend and I tried is here (the forum software seems to have eaten some of the array brackets, and we had inconsistent names like Wait/wait and a "talbe" typo, cleaned up below):

import java.util.*;

public class Queue2 {

    final static byte z = 4;

    public static void main(String[] args) {
        ListQueue[] wait = new ListQueue[5];
        ListQueue[] seat = new ListQueue[5];
        byte[] table = {0, 3, 1, 3, 2};
        byte waitArea = 0;

        for (int i = 0; i < wait.length; i++) {
            wait[i] = new ListQueue();
            seat[i] = new ListQueue();
        }

        Family Isle = new Family("Dean Isle", 6);
        waitArea = arrive(Isle, wait, seat, table, waitArea);
        Family JTime = new Family("Justin Time", 4);
        waitArea = arrive(JTime, wait, seat, table, waitArea);
        Family Gun = new Family("Ray Gun", 7);
        waitArea = arrive(Gun, wait, seat, table, waitArea);
        Family Ander = new Family("Cal Ander", 4);
        waitArea = arrive(Ander, wait, seat, table, waitArea);
        Family Lee = new Family("Frank Lee", 3);
        waitArea = arrive(Lee, wait, seat, table, waitArea);
        Family Buhr = new Family("Tim Buhr", 5);
        waitArea = arrive(Buhr, wait, seat, table, waitArea);
        Family Ning = new Family("Eve Ning", 4);
        waitArea = arrive(Ning, wait, seat, table, waitArea);
        Family Flet = new Family("Pam Flet", 4);
        waitArea = arrive(Flet, wait, seat, table, waitArea);
        Family Way = new Family("Lee Way", 8);
        waitArea = arrive(Way, wait, seat, table, waitArea);
        Family DeLion = new Family("Dan DeLion", 4);
        waitArea = arrive(DeLion, wait, seat, table, waitArea);
        Family Ling = new Family("Carol Ling", 5);
        waitArea = arrive(Ling, wait, seat, table, waitArea);
        waitArea = depart(Isle, wait, seat, table, waitArea);
        Family Druff = new Family("Dan Druff", 4);
        waitArea = arrive(Druff, wait, seat, table, waitArea);
        waitArea = depart(JTime, wait, seat, table, waitArea);
        waitArea = depart(Gun, wait, seat, table, waitArea);
        Family Ell = new Family("Barb Ell", 3);
        waitArea = arrive(Ell, wait, seat, table, waitArea);
        Family Board = new Family("Bill Board", 8);
        waitArea = arrive(Board, wait, seat, table, waitArea);
        Family DKey = new Family("Don Key", 6);
        waitArea = arrive(DKey, wait, seat, table, waitArea);
        Family Wagon = new Family("Chuck Wagon", 4);
        waitArea = arrive(Wagon, wait, seat, table, waitArea);
        waitArea = depart(Ander, wait, seat, table, waitArea);
        waitArea = depart(Lee, wait, seat, table, waitArea);
        waitArea = depart(Buhr, wait, seat, table, waitArea);
        Family Kuhr = new Family("Joe Kuhr", 6);
        waitArea = arrive(Kuhr, wait, seat, table, waitArea);
        waitArea = depart(Ning, wait, seat, table, waitArea);
        Family Power = new Family("Will Power", 10);
        waitArea = arrive(Power, wait, seat, table, waitArea);
    }

    public static byte depart(Family F, ListQueue[] wait, ListQueue[] seat,
                              byte[] b, byte W) {
        byte size = F.mySize;
        if (size % 2 == 1) size++;
        seat[size - z].dequeue();
        System.out.println(F.myName + ", party of " + F.mySize + ", has left.");
        b[size - z]--;
        if (!wait[size - z].isEmpty()) {
            Family N = (Family) wait[size - z].dequeue();
            seat[size - z].enqueue(N);
            b[size - z]++;
            System.out.println(N.myName + ", party of " + N.mySize
                    + ", has been seated from the waiting area.");
            W -= N.mySize;
        }
        return W;
    }

    public static byte arrive(Family F, ListQueue[] wait, ListQueue[] seat,
                              byte[] b, byte W) {
        byte size = F.mySize;
        if (size % 2 == 1) size++;
        if (size > 8) {
            System.out.println(F.myName + ", party of " + F.mySize
                    + ", is not allowed in the restaurant.");
        } else if (W <= 20 - size) {
            wait[size - z].enqueue(F);
            W += F.mySize;
            System.out.println(F.myName + ", party of " + F.mySize
                    + ", is in the waiting room.");
        } else {
            System.out.println(F.myName + ", party of " + F.mySize
                    + ", has been asked to leave.");
        }
        return W;
    }
}

I think one of the problems is that it doesn't have a Family class that supports the data of the parties... I know it's awkward... but please help. Thank you so much, from the bottom of my heart.

[ EJFH: Added more meaningful subject line.
]
[ January 13, 2005: Message edited by: Ernest Friedman-Hill ]

Jimmy Die (Ranch Hand) posted Jan 13, 2005 00:05:00

Hi, I think that you're on the right track. I do not see a Family class, yet you have created instances of Family. Why don't you try to write some code for your Family class and post it here as a start? We can look at it and tweak it! I would finish your homework for you, but I'm doing someone else's on a different thread. Give it a shot and let's see what can be done in 7 hours!

Barry Higgins (Ranch Hand) posted Jan 13, 2005 03:46:00

There are a few ambiguities here in the spec! If a table for 4 departs and the next in the queue are parties of 8 and 4 respectively, can the party of 4 "skip" the queue to take the table, or do they have to wait until the party of 8 is seated before they can take their table?

Ernest Friedman-Hill (author and iconoclast) posted Jan 13, 2005 07:40:00

As Jimmy says, you'll need to write a Family class. One thing that would help would be to use the standard Java convention of naming variables with a lower case initial letter, and classes with an upper case initial; this makes the code much easier to read at a glance. The easier the code is to read, the easier it is to work on!

I agree. Here's the link:

subject: Help with queue program
http://www.coderanch.com/t/398239/java/java/queue-program
What's new in Opera development snapshots: 28 February edition

By Divya Manian (nimbupani). Tuesday, February 28, 2012 11:46:19 PM

There is a new Opera Next out! Download it from the links in the Desktop team's blog post, or wait a while for it to show up in your Opera Next updates (the Browser Identification section should show Presto/2.10.269 in opera:about).

Major Updates

We now have better precision handling of fixed-point values used for lengths and font sizes. This has been a significant issue with Opera, as many units were rounded off. Vadim has a neat demo of how this works in reality. Check it in Opera Next and marvel at the precision!

We have updated the media query implementation to match the latest drafts of the specs, and to pass all the W3C Media Queries tests. Now you can use dpi, dpcm and dppx as unit values for the resolution media query feature. This also fixes issues with Opera applying rules that are within invalid/incorrect media queries.

CSS

- Not quite CSS, but malformed fonts never stopped loading; now they do.
- Setting the cursor property on input elements now works.
- The spec allows border-radius to inherit. Previously this used to fail in Opera; now it no longer does.
- There used to be random artifacts on linear gradients used with position: absolute, but this has been fixed.
- Inset box-shadow offsets were calculated incorrectly if border-top-width or border-left-width was 0. This has been fixed.
- Opera's ::first-letter CSS selector used to select even punctuation characters, but this is no longer the case.

HTML

- For some reason, entering 5210000010001001 into a field of type number caused a validation error. This is no longer the case.
- Setting bgcolor=transparent on a table element did not make the underlying background color show through, but rendered the table blue instead. This has been fixed.

SVG

- Hiding an element using JavaScript used to prevent hover on the underlying element. This has now been fixed.
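The new resolution units mentioned under Major Updates can be exercised like this (an illustrative sketch, not from the original post; the selector and asset names are made up):

```css
/* 2dppx is equivalent to 192dpi (2 x 96dpi), so both of these
   target high-density ("retina") screens. */
@media (min-resolution: 2dppx) {
  .logo {
    background-image: url(logo@2x.png);
    background-size: 100px 40px;
  }
}

/* dpi is handy for print stylesheets. */
@media print and (min-resolution: 300dpi) {
  body { font-size: 10pt; }
}
```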
- Previously, animated SVG with display: none would trigger repaints of the whole view. This was one of the primary reasons why Dabblet was slow in Opera. This has now been fixed.

XML

- XML documents had no document.elementFromPoint, but this has now been fixed.
- XML namespaces were also not output correctly when XML was serialised for innerHTML. This has now been fixed.

ECMAScript

- Regex matching failed to match the BOM for \s. This has now been fixed.
- Array.prototype.join and Array.prototype.concat have been made faster.
- Improved parseInt performance.
- The incorrect cache resolution over string values has now been fixed.
- Not quite ECMAScript, but relevant nonetheless: line numbers were previously reported relative to the script tag in stack traces (when you do try {} catch (e) {}). This has been fixed. Thanks to fearphage for the bug report!
- Number.prototype.toString() was not returning accurate values for large non-base-10 numbers. This has now been fixed (Chrome seems to suffer from the same bug).
- Fixes for JSON.stringify(): a bunch of tests from the JSON-test-suite were imported and used to fix our JSON.stringify() implementation. Thanks to Luke Smith for this test suite!

APIs

WebRTC

- The getUserMedia implementation has been updated to accept MediaStreamOptions.
- If you had allowed a camera to be accessed from multiple domains, it used to crash on reload; it does not now.

Canvas

- Applying a shadow previously prevented a subsequent fill of a canvas area. This has been fixed.

DOM

- If you had previously set the charset of a script element from within JavaScript (e.g. script.charset = "ISO-8859-1"), you would have noticed that it got ignored. We no longer ignore it.
- selectionStart/selectionEnd were working incorrectly in a text field. This has been fixed.
- Constants on the DOMException and Node interfaces had writable and configurable set to true. This has now been fixed.
- Read-only properties like event.target, if set in your script, would previously throw an exception.
This has been updated to throw only in ECMAScript strict mode, and to have no effect otherwise.

- Calling initEvent on a dispatched event previously threw an error; this has been rectified to have no effect at all.
- Calling preventDefault() on a non-cancelable event previously returned true and was executed, but has now been fixed to have no effect. Thanks for the report, Romuald Quantin!
- Previously the scroll event did not fire when scrolling within a textarea. This has now been fixed.
- By now you must be sensing a theme to all the DOM fixes: preventing unwanted errors from being thrown. In this vein, we have also stopped XHR from firing error events and returning the status code as 0 when HTTP responses are anything but 200. In this snapshot, Opera will transparently pass through the right HTTP response codes.
- You can now set the responseType of an XMLHttpRequest to json. This means the data returned will be a JavaScript object parsed from the JSON that the server sends in response to the request.

Misc

How do browsers do page scroll when you press the "page down"/"space" key? How do they know how much to scroll by? There is no standard way of doing this, but we had an interesting issue where Opera was doing it significantly differently from other browsers. From our investigation, it seems that Chrome 15+, Safari 5+ and IE9+ scroll by innerHeight - (innerHeight * 12.5%), while Gecko scrolls by innerHeight - (innerHeight * 10%). This snapshot aligns our page scroll on page down/space with WebKit's and Trident's behaviour. Here is a fun screenshot of this behaviour across current Opera/IE/Firefox.

- 32-bit builds running on a 64-bit OS now include "WOW64" in the User-Agent string.

Comments

Martin Kadlec (BS-Harou) # Wednesday, February 29, 2012 3:48:41 PM

The evil number... There is no getUserMedia method on the navigator object.

netwolf # Thursday, March 1, 2012 8:47:26 PM

Originally posted by BS-Harou: I'd really like to know why it's exactly this number. It's not a power of 2, might be a prime,...
what is so special about it that it's the only number that's causing issues in this context?

Mimis Mum (mimi_s_mum) # Thursday, March 15, 2012 6:59:58 PM

Originally posted by rseiler: But not in this case.

I think Opera's original page-down behaviour is superior and should be retained. Opera developers would do well to ask users' opinions first when changing something not for better functionality, but only for the sake of being in line with other browsers, especially if the change means altered or lost functionality for long-time users. Isn't that what the user forums are for?
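As a side note on the two scroll formulas quoted in the Misc section, the difference is easy to quantify. A quick check of the numbers (illustrative, standard-library Python; the 800px viewport height is just an example):

```python
def webkit_trident_step(inner_height):
    # Chrome 15+, Safari 5+, IE9+: scroll by innerHeight minus 12.5% overlap.
    return inner_height - inner_height * 0.125

def gecko_step(inner_height):
    # Gecko: scroll by innerHeight minus 10% overlap.
    return inner_height - inner_height * 0.10

if __name__ == "__main__":
    h = 800  # example viewport height in CSS pixels
    print(webkit_trident_step(h))  # 700.0
    print(gecko_step(h))           # 720.0
```

So on an 800px-tall viewport, Gecko moves 20px further per keypress than the WebKit/Trident behaviour Opera adopted in this snapshot.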
http://my.opera.com/ODIN/blog/2012/02/28/whats-new-in-opera-development-snapshots-28-february-edition?cid=84591072
algernon alternatives and similar packages

Based on the "Server Applications" category. Alternatively, view algernon alternatives based on common mentions on social networks and blogs.

- traefik (10.0/9.3): The Cloud Native Application Proxy
- etcd (10.0/9.7): Distributed reliable key-value store for the most critical data of a distributed system
- Caddy (10.0/9.1): Fast, multi-platform web server with automatic HTTPS
- nsq (9.9/7.5): A realtime distributed messaging platform
- consul (9.9/9.9): A distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure
- Vault (9.9/9.8): A tool for secrets management, encryption as a service, and privileged access management
- minio (9.9/9.8): High performance, Kubernetes-native object storage
- apex (9.6/2.1): Build, deploy, and manage AWS Lambda functions with ease (with Go support!)
- Ponzu (9.3/0.0): Headless CMS with automatic JSON API, featuring auto-HTTPS from Let's Encrypt, HTTP/2 Server Push, and a flexible server framework written in Go
- RoadRunner (9.3/9.8): High-performance PHP application server, load balancer and process manager written in Golang
- Jocko (9.2/0.4): Kafka implemented in Golang with built-in coordination (no ZK dependency, single binary install, cloud native)
- Easegress (9.0/9.6): A cloud-native traffic orchestration system
- SFTPGo (8.8/9.5): Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support; S3, Google Cloud Storage, Azure Blob
- devd (8.8/0.0): A local webserver for developers
- Fider (8.6/8.8): Open platform to collect and prioritize product feedback
- discovery (8.4/1.1): A registry for resilient mid-tier load balancing and failover
- Flagr (8.3/4.1): A feature flagging, A/B testing and dynamic configuration microservice
- Key Transparency (8.2/1.0): A transparent and secure way to look up public keys
- Rendora (8.2/0.0): Dynamic server-side rendering using headless Chrome to effortlessly solve the SEO problem for modern JavaScript websites
- Trickster (8.1/7.8): Open source HTTP reverse proxy cache and time series dashboard accelerator
- GeoDNS in Go (8.0/5.6): DNS server with per-client targeted responses
- flipt (8.0/9.0): An open-source, on-prem feature flag solution
- jackal (7.7/6.5): Instant messaging server for the Extensible Messaging and Presence Protocol (XMPP)
- Sparta (7.1/7.9): Go microservices, powered by AWS Lambda
- Golang API Starter Kit: Go server/API boilerplate using best practices (DDD, CQRS, ES, gRPC)
- Walrus (6.1/9.3): Fast, secure and reliable system backup, set up in minutes
- go-feature-flag (6.0/8.9): A simple and complete feature flag solution, without any complex backend system to install; all you need is a file as your backend
- goproxy (5.8/1.4)
- Aegis: Serverless Golang deploy tool and framework for AWS Lambda
- Eru (5.5/9.0): A simple, stateless, flexible, production-ready orchestrator designed to easily integrate into existing workflows; can run any virtualization things over long or short periods
- marathon-consul (5.5/0.0): Integrates Marathon apps with Consul service discovery
- dudeldu (4.6/0.0): A simple SHOUTcast server
- Simple CRUD App w/ Gorilla/Mux, MariaDB: Simple CRUD application with Go, Gorilla/mux, MariaDB, Redis
- lets-proxy2 (3.6/7.1): Reverse proxy that automatically obtains TLS certificates from Let's Encrypt
- lama.sh (3.4/0.0): Run "curl -L lama.sh | sh" to start a web server
- Euterpe (3.1/9.1): Self-hosted music streaming server with RESTful API and web interface; think of it as your very own Spotify!
- psql-streamer (2.8/0.0): Stream database events from PostgreSQL to Kafka
- cortex-tenant (2.6/5.7): Prometheus remote write proxy that adds a Cortex tenant ID based on metric labels
- autobd (2.4/0.0): An automated, networked and containerized backup solution
- nginx-prometheus (2.2/0.0): Turn Nginx logs into Prometheus metrics
- yakvs (2.1/1.1): A small, networked, in-memory key-value store
- simple-jwt-provider (2.0/4.8): Simple and lightweight provider which exhibits JWTs; supports login, password reset (via mail) and user management
- go-proxy-cache (1.9/8.3): Simple reverse proxy with caching, written in Go, using Redis
- protoxy (1.8/0.7): A proxy server that converts JSON request bodies to protocol buffers
- riemann-relay (0.2/0.0): Service for relaying Riemann events to Riemann/Carbon destinations
- go-bp (0.2/6.8): go-macaron starter web application
- Moxy (0.2/8.1): Mocker + proxy application

Do you think we are missing an alternative of algernon or a related project?
README

<!-- title: Algernon description: Web server with built-in support for Lua, Markdown, Pongo2, Amber, Sass, SCSS, GCSS, JSX, Bolt, PostgreSQL, Redis, MariaDB, MySQL, Tollbooth, Pie, Graceful, Permissions2, users and permissions keywords: web server, QUIC, lua, markdown, pongo2, application server, http, http2, HTTP/2, go, golang, algernon, JSX, React, BoltDB, Bolt, PostgreSQL, Redis, MariaDB, MySQL, Three.js theme: material -->

Web server with built-in support for QUIC, HTTP/2, Lua, Markdown, Pongo2, HyperApp, Amber, Sass(SCSS), GCSS, JSX, BoltDB (built-in, stores the database in a file, like SQLite), Redis, PostgreSQL, MariaDB/MySQL, rate limiting, graceful shutdown, plugins, users and permissions. All in one small self-contained executable.

Distro Packages

Quick installation (development version)

Requires Go 1.14 or later. Clone algernon outside of GOPATH: git clone https://github.com/xyproto/algernon cd algernon go build -mod=vendor This may also work: go get -u github.com/xyproto/algernon

Releases and pre-built images

See the release page for pre-built releases for a variety of platforms and architectures. The docker image is a total of 9MB.

Technologies

Written in Go. Uses Bolt (built-in), MySQL, PostgreSQL or Redis (recommended) for the database backend, permissions2 for handling users and permissions, gopher-lua for interpreting and running Lua, http2 for serving HTTP/2, QUIC for serving over QUIC, blackfriday for Markdown rendering, amber for Amber templates, Pongo2 for Pongo2 templates, Sass(SCSS) and GCSS for CSS preprocessing. logrus is used for logging, [goja-babel](https://github.com/jvatic/goja-babel) for converting from JSX to JavaScript, tollbooth for rate limiting, pie for plugins and graceful for graceful shutdowns.

Design decisions

- HTTP/2 over SSL/TLS (https) is used by default, if a certificate and key are given.
- If not, regular HTTP is used.
- QUIC ("HTTP over UDP", supported by Chromium) can be enabled with a flag.
- /data and /repos have user permissions, /admin has admin permissions and / is public, by default. This is configurable. - The following filenames are special, in prioritized order: - index.lua is Lua code that is interpreted as a handler function for the current directory. - index.html is HTML that is outputted with the correct Content-Type. - index.md is Markdown code that is rendered as HTML. - index.txt is plain text that is outputted with the correct Content-Type. - index.pongo2, index.po2 or index.tmpl is Pongo2 code that is rendered as HTML. - index.amber is Amber code that is rendered as HTML. - index.hyper.js or index.hyper.jsx is JSX+HyperApp code that is rendered as HTML - data.lua is Lua code, where the functions and variables are made available for Pongo2, Amber and Markdown pages in the same directory. - If a single Lua script is given as a commandline argument, it will be used as a standalone server. It can be used for setting up handlers or serving files and directories for specific URL prefixes. - style.gcss is GCSS code that is used as the style for all Pongo2, Amber and Markdown pages in the same directory. - The following filename extensions are handled by Algernon: - Markdown: .md (rendered as HTML) - Pongo2: .po2, .pongo2 or .tpl (rendered as any text, typically HTML) - Amber: .amber (rendered as HTML) - Sass: .scss (rendered as CSS) - GCSS: .gcss (rendered as CSS) - JSX: .jsx (rendered as JavaScript/ECMAScript) - Lua: .lua (a script that provides its own output and content type) - HyperApp: .hyper.js or .hyper.jsx (rendered as HTML) - Other files are given a mimetype based on the extension. - Directories without an index file are shown as a directory listing, where the design is hardcoded. - UTF-8 is used whenever possible. - The server can be configured by commandline flags or with a lua script, but no configuration should be needed for getting started. 
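As an illustration of the special filenames listed above, a hypothetical served directory could be laid out like this (the project name and contents are made up; only the filenames follow Algernon's conventions):

```
mysite/
├── index.md        # rendered as HTML, styled by style.gcss
├── style.gcss      # style for the Markdown/Pongo2/Amber pages in this directory
├── data.lua        # variables/functions made available to templates here
├── about/
│   └── index.amber # Amber page, rendered as HTML
└── admin/
    └── index.lua   # Lua handler; /admin requires admin rights by default
```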
Features and limitations

- Supports HTTP/2, with or without HTTPS (browsers may require HTTPS when using HTTP/2).
- Also supports regular HTTP.
- Can use Lua scripts as handlers for HTTP requests.
- The Algernon executable is compiled to native and is reasonably fast.
- Works on Linux, OS X and 64-bit Windows.
- The Lua interpreter is compiled into the executable.
- Live editing/preview when using the auto-refresh feature.
- The use of Lua allows for short development cycles, where code is interpreted when the page is refreshed (or when the Lua file is modified, if using auto-refresh).
- Self-contained Algernon applications can be zipped into an archive (ending with .zip or .alg) and be loaded at start.
- Built-in support for Markdown, Pongo2, Amber, Sass(SCSS), GCSS and JSX.
- Redis is used for the database backend, by default.
- Algernon will fall back to the built-in Bolt database if no Redis server is available.
- The HTML title for a rendered Markdown page can be provided by the first line specifying the title, like this: title: Title goes here. This is a subset of MultiMarkdown.
- No file converters need to run in the background (like for SASS). Files are converted on the fly.
- If -autorefresh is enabled, the browser will automatically refresh pages when the source files are changed. Works for Markdown, Lua error pages and Amber (including Sass, GCSS and data.lua). This only works on Linux and OS X, for now. If listening for changes on too many files, the OS limit for the number of open files may be reached.
- Includes an interactive REPL.
- If only given a Markdown filename as the first argument, it will be served on port 3000, without using any database, as regular HTTP. Handy for viewing README.md files locally.
- Full multithreading. All available CPUs will be used.
- Supports rate limiting, by using tollbooth.
- The help command is available at the Lua REPL, for a quick overview of the available Lua functions.
- Can load plugins written in any language.
Plugins must offer the Lua.Code and Lua.Help functions and talk JSON-RPC over stderr+stdin. See pie for more information. Sample plugins for Go and Python are in the plugins directory.
- Thread-safe file caching is built-in, with several available cache modes (for only caching images, for example).
- Can read from and save to JSON documents. Supports simple JSON path expressions (like a simple version of XPath, but for JSON).
- If cache compression is enabled, files that are stored in the cache can be sent directly from the cache to the client, without decompressing.
- Files that are sent to the client are compressed with gzip, unless they are under 4096 bytes.
- When using PostgreSQL, the HSTORE key/value type is used (available in PostgreSQL version 9.1 or later).
- No external dependencies, only pure Go.
- Requires Go >= 1.14 or GCC >= 10 (gccgo).

Q&A

Q: What is the benefit of using this? In what scenario would this excel? Thanks. -- [email protected]

A: Good question. I'm not sure if it excels in any scenario. There are specialized web servers that excel at caching or at raw performance. There are dedicated backends for popular front-end toolkits like Vue or React. There are dedicated editors that excel at editing and previewing Markdown, or HTML. I guess the main benefit is that Algernon covers a lot of ground, with a minimum of configuration, while being powerful enough to have a plugin system and support for programming in Lua. There is an auto-refresh feature that uses Server-Sent Events, when editing Markdown or web pages. There is also support for the latest in Web technologies, like HTTP/2, QUIC and TLS 1.3. The caching system is decent. And the use of Go ensures that also smaller platforms like NetBSD and systems like Raspberry Pi are covered. There are no external dependencies, so Algernon can run on any system that Go can support. The main benefit is that it is versatile, fresh, and covers many platforms and use cases.
For a more specific description of a potential benefit, a more specific use case would be needed.

Utilities

- Comes with the alg2docker utility, for creating Docker images from Algernon web applications (.alg files).
- http2check can be used for checking if a web server is offering HTTP/2.

Installation

OS X

Arch Linux

- Install algernon from AUR, using your favorite AUR helper.

Any system where go is available

This method is using the latest commit from the main branch: go get -u github.com/xyproto/[email protected] If needed, first:
- Set the GOPATH. For example: export GOPATH=~/go
- Add $GOPATH/bin to the path. For example: export PATH=$PATH:$GOPATH/bin

Overview

Running Algernon:

Screenshot of an earlier version:

The idea is that web pages can be written in Markdown, Pongo2, Amber, HTML or JSX (+React), depending on the need, and styled with CSS, Sass(SCSS) or GCSS, while data can be provided by a Lua script that talks to Redis, BoltDB, PostgreSQL or MariaDB/MySQL. Amber and GCSS are a good combination for static pages, allowing for more clarity and less repetition than HTML and CSS. It's also easy to use Lua for providing data for the Amber templates, which helps separate model, controller and view. Pongo2, Sass and Lua also combine well. Pongo2 is more flexible than Amber. The auto-refresh feature is supported when using Markdown, Pongo2 or Amber, and is useful to get an instant preview when developing. The JSX to JavaScript (ECMAScript) transpiler is built-in. Redis is fast, scalable and offers good data persistence. This should be the preferred backend. Bolt is a pure key/value store, written in Go. It makes it easy to run Algernon without having to set up a database host first. MariaDB/MySQL support is included because of its widespread availability. PostgreSQL is a solid and fast database that is also supported.

Screenshots

Markdown can easily be styled with Sass or GCSS.

This is how errors in Lua scripts are handled, when Debug mode is enabled.
One of the poems of Algernon Charles Swinburne, with three rotating tori in the background. Uses CSS3 for the Gaussian blur and three.js for the 3D graphics.

Screenshot of the prettify sample. Served from a single Lua script.

JSX transforms are built-in. Using React together with Algernon is easy.

Samples

The sample collection can be downloaded from the samples directory in this repository, or here: samplepack.zip.

Getting started

Run Algernon in "dev" mode

This enables debug mode, uses the internal Bolt database, uses regular HTTP instead of HTTPS+HTTP/2 and enables caching for all files except: Pongo2, Amber, Lua, Sass, GCSS, Markdown and JSX. algernon -e Then try creating an index.lua file with print("Hello, World!") and visit the served web page in a browser.

Enable HTTP/2 in the browser (for older browsers)

- Chrome: go to chrome://flags/#enable-spdy4, enable, save and restart the browser.
- Firefox: go to about:config, set network.http.spdy.enabled.http2draft to true. You might need the nightly version of Firefox.

Configure the required ports for local use

- You may need to change the firewall settings for port 3000, if you wish to use the default port for exploring the samples.
- For the auto-refresh feature to work, port 5553 must be available (or another host/port of your choosing, if configured otherwise).

Prepare for running the samples

git clone https://github.com/xyproto/algernon make -C algernon

Launch the "welcome" page

- Run ./welcome.sh to start serving the "welcome" sample.
- Visit http://localhost:3000/

Create your own Algernon application, for regular HTTP

mkdir mypage cd mypage
- Create a file named index.lua, with the following contents: print("Hello, Algernon")
- Start algernon --httponly --autorefresh.
- Visit http://localhost:3000/.
- Edit index.lua and refresh the browser to see the new result.
- If there were errors, the page will automatically refresh when index.lua is changed.
- Markdown, Pongo2 and Amber pages will also refresh automatically, as long as -autorefresh is used.
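Building on the minimal index.lua above, here is a slightly larger sketch using a few of the request functions that are documented further down in this README. Note that this only runs inside Algernon's embedded Lua interpreter (content, method, formdata and print are Algernon built-ins, and "name" is just an example form field):

```lua
-- index.lua: respond differently to GET and POST.
content("text/html")
if method() == "POST" then
  local data = formdata()
  -- greet the submitted name, or fall back to a default
  print("<p>Hello, " .. (data["name"] or "stranger") .. "!</p>")
else
  print('<form method="POST"><input name="name"><button>Greet</button></form>')
end
```

With --autorefresh enabled, edits to this file show up in the browser without a manual restart.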
Create your own Algernon application, for HTTP/2 + HTTPS

mkdir mypage cd mypage
- Create a file named index.lua, with the following contents: print("Hello, Algernon")
- Create a self-signed certificate, just for testing: openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 3000 -nodes
- Press return at all the prompts, but enter localhost at Common Name.
- For production, store the keys in a directory with as strict permissions as possible, then specify them with the --cert and --key flags.
- Start algernon.
- Visit https://localhost:3000/.
- If you have not imported the certificates into the browser, nor used certificates that are signed by trusted certificate authorities, perform the necessary clicks to confirm that you wish to visit this page.
- Edit index.lua and refresh the browser to see the result (or a Lua error message, if the script had a problem).

Basic Lua functions

// Return the version string for the server. version() -> string // Sleep the given number of seconds (can be a float). sleep(number) // Log the given strings as information. Takes a variable number of strings. log(...) // Log the given strings as a warning. Takes a variable number of strings. warn(...) // Log the given strings as an error. Takes a variable number of strings. err(...) // Return the number of nanoseconds from 1970 ("Unix time") unixnano() -> number // Convert Markdown to HTML markdown(string) -> string // Return the directory where the REPL or script is running. If a filename (optional) is given, then the path to where the script is running, joined with a path separator and the given filename, is returned. scriptdir([string]) -> string // Return the directory where the server is running. If a filename (optional) is given, then the path to where the server is running, joined with a path separator and the given filename, is returned. serverdir([string]) -> string

Lua functions for handling requests

// Set the Content-Type for a page.
content(string) // Return the requested HTTP method (GET, POST etc). method() -> string // Output text to the browser/client. Takes a variable number of strings. print(...) // Return the requested URL path. urlpath() -> string // Return the HTTP header in the request, for a given key, or an empty string. header(string) -> string // Set an HTTP header given a key and a value. setheader(string, string) // Return the HTTP headers, as a table. headers() -> table // Return the HTTP body in the request (will only read the body once, since it's streamed). body() -> string // Set a HTTP status code (like 200 or 404). Must be used before other functions that write to the client! status(number) // Set a HTTP status code and output a message (optional). error(number[, string]) // Serve a file that exists in the same directory as the script. Takes a filename. serve(string) // Serve a Pongo2 template file, with an optional table with template key/values. serve2(string[, table]) // Return the rendered contents of a file that exists in the same directory as the script. Takes a filename. render(string) -> string // Return a table with keys and values as given in a posted form, or as given in the URL. formdata() -> table // Return a table with keys and values as given in the request URL, or in the given URL (`/some/page?x=7` makes the key `x` with the value `7` available). urldata([string]) -> table // Redirect to an absolute or relative URL. May take an HTTP status code that will be used when redirecting. redirect(string[, number]) // Permanent redirect to an absolute or relative URL. Uses status code 302. permanent_redirect(string) // Transmit what has been outputted so far, to the client. flush() Lua functions for formatted output // Output rendered Markdown to the browser/client. The given text is converted from Markdown to HTML. Takes a variable number of strings. mprint(...) // Output rendered Amber to the browser/client. The given text is converted from Amber to HTML.
Takes a variable number of strings. aprint(...) // Output rendered GCSS to the browser/client. The given text is converted from GCSS to CSS. Takes a variable number of strings. gprint(...) // Output rendered HyperApp JSX to the browser/client. The given text is converted from JSX to JavaScript. Takes a variable number of strings. hprint(...) // Output rendered React JSX to the browser/client. The given text is converted from JSX to JavaScript. Takes a variable number of strings. jprint(...) // Output rendered HTML to the browser/client. The given text is converted from Pongo2 to HTML. The first argument is the Pongo2 template and the second argument is a table. The keys in the table can be referred to in the template. poprint(string[, table]) // Output a simple HTML page with a message, title and theme. // The title and theme are optional. msgpage(string[, string][, string]) Lua functions related to JSON Tips: - Use JFile(filename) to use or store a JSON document in the same directory as the Lua script. - A JSON path is of the form x.mapkey.listname[2].mapkey, where [, ] and . have special meaning. It can be used for pinpointing a specific place within a JSON document. It's a bit like a simple version of XPath, but for JSON. - Use tostring(userdata) to fetch the JSON string from the JFile object. // Use, or create, a JSON document/file. JFile(filename) -> userdata // Takes a JSON path. Returns a string value, or an empty string. jfile:getstring(string) -> string // Takes a JSON path. Returns a JNode or nil. jfile:getnode(string) -> userdata // Takes a JSON path. Returns a value or nil. jfile:get(string) -> value // Takes a JSON path (optional) and JSON data to be added to the list. // The JSON path must point to a list, if given, unless the JSON file is empty. // "x" is the default JSON path. Returns true on success. jfile:add([string, ]string) -> bool // Take a JSON path and a string value. Changes the entry. Returns true on success.
jfile:set(string, string) -> bool // Remove a key in a map. Takes a JSON path, returns true on success. jfile:delkey(string) -> bool // Convert a Lua table, where keys are strings and values are strings or numbers, to JSON. // Takes an optional number of spaces to indent the JSON data. // (Note that keys in JSON maps are always strings, ref. the JSON standard). json(table[, number]) -> string // Create a JSON document node. JNode() -> userdata // Add JSON data to a node. The first argument is an optional JSON path. // The second argument is a JSON data string. Returns true on success. // "x" is the default JSON path. jnode:add([string, ]string) -> bool // Given a JSON path, retrieves a JSON node. jnode:get(string) -> userdata // Given a JSON path, retrieves a JSON string. jnode:getstring(string) -> string // Given a JSON path and a JSON string, set the value. jnode:set(string, string) // Given a JSON path, remove a key from a map. jnode:delkey(string) -> bool // Return the JSON data, nicely formatted. jnode:pretty() -> string // Return the JSON data, as a compact string. jnode:compact() -> string // Sends JSON data to the given URL. Returns the HTTP status code as a string. // The content type is set to "application/json; charset=utf-8". // The second argument is an optional authentication token that is used for the // Authorization header field. jnode:POST(string[, string]) -> string // Alias for jnode:POST jnode:send(string[, string]) -> string // Same as jnode:POST, but sends HTTP PUT instead. jnode:PUT(string[, string]) -> string // Fetches JSON over HTTP given an URL that starts with http or https. // The JSON data is placed in the JNode. Returns the HTTP status code as a string.
jnode:GET(string) -> string // Alias for jnode:GET jnode:receive(string) -> string // Convert from a simple Lua table to a JSON string JSON(table) -> string Lua functions for making HTTP requests Quick example: GET("") // Create a new HTTP Client object HTTPClient() -> userdata // Select Accept-Language (ie. "en-us") hc:SetLanguage(string) // Set the request timeout (in milliseconds) hc:SetTimeout(number) // Set a cookie (name and value) hc:SetCookie(string, string) // Set the user agent (ie. "curl") hc:SetUserAgent(string) // Perform a HTTP GET request. First comes the URL, then an optional table with // URL parameters, then an optional table with HTTP headers. hc:Get(string, [table], [table]) -> string // Perform a HTTP POST request. It's the same arguments as for `Get`, except // the fourth optional argument is the POST body. hc:Post(string, [table], [table], [string]) -> string // Like `Get`, except the first argument is the HTTP method (like "PUT") hc:Do(string, string, [table], [table]) -> string // Shorthand for HTTPClient():Get() GET(string, [table], [table]) -> string // Shorthand for HTTPClient():Post() POST(string, [table], [table], [string]) -> string // Shorthand for HTTPClient():Do() DO(string, string, [table], [table]) -> string Lua functions for plugins // Load a plugin given the path to an executable. Returns true on success. Will return the plugin help text if called on the Lua prompt. Plugin(string) // Returns the Lua code as returned by the Lua.Code function in the plugin, given a plugin path. May return an empty string. PluginCode(string) -> string // Takes a plugin path, function name and arguments. Returns an empty string if the function call fails, or the results as a JSON string if successful. CallPlugin(string, string, ...) 
-> string Lua functions for code libraries These functions can be used in combination with the plugin functions, for storing Lua code returned by plugins when serverconf.lua is loaded, and then for retrieving the Lua code later, when handling requests. The code is stored in the database. // Create or use a code library object. Optionally takes a data structure name as the first parameter. CodeLib([string]) -> userdata // Given a namespace and Lua code, add the given code to the namespace. Returns true on success. codelib:add(string, string) -> bool // Given a namespace and Lua code, set the given code as the only code in the namespace. Returns true on success. codelib:set(string, string) -> bool // Given a namespace, return Lua code, or an empty string. codelib:get(string) -> string // Import (eval) code from the given namespace into the current Lua state. Returns true on success. codelib:import(string) -> bool // Completely clear the code library. Returns true on success. codelib:clear() -> bool Lua functions for file uploads // Creates a file upload object. Takes a form ID (from a POST request) as the first parameter. // Takes an optional maximum upload size (in MiB) as the second parameter. // Returns nil and an error string on failure, or userdata and an empty string on success. UploadedFile(string[, number]) -> userdata, string // Return the uploaded filename, as specified by the client uploadedfile:filename() -> string // Return the size of the data that has been received uploadedfile:size() -> number // Return the mime type of the uploaded file, as specified by the client uploadedfile:mimetype() -> string // Save the uploaded data locally. Takes an optional filename. Returns true on success. uploadedfile:save([string]) -> bool // Save the uploaded data as the client-provided filename, in the specified directory. // Takes a relative or absolute path. Returns true on success.
uploadedfile:savein(string) -> bool Lua functions for the file cache // Return information about the file cache. CacheInfo() -> string // Clear the file cache. ClearCache() // Load a file into the cache, returns true on success. preload(string) -> bool Lua functions for data structures Set // Get or create a database-backed Set (takes a name, returns a set object) Set(string) -> userdata // Add an element to the set set:add(string) // Remove an element from the set set:del(string) // Check if a set contains a value // Returns true only if the value exists and there were no errors. set:has(string) -> bool // Get all members of the set set:getall() -> table // Remove the set itself. Returns true on success. set:remove() -> bool // Clear the set set:clear() -> bool List // Get or create a database-backed List (takes a name, returns a list object) List(string) -> userdata // Add an element to the list list:add(string) // Get all members of the list list:getall() -> table // Get the last element of the list // The returned value can be empty list:getlast() -> string // Get the N last elements of the list list:getlastn(number) -> table // Remove the list itself. Returns true on success. list:remove() -> bool // Clear the list. Returns true on success. list:clear() -> bool // Return all list elements (expected to be JSON strings) as a JSON list list:json() -> string HashMap // Get or create a database-backed HashMap (takes a name, returns a hash map object) HashMap(string) -> userdata // For a given element id (for instance a user id), set a key // (for instance "password") and a value. // Returns true on success. hash:set(string, string, string) -> bool // For a given element id (for instance a user id), and a key // (for instance "password"), return a value. // Returns a value only if the key was found and if there were no errors.
hash:get(string, string) -> string // For a given element id (for instance a user id), and a key // (for instance "password"), check if the key exists in the hash map. // Returns true only if it exists and there were no errors. hash:has(string, string) -> bool // For a given element id (for instance a user id), check if it exists. // Returns true only if it exists and there were no errors. hash:exists(string) -> bool // Get all keys of the hash map hash:getall() -> table // Remove a key for an entry in a hash map // (for instance the email field for a user) // Returns true on success hash:delkey(string, string) -> bool // Remove an element (for instance a user) // Returns true on success hash:del(string) -> bool // Remove the hash map itself. Returns true on success. hash:remove() -> bool // Clear the hash map. Returns true on success. hash:clear() -> bool KeyValue // Get or create a database-backed KeyValue collection (takes a name, returns a key/value object) KeyValue(string) -> userdata // Set a key and value. Returns true on success. kv:set(string, string) -> bool // Takes a key, returns a value. // Returns an empty string if the function fails. kv:get(string) -> string // Takes a key, returns the value+1. // Creates a key/value and returns "1" if it did not already exist. // Returns an empty string if the function fails. kv:inc(string) -> string // Remove a key. Returns true on success. kv:del(string) -> bool // Remove the KeyValue itself. Returns true on success. kv:remove() -> bool // Clear the KeyValue. Returns true on success. kv:clear() -> bool Lua functions for external databases // Query a PostgreSQL database with a SQL query and a connection string PQ([string], [string]) -> table The default connection string is host=localhost port=5432 user=postgres dbname=test sslmode=disable and the default SQL query is SELECT version(). Database connections are re-used if they still answer to .Ping(), for the same connection string. 
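As a small sketch of how the data structures above can be used, here is a hypothetical page-view counter backed by the KeyValue store. This assumes it runs inside Algernon (with any of the supported database backends); the collection and key names are made up:

```lua
-- Count and display page views, persisted in the database backend.
local kv = KeyValue("counters")     -- get or create the collection
local views = kv:inc("frontpage")   -- "1" on the first visit, then incremented
if views == "" then
  -- kv:inc returns an empty string on failure, per the listing above
  err("could not increment the view counter")
  views = "?"
end
print("This page has been viewed " .. views .. " time(s).")
```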
Lua functions for handling users and permissions // Check if the current user has "user" rights UserRights() -> bool // Check if the given username exists (does not look at the list of unconfirmed users) HasUser(string) -> bool // Check if the given username exists in the list of unconfirmed users HasUnconfirmedUser(string) -> bool // Get the value from the given boolean field // Takes a username and field name BooleanField(string, string) -> bool // Save a value as a boolean field // Takes a username, field name and boolean value SetBooleanField(string, string, bool) // Check if a given username is confirmed IsConfirmed(string) -> bool // Check if a given username is logged in IsLoggedIn(string) -> bool // Check if the current user has "admin rights" AdminRights() -> bool // Check if a given username is an admin IsAdmin(string) -> bool // Get the username stored in a cookie, or an empty string UsernameCookie() -> string // Store the username in a cookie, returns true on success SetUsernameCookie(string) -> bool // Clear the login cookie ClearCookie() // Get a table containing all usernames AllUsernames() -> table // Get the email for a given username, or an empty string Email(string) -> string // Get the password hash for a given username, or an empty string PasswordHash(string) -> string // Get all unconfirmed usernames AllUnconfirmedUsernames() -> table // Get the existing confirmation code for a given user, // or an empty string. Takes a username. ConfirmationCode(string) -> string // Add a user to the list of unconfirmed users // Takes a username and a confirmation code // Remember to also add a user, when registering new users. 
AddUnconfirmed(string, string) // Remove a user from the list of unconfirmed users // Takes a username RemoveUnconfirmed(string) // Mark a user as confirmed // Takes a username MarkConfirmed(string) // Removes a user // Takes a username RemoveUser(string) // Make a user an admin // Takes a username SetAdminStatus(string) // Make an admin user a regular user // Takes a username RemoveAdminStatus(string) // Add a user // Takes a username, password and email AddUser(string, string, string) // Set a user as logged in on the server (not cookie) // Takes a username SetLoggedIn(string) // Set a user as logged out on the server (not cookie) // Takes a username SetLoggedOut(string) // Log in a user, both on the server and with a cookie // Takes a username Login(string) // Log out a user, on the server (which is enough) // Takes a username Logout(string) // Get the current username, from the cookie Username() -> string // Get the current cookie timeout // Takes a username CookieTimeout(string) -> number // Set the current cookie timeout // Takes a timeout number, measured in seconds SetCookieTimeout(number) // Get the current server-wide cookie secret. This is used when setting // and getting browser cookies when users log in. CookieSecret() -> string // Set the current server-side cookie secret. This is used when setting // and getting browser cookies when users log in. Using the same secret // makes browser cookies usable across server restarts. SetCookieSecret(string) // Get the current password hashing algorithm (bcrypt, bcrypt+ or sha256) PasswordAlgo() -> string // Set the current password hashing algorithm (bcrypt, bcrypt+ or sha256) // bcrypt+ accepts bcrypt or sha256 for old passwords, but will only use // bcrypt for new passwords.
SetPasswordAlgo(string) // Hash the password // Takes a username and password (username can be used for salting sha256) HashPassword(string, string) -> string // Change the password for a user, given a username and a new password SetPassword(string, string) // Check if a given username and password is correct // Takes a username and password CorrectPassword(string, string) -> bool // Checks if a confirmation code is already in use // Takes a confirmation code AlreadyHasConfirmationCode(string) -> bool // Find a username based on a given confirmation code, // or returns an empty string. Takes a confirmation code FindUserByConfirmationCode(string) -> string // Mark a user as confirmed // Takes a username Confirm(string) // Mark a user as confirmed, returns true on success // Takes a confirmation code ConfirmUserByConfirmationCode(string) -> bool // Set the minimum confirmation code length // Takes the minimum number of characters SetMinimumConfirmationCodeLength(number) // Generates a unique confirmation code, or an empty string GenerateUniqueConfirmationCode() -> string Lua functions that are available for server configuration files // Set the default address for the server on the form [host][:port]. // May be useful in Algernon application bundles (.alg or .zip files). SetAddr(string) // Reset the URL prefixes and make everything *public*. ClearPermissions() // Add an URL prefix that will have *admin* rights. AddAdminPrefix(string) // Add an URL prefix that will have *user* rights. AddUserPrefix(string) // Provide a lua function that will be used as the permission denied handler. DenyHandler(function) // Return a string with various server information. ServerInfo() -> string // Direct the logging to the given filename. If the filename is an empty // string, direct logging to stderr. Returns true on success. LogTo(string) -> bool // Returns the version string for the server. version() -> string // Logs the given strings as INFO. Takes a variable number of strings. 
log(...) // Logs the given strings as WARN. Takes a variable number of strings. warn(...) // Logs the given strings as ERROR. Takes a variable number of strings. err(...) // Provide a Lua function that will be run once, when the server is ready to start serving. OnReady(function) // Use a Lua file for setting up HTTP handlers instead of using the directory structure. ServerFile(string) -> bool // Serve files from this directory. ServerDir(string) -> bool // Get the cookie secret from the server configuration. CookieSecret() -> string // Set the cookie secret that will be used when setting and getting browser cookies. SetCookieSecret(string) Functions that are only available for Lua server files These functions are only available when a Lua script is used instead of a server directory, or from Lua files that are specified with the ServerFile function in the server configuration. // Given a URL path prefix (like "/") and a Lua function, set up an HTTP handler. // The given Lua function should take no arguments, but can use all the Lua functions for handling requests, like `content` and `print`. handle(string, function) // Given a URL prefix (like "/") and a directory, serve the files and directories. servedir(string, string) Commands that are only available in the REPL `help` displays a syntax highlighted overview of most functions. `webhelp` displays a syntax highlighted overview of functions related to handling requests. `confighelp` displays a syntax highlighted overview of functions related to server configuration. Extra Lua functions // Pretty print. Outputs the values in, or a description of, the given Lua value(s). pprint(...) // Takes a Python filename, executes the script with the `python` binary in the PATH. // Returns the output as a Lua table, where each line is an entry. py(string) -> table // Takes one or more system commands (possibly separated by `;`) and runs them. // Returns the output lines as a table.
run(string) -> table // Lists the keys and values of a Lua table. Returns a string. // Lists the contents of the global namespace `_G` if no arguments are given. dir([table]) -> string Markdown Algernon can be used as a quick Markdown viewer with the -m flag. Try algernon -m README.md to view README.md in the browser, serving the file once on a port >3000. In addition to the regular Markdown syntax, Algernon supports setting the page title and syntax highlight style with a header comment like this at the top of a Markdown file: <!-- title: Page title theme: dark code_style: lovelace replace_with_theme: default_theme --> Code is highlighted with highlight.js and several styles are available. The string that follows replace_with_theme will be used for replacing the current theme string (like dark) with the given string. This makes it possible to use one image (like logo_default_theme.png) for one theme and another image ( logo_dark.png) for the dark theme. The theme can be light, dark, redbox, bw, github, wing, material, neon, default, werc or a path to a CSS file. Or style.gcss can exist in the same directory. An overview of available syntax highlighting styles can be found at the Chroma Style Gallery. HTTPS certificates with Let's Encrypt and Algernon Follow the guide at certbot.eff.org for the "None of the above" web server, then start algernon with --cert=/etc/letsencrypt/live/mydomain.space/cert.pem --key=/etc/letsencrypt/live/mydomain.space/privkey.pem where mydomain.space is replaced with your own domain name. First make Algernon serve a directory for the domain, like /srv/mydomain.space, then use that as the webroot when configuring certbot with the certbot certonly command. Remember to set up a cron-job or something similar to run certbot renew every once in a while (every 12 hours is suggested by certbot.eff.org). Also remember to restart the algernon service after updating the certificates. 
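As a concrete sketch of the certificate workflow described above, the commands below tie the steps together. The domain and webroot are the ones from the example; the crontab entry and the `algernon` systemd service name are assumptions for illustration, so adjust them for how Algernon is installed on your system:

```sh
# Obtain a certificate, using the directory Algernon already serves as the webroot
sudo certbot certonly --webroot -w /srv/mydomain.space -d mydomain.space

# Start Algernon with the resulting certificate and key
algernon --cert=/etc/letsencrypt/live/mydomain.space/cert.pem \
         --key=/etc/letsencrypt/live/mydomain.space/privkey.pem \
         /srv/mydomain.space

# Crontab entry: attempt renewal every 12 hours and restart Algernon afterwards
0 */12 * * * certbot renew --quiet --post-hook "systemctl restart algernon"
```

`--post-hook` is certbot's standard way to run a command only after a successful renewal, which covers the "restart the algernon service after updating the certificates" step above.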
A way to refresh the certificates without restarting Algernon will be implemented in the future. Releases - Arch Linux package in the AUR. - Windows executable. - OS X homebrew package. - Algernon Tray Launcher for OS X, in the App Store. - Source releases are tagged with a version number at release. Requirements - go 1.14 or later is supported. - For go 1.10, 1.11, 1.12 and 1.13, as well as gcc-go < 10, version 1.12.7 of Algernon is the latest supported version. Access logs Algernon can log to a Combined Log Format access log with the --accesslog flag. This works nicely together with goaccess. Example usage Serve files in one directory: algernon --accesslog=access.log -x Then visit the web page once, to create one entry in access.log. The wonderful goaccess utility can then be used to view the access log, while it is being filled: goaccess --no-global-config --log-format=COMBINED access.log If you have goaccess set up correctly, running goaccess without any flags should work too: goaccess access.log .alg files .alg files are just renamed .zip files that can be served by Algernon. There is an example application here: wercstyle. Logo license Thanks to Egon Elbre for the two SVG drawings that I remixed into the current logo (CC0 licensed). Listening to port 80 without running as root For Linux: sudo setcap cap_net_bind_service=+ep /usr/bin/algernon General information - Version: 1.12.12 - License: MIT - Alexander F. Rødseth <[email protected]> Stargazers over time
https://go.libhunt.com/algernon-alternatives
tf.convert_to_tensor( value, dtype=None, name=None, preferred_dtype=None ) Defined in tensorflow/python/framework/ops.py. Converts the given value to a Tensor. This function converts Python objects of various types to Tensor objects. It accepts Tensor objects, numpy arrays, Python lists, and Python scalars. For example: import numpy as np def my_func(arg): arg = tf.convert_to_tensor(arg, dtype=tf.float32) return tf.matmul(arg, arg) + arg # The following calls are equivalent. value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]])) value_2 = my_func([[1.0, 2.0], [3.0, 4.0]]) value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)) This function can be useful when composing a new operation in Python (such as my_func in the example above). All standard Python op constructors apply this function to each of their Tensor-valued inputs, which allows those ops to accept numpy arrays, Python lists, and scalars in addition to Tensor objects. Args: value: An object whose type has a registered Tensorconversion function. dtype: Optional element type for the returned tensor. If missing, the type is inferred from the type of value. name: Optional name to use if a new Tensoris created. preferred_dtype: Optional element type for the returned tensor, used when dtype is None. In some cases, a caller may not have a dtype in mind when converting to a tensor, so preferred_dtype can be used as a soft preference. If the conversion to preferred_dtypeis not possible, this argument has no effect. Returns: An Output based on value. Raises: TypeError: If no conversion function is registered for value. RuntimeError: If a registered conversion function returns an invalid value.
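The `preferred_dtype` behaviour described above can be illustrated with a short sketch. Note this is untested here and written against the TensorFlow 1.x-style signature shown on this page (in later TensorFlow releases the parameter was renamed `dtype_hint`):

```python
import tensorflow as tf

# dtype is a hard requirement: the list must convert to float32,
# otherwise a TypeError is raised.
a = tf.convert_to_tensor([1, 2, 3], dtype=tf.float32)

# preferred_dtype is only a soft preference. A Python int list *can*
# be converted to float32, so the preference is honored here.
b = tf.convert_to_tensor([1, 2, 3], preferred_dtype=tf.float32)

# `a` is already a float32 Tensor, which cannot be implicitly converted
# to int32, so the preference has no effect and the dtype stays float32.
c = tf.convert_to_tensor(a, preferred_dtype=tf.int32)
```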
https://www.tensorflow.org/api_docs/python/tf/convert_to_tensor
User talk:The Thinker/Archive 7 From Uncyclopedia, the content-free encyclopedia Talk Archive 2 | Talk Archive 5 Talk Archive 3 | Talk Archive 6 Talk Page Virginity Steal! That is all.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 17:50, 25 November 2007 (UTC) - Now it's a gang-bang! - P.M., WotM, & GUN, Sir Led Balloon (Tick Tock) (Contribs) 18:09, Nov 25 Talk Page Rape! Yes, I raped your talk page. It almost refused to drink the wine, too...your talk page has a strong will. Nonetheless, I got at it eventually. Shame it was already devirginized. They're always better the first time around. --THE 22:13, 25 November 2007 (UTC) YES!!!1 CONGRATULATIONS!!! You have won a raffle for tickets to Of course, when we did the raffle we were drunk, and all the other tickets except yours were spilled and blown away, but still! Congrats! AT LAST!!! :-D Thanks for the vote, not to mention all the help you gave me on that article. I couldn't have gotten it featured without you. Come to think of it, I probably wouldn't have gotten an account without you...so thanks! WOOOOOO! SO, who's up for a little drunken bowling? */me pulls out a bowling ball and aims it at a nearby group of pidgeons* --THE 01:07, 28 November 2007 (UTC) - Yeah, its about freakin time already!! Glad to see it make the main page with such a high voting margin; I even added it to my THINKER features. :) --THINKER 01:25, 28 November 2007 (UTC) - Heee, thanks! Third times the charm!! I love Jaws did WTC, by the way, probably one of the funniest articles I've read in a while. I'll probably make it my "favorite artikle" for next month, unless I forget. :) --THE 23:33, 28 November 2007 (UTC) Speaking of how awesome you are... Wanna split the credit for Jews did WTC? What with the fact you wrote all but like two paragraphs of it, I figured I'd better ask before taking partial credit for that one when it's featured Tomorrow.--<< >> 13:12, 29 November 2007 (UTC) - Oh yeah man, thats a given. 
My rule of thumb is that if I didn't come up with the concept and/or start the article 100% on my own, anyone else involved gets due credit for the work as well regardless of how much I did. I try to be good about credit; I've got a feature or two under my belt, so a .5 vs. 1 isn't big enough a concern to start short-changing my valued collaborators. On my page I have it listed as an original collaboration between you, me and Mhaille on percussionpictures. I'm really glad it turned out so well, and especially proud that its going to FA :) --THINKER 16:58, 29 November 2007 (UTC) - Does that mean that I get .0000001 of an FA for giving Brad the Loose Change idea on IRC? :D -- 22:59, 29 November 2007 (UTC) - Sure does! However, you failed to do anything to the actual article itself, so I'm appropriating that .0000001 into the UnCommons FA municipal fund. Don't be sad; its for the greater good! Of me, and the fake organization I just made up! --THINKER 00:50, 30 November 2007 (UTC) - Yay! Do I get a tax rebate now? -- 00:52, 30 November 2007 (UTC) - Sure thing. Its .00000001 FA. Oh, and of course we pay out in drachmas. --THINKER 00:58, 30 November 2007 (UTC) - Ooh, drachmas. You know, I hear that if you convert .00000001 FA to those you could actually get about 15,000,000 yen. Imagine that.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 01:24, 30 November 2007 (UTC) - Mordillo says he'll trade it for 25 shekels. I don't know which is more valuable. --THINKER 01:29, 30 November 2007 (UTC) - Well, how about this offer: I'll give you....1 American dollar for it, which comes out to about .25 shekels, and then we can use it to buy a turn on the "aim the nuke" machine at the arcade.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) Thanks for your vote! In the n00b! Er, Is there something wrong with the maths there??? Have fun:57, 30 November 2007 (UTC) - Indeed there was. Math was never my strong suit.
Thats why I bullshit around here all the livelong day. And I hope you win; you're actively voting on VFH, which I feel is extremely important. Good luck buddy. --THINKER 02:55, 30 November 2007 (UTC) Thanks! Thanks! --Andorin Kato 02:03, 3 December 2007 (UTC) I'm sorry I'm sorry that you didn't like my article. I hope I didn't do anything wrong personally to get you on my bad side, Thinker, cuz I think you're pretty cool. • <-> (Dec 5 @ 03:59) - Nah, its nothing more than the explanation I gave there. I've liked many of your other pieces, voted as such. And I do like the article; just not the ending. ;) --THINKER 04:19, 5 December 2007 (UTC) I am sorry that you hated it tho, that's all I wanted to say. I tried to make it funny anyway • <-> (Dec 5 @ 04:26) - Oh man, I'm whining... I'll go now... • <-> (Dec 5 @ 05:02) Unicorns-slash-rainbows and happiness I'm not sorry you voted for unicorns! Fear, Loathing, and NOTES ON THE PROJECT in Las Vegas Pretty snazzy title for a "notes on the project" section, eh? :D. Alright, thinks seem to have slowed down a bit as far as continuing development of "the Project" is concerned. Do you still want to do the "Behind the scenes with the Smiths" page? If not, then shall we take what we have already and go ahead with the DVD menu plan? If you do still want to do it, then STOP PROCRASTINATING ALREADY!!!!! :-D --THE 16:03, 8 December 2007 (UTC) - Ah ha! The project indeed! God its been a while since we've done anything on that one... I still think the Smiths doc is cool, I'll create a userspace page for us to work on (unless you want to do it in yours for consistency). But yes you are correct, we must do that to full-out the DVD, then get that ready, get pics on everything, and release this damn thing. Lets try to have it out by new years, ay? I think we could. :) --THINKER 02:11, 9 December 2007 (UTC) - Yeah, let's go for it! Yes, before the dawn of 2008, the DVD page shall be complete. Excellent! 
I got a behind the scenes with the Smiths page started up here. Let's get rollin'! :D --THE 13:31, 9 December 2007 (UTC) - So, just to klarify...the Smiths page is gonna be a sort of mix between an unscript and a regularly written article, right? --THE 22:16, 10 December 2007 (UTC) - Yes, definitely. I was thinking we'd have the article portion be an almost narrator-like presence, intermixed with smith dialogue. And I promise I will start on it very soon, and no, I'm not bullshitting this time! :D --THINKER 04:55, 11 December 2007 (UTC) - Excellent! I loved it so far. I got a few ideas for jokes as I read through what you have so far, and I added some content. It feels good to be workin' on the project again :D! As far as length and formatting is concerned...are we gonna divide the Smiths page into various sections with a table of contents, like a script, or just leave it as a kind of "big block," like it is in its present form? --THE 15:16, 15 December 2007 (UTC) - I was contemplating that. Due to its nature, maybe we could just use line breaks like how Hell's Chicken is divided? Maybe those lines could be like commercial breaks (ie. "Well be back with more innocent bystanders right after this" then line break). What do you think? --THINKER 19:10, 15 December 2007 (UTC) - Yeah, that sounds good. Oh, and is the Smith commentary also going to include the "smythe" guy who played Alister? I think it'd be hilariously ironic to make Smythe some sort of moronic gas station attendant, with no idea of what any of his dialogue was supposed to mean. Whaddya think? I'll be able to work on it more tomorrow, and perhaps today, if this beastly snowstorm doesn't kill my electricity, as I'm expecting it to any sec-- *ZAP!* --THE 14:28, 16 December 2007 (UTC) - Sure man, that sounds good. We can evolve it as it goes along. 
Lets just try to either not include the Smiths that are listed in the character box on the script page, or if we do (and I guess we sorta should, since they are the main characters), lets keep their descriptions from the box consistsnt with what we write about them in the article. I started reading your edits and got a little sidetracked last night, so I'll read them today after business is taken care of. --THINKER 19:21, 16 December 2007 (UTC) - Yeah, I agree about the main characters...I suppose we can put them in a kind of secondary place in the special, and focus more on the "untold story" of the production crew. But we should definitely have some bits from the guy who played Alister...and I of course couldn't resist a brief appearance by the female impersonator. There's just too much potential humor there to ignore :D. --THE 02:11, 17 December 2007 (UTC) Our noob needs help! "Have him write an article or two and we'll talk. --THINKER 01:29, 3 December 2007 (UTC)" - UnTalented has written a very funny article, which is on VFH. He is struggling in NotM, largely because the other nominees are whores. I'm now officially UnTalented's whore. Read his article, and vote! This noob needs a chance for greatness! He is by far the best writer up for NotM! -:20, Dec 10 Userpage Thanks for reverting the vandalism to my user page. I appreciate the help. MadMax 07:32, 16 December 2007 (UTC) - Of course dude, any time. Even when it leads to my own page getting vandal'd. But hey, at least I got a vanity page out of it. :D --THINKER 07:34, 16 December 2007 (UTC) PLS Judging Would you have any interest in judging for the upcoming PLS in January? Please let me know on my talk page whether or not you are/will be able to. Danke. --Hotadmin4u69 [TALK] 22:00,) 00:21, Dec 17 Bloody Pagans X-Mas Oh, and I'm sorry your vanity article got deleted. It was great-- 01:31, 17 December 2007 (UTC) - Hah, no worries. Twas funny when it was there.
Thanks for the well wisherings --THINKER 01:59, 17 December 2007 (UTC) One Of These Ah, Happy Holidays to my good friend, The Thinker. May your holidays be happy, may your presents be plenty, may your manly parts never mysteriously fail to function, for what is apparently no reason at all. And mostly, may all your Christmases be white...in south Florida. -RAHB 03:25, 17 December 2007 (UTC) - Ah, much appreciated RAHB, and the same to you (quite especially the manly parts thing). My New Years resolution involves writing something important. ;) --THINKER 16:29, 17 December 2007 (UTC) - My new years resolution involves alcohol, several types of drugs, sex, and other things of that nature. But it also involves writing some fucking important, fucking funny things. I can probably mix a few parts of that resolution too, but I'd probably be best to keep the sex out. Unless maybe I write something funny about sex. That's likely going to happen, I mean, come on, it's me. -RAHB 23:39, 17 December 2007 (UTC) Good day Sir RE your abstention of Uncyclopedia:VFH/Gay. Is there anything I can do to the article to make you reconsider? It's ambitious I know, and any additional comments you might have would be very welcome.:28, Dec 22 - Mmm, well I think the intro is good, but then gets a bit blatant. I would've stuck with the whole old-meaning-new-meaning thing with a bit more subtlety. Good piece though, that's why I didn't vote against. ;) --THINKER 21:45, 22 December 2007 (UTC) - I will see what I can:51, Dec 22 Merry Christmas -- Mitch 13:24, 24 December 2007 (UTC) Merry Xmas! --YeOldeLuke 08:00, 26 December 2007 (UTC) Help out a noob in need? You and RAHB wrote that page about Chicken Soup for the soul, right? Well, there's this noob down in the help forums who's working on a similar article, maybe you could help him out? Cheers, -) 06:04, Dec 29 Happy New Yeeeah! And thanks for helping me become a fully fledged Uncyclopedian, as opposed to a half-assed IP user. 
Alas, we didn't finish Sex Seafood in 2007, but we'll finish in 2008, dammit! --THE 22:42, 31 December 2007 (UTC) - Ah damn it, you're right. The best laid plans, ay? But yes, it will certainly be finished early next year. So happy new year to you as well my friend. :) --THINKER 01:21, 1 January 2008 (UTC) Thanks -- 00:41, 1 January 2008 (UTC) James man! EMC REVEWED IT A 47 man! I\mm Still drunk1!!! IMMA Nom IT!!!!!! -RAHB 05:00, 1 January 2008 (UTC) - Jesus Christ man!.....as I said, never again man. Never again. Or at least if I do ever again, I'll make sure nobody lets me near a computer, or a phone. -RAHB 08:38, 2 January 2008 (UTC) Ok, some assistance or something... I've noticed a trend. When I have an article on VFH, it will either pass with flying colors or struggle to stay alive after a series of votes. The trend I've noticed, is that when you vote against or abstain due to "missing something", it fails (or looks like it could fail/quasi). I'm trying to become a better writer here, and this seems like someplace to start. Thusly, it would be greatly appreciated if you could perhaps give a more in-depth analysis of what makes my condom article unfeaturable (in your eyes). Or, if you still just think it is "missing something", could you take a look at something in development of mine? Or you can not help me out at all, it's fine. Hell, I probably wouldn't even help me. Thanks either way! -:04, Jan 3 - I'd love to help. My only caveat is that I don't like to work with articles that have already been posted on VFH (with rare exception). I feel that if an author believes their work to be ready for VFH, then the work should stand as posted, rather than be preened during the nomination (excluding minor spelling, grammar and formatting mistakes, which are understandable). That's not what the nom is for. That is of course a broad personal belief and isn't meant to offend you. 
- As such, I would be more than happy to look at anything you have in development, help you cultivate article concepts, and of course collaborate on concepts that have potential. Link away! :) --THINKER 22:15, 3 January 2008 (UTC) - The simple answer to your question, UnIdiot, is that The Thinker is right about everything. Therefore, when he marks something against, he is not only expressing his opinion, but making a prediction of the future, based on it. This is not intentional, as he is blessed with the curse that is being right, every single time. Note: Somewhere hiding in the hills above Uncyclopedia, there is a mischievous group of monkeys, who constantly plot to throw the time continuum out of its....vertex or something. Thus, occasionally, they succeed in making the predicted event the opposite of what The Thinker said. They also have death rays and time machines, but thankfully they're not quite advanced enough to know how to use them. (FUCK YOU AND YOUR EDIT CONFLICTING THINKER!) -RAHB 22:17, 3 January 2008 (UTC) - Well yes, all of this is quite true. In addition, I still would be glad to help. And I can edit conflict all I want god damn it, this is my talk page! NOW CUP MY BALLS! --THINKER 22:19, 3 January 2008 (UTC) - God dammit. I was afraid you'd bring that completely relevant point in this conversation... /me hands you a glass (FUCK YOU UNIDIOT AND YOUR EDIT CONFLICTING ASS!!!!! In addition, have a nice day.) -RAHB 22:29, 3 January 2008 (UTC) - Ya, I don't want the condom article tweaked at all now, its how it is, and I still think its funny. Who knows, maybe it will pass VFH somehow, and the curse will be lifted. I've also noticed the strong RAHB/Thinker one two VFH punch, where you two vote right after another, usually with similar opinions on the article, and with RAHB either agreeing with Thinker or saying something along those lines. Oh yes Thinker, if you would take a peek at User:The UnIdiot/UnScript, that would be very nice. 
Its part 1 of a 2 part movie series thing I felt like doing. Before I write part 2 though, I should probably make sure part 1 isn't crap and likely to get me killed if I move it into the main namespace. Anyway, thanks for helping! HORRAY NO EDIT CONFLICT-:27, Jan 3 - The simple answer to that atrocious accusation is that I am Thinker's telekinetic apprentice. Though I am not yet as skilled as he in the art of premonition, I have been known to accurately predict the future (and be foiled in the process by those damn monkeys on more than one occasion) many times...../me looks around....woo! No conflict! -RAHB 22:32, 3 January 2008 (UTC) Before being FUCKING EDIT CONFLICTED, this was my message, word for word, in it's entirety: - Well, RAHB and I are actually linked telekinetically. As goofy as it sounds, its actually quite frightening how true it often is. That, and we also have hetero-sex with one another. Nothing goofy about that. Fucking frightening indeed. --THINKER 22:37, 3 January 2008 (UTC) - I have no words to say, but those that my wide-open-mouthed expression would convey. Though we're telekinetically linked, so if there were any words I was thinking right now, you'd probably know them anyways. Jesus fucking christ. =/ (and I won't even say a word about being FUCKING EDIT CONFLICTED by the UnIdiot just now!) -RAHB 22:41, 3 January 2008 (UTC) - Sounds good, thanks for the help. WELL SO FAR IT SEEMS I'M MANAGING TO AVOID BEING EDIT CONFLICTED SOMEHOW-:40, Jan 3 'Lo Thinker Just a quick response to the WotM stuff, and I thought I'd bring it here, to avoid any shit on the vote page, and hopefully any drama. Simply put: I don't want to cross swords with one of my fave writers, and I don't have a problem with your sentiments or your opinion, but statements like "oh come the fuck on you idiots" rub me up the wrong way a tad. I voted for Cajek last month as a result of a lot of careful thought, on what was a really tough call in my opinion. 
THE has been consistently good for ages, but in the last couple of months, I personally think no-one's pumped out a higher volume of articles that make me laugh than Cajek. Simple. Nothing to do with a popularity contest from this quarter, and while your comment wasn't aimed at anyone in particular, I was having a rough day at work, and my "narky bastard" reflex was triggered. Hope that explains things. /me reads through that again. Christ, I'm taking this wayyyy too seriously, aren't I? I have to remember this is a comedy site. *sigh* I'll go do something else now. Good luck with that other popularity contest you're involved with! ;-) --Sir Under User (Hi, How Are You?) VFH KUN 09:44, 4 January 2008 (UTC) - My comment had very little to do with Cajek. My comment was meant more as "come on people, why wouldn't you vote for THE?". And, like I said in my edit summary, the popularity contest comment was also not directed towards Cajek's run, but more at So So's 3 month stint in nominationland. But, if the spotlight is shown on Cajek's run, one might make mention of the fact that he has an established connection to many new users, of which a majority of his votership was comprised. Nothing wrong with that: he chose to get defensive about it. - Like I always say, all I care about is the quality of the site. Anything around here that I do, or the comments that I make are all directly related to that concept. However, I also realize that that concept is relative, so no hard feelings. :) --THINKER 17:49, 4 January 2008 (UTC) Message acknowledged I received your message on my IP account. I already have a Log - in - User account. Sometimes, though, I still use the IP. Thanks for your interest. It's up to you to figure which one it is. Happy New Year - 170.115.251.13 13:10, 4 January 2008 (UTC) A Noob in Need I, Stateyourname, do solemnly swear that I am a Noob in need of adoption.--Anakin 20:17, 8 January 2008 (UTC) - Hmm... eh, why not. How may I help you? 
--THINKER 20:25, 8 January 2008 (UTC) - Well, for one thing, I'm writing my first Uncyclopedia article: Atlanta. I'd like to know if what I've written is any good. Also, I've never uploaded images before, and I've already checked the HowTo thingy, and I'm still confused.--Anakin 15:53, 9 January 2008 (UTC) - HA! I just read your piece, and I like your style. You've definitely got the makings of a funny article there. Here are a few suggestions for it that I see at first viewing: - Since its a city (and a major one), its going to need a decent amount more input (not to discourage you -- I just dont want to see it get deleted after its WIP time has expired). If you continue with the route you're going now, you'll be fine. Take a look at some other city pages on the site; you'll be able to tell what is good city-specific humor and what is overkill. You want to have a good balance. - Once its lengthened, it just needs to be wikified: get a city template on it, links, etc. When you're ready for that, I'd be glad to help there also. - If you need more time to work on the piece past it's alotted construction time, fear not. You can move the article into your userspace and work on it there for as long as you'd like. In this case, the link would be [[User:Anakin/Atlanta]]. Again, I can help with that too if need be. - As for uploading an image, its a fairly simple process. All you have to do is click the Upload file link in the toolbox section of the sidebar in the lower left of your screen. When you're there, click "Browse" to find the file, then hit "Upload file" at the bottom of the page. And just like that, Uncyclopedia gains another much needed image. - When you want to put the image into the article, the generally used code is: [[Image:YourImage.jpg|thumb|250px|Whatever text you want to appear underneath the image as a caption]]. The first portion is Image:, then whatever the actual image name is (with file extension -- no space between it and the Image: part). 
Second is to make it a thumbnail-type image. Third is the size of the thumb. Fourth is something funny to accentuate the picture. - Well now that I've over-explained it as thoroughly as possible hope this helps. Keep me posted on your progress. When its ready, we'll get you up on the Pee Review. :) --THINKER 17:06, 9 January 2008 (UTC) It's almost done! What do you think?--Anakin 20:43, 10 January 2008 (UTC) - Looking good man! Keep it going!! I'll go through when you think you're 100% done and do some touchups for ya. Great effort dude, you've got the makings of a stellar Uncyclopedian. :) --THINKER 20:56, 10 January 2008 (UTC) TA-DA!!!--Anakin 15:33, 15 January 2008 (UTC) - I'll read this when I get off of work. Good stuff on following it through; I saw TKF pitched in a bit, which is a good sign. --THINKER 15:52, 15 January 2008 (UTC) Atlanta has been submitted to the Pee Review.--Anakin 18:13, 16 January 2008 (UTC) Whoa there Settle down, pal. I'd hardly consider what I put on here to be whoring. It's just a reminder that an article you already voted for is up for further voting. Chill out. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 02:33, 16 January 2008 (UTC) - Well your impersonal generic whoring (you might not call it as such, but it kinda is) overlooked the fact that I already voted on the top 10. And by now you really should know that whoring is the opposite of what Uncyc needs, but yet you continuously do it anyway. And you're a sysop. Mind = boggled. --THINKER 04:51, 16 January 2008 (UTC) - "Continuously" is an exaggeration. The last several articles I've written, I've hardly mentioned in IRC for days after completion. Since unsolicited feedback is pretty rare, I like to at least get a reaction (in addition to any pee reviewing) from readers, and "whoring" (as you and everybody else so delicately put it) is the only way to ensure that, it seems. 
Advertising is a cornerstone of capitalism and any semi-democratic society, and yet even at its barest it's frowned upon here. I don't get it. Sorry for making you take a whole ten seconds to read my innocent reminder; I'll be sure not to blemish your immaculate talk page again. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 06:31, 16 January 2008 (UTC) - Continuously is no exaggeration in the time I've been here. Granted it hasnt been nearly as long as you, but yet, I've hiked up the HoS ladder. And I perhaps, as a very young n00b, I might've "advertised" an article. But as I got situated, and wrote worth-while pieces, they got the votes on their own. Justify it as you may; right now (amazingly) one of my pieces is leading the top 10 race. Check around if any "advertising," "mentioning," or any other form of "getting people to notice your work" was used in getting support for that. - I want the best for this website. As sad as that point is, it is totally the truth. And I'm sorry dude, while I respect you, and definitely enjoy your work, if you come to my page and leave a totally impersonal, obviously opinion-swaying message on my talk page, I'm gunna give you my opinion about the situation. If you want to continue the campaign, hit up the n00bs. Not the satirists. --THINKER 06:49, 16 January 2008 (UTC) - Well, I'm sorry, Mr. Twain. I guess I just feel shitty that I've been here since practically the beginning, hold the record for most features and yet have never had one in the top 10. When I should have had a good shot, stupid bullshit like euroipods (which is funny but not the top ten of anything) beat out my articles. This year, with so much good stuff up, I felt like I needed some way to seperate mine, or at least to remind people who liked them in the past that, well, they liked them in the past. 
I guess none of my three that are on there are as inherently funny as pouring hot water down trousers, but I would think that at least one of them could make the list. It doesn't look that way right now. In my life Uncyclopedia is way down the list of priorities, but it's something I'm good at (and in the big picture, writing is part of what I plan to make a living on... I'm eventually moving to Chicago to do improv and then from there angling for a writing job on Conan or a similar show) and for whatever reason I'm wired to seek out recognition rather than let it come to me. Anyway, sorry for whoring or whatever, and sorry for the impromptu therapy session, but I just thought I'd let you know why I still feel the need to campaign for my articles. Bye. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 17:09, 16 January 2008 (UTC) - Its fine dude. Whoring is just a peeve of mine. I hope one of yours makes it in the top 10, but if not, you're still the HoS king. As sad as it may be, Uncyc may not be a priority, but it is something I particularly enjoy. As writers, the better the site looks, the better our work looks on it. - I know a number of us want to write in the big picture.. What troupe are you joining in Chicago? I was thinking about going to UCB in NY at some point, depending on life. --THINKER 18:59, 16 January 2008 (UTC) - Well, I already did a summer thing at Second City a couple of years ago, but that was just a two week thing. This summer I'm planning on taking the 5-week intensive at iO so that when I finally move up there, I can enter at level three. I've been wanting to do comedy for a living since 7th grade, but my choice was really solidified when my sketch writing teacher at the Second City bootcamp said she could see me being very successful for a long time. I just wish I weren't stuck here in Kansas. 
I got accepted to Loyola University Chicago and got plenty of scholarship money (more than down here, in fact) but at KU I got enough scholarship money to cover everything and then some. Actually, I'm going to be using the leftover money they pay me to pay for the iO course. So I suppose with no student loans to worry about, the transition to Chicago should be a least a bit more manageable financially. - I've never been to NYC but I've heard the UCB theatre is fantastic. The improv special UCB did on Bravo several years ago inspired the format for the high school troupe I founded. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 04:58, 17 January 2008 (UTC) Get your ass into gear, Thinker Uncyclopedia:Pee Review/UnScripts:Chex Mix Addiction: Volume 1 Remember this? - UnIdiot | Talk? | Theme - 02:01, Jan 17 GTFO! Whore! Whore! Whore!)}"> 14:58, 20 January 2008 (UTC) I'm BAAAAaaaack! And I've written something new. However, you'll only get it if you've read The Odyssey.--Anakin 15:51, 23 January 2008 (UTC) Question: Have you officially adopted me as a n00b?--Anakin 15:31, 24 January 2008 (UTC) - I like the Odyssey parody. Of course it'd be nice if it was a complete parody from start to finish of the work, but yeah, that'd be 8,000 pages long so I understand. I'm glad to see you're exploring the alternate namespaces around here; have fun with them, thats what they're there for. - Sure, why not? I personally don't make it such a formal thing. The fact that I just always seem to be around here makes me accessable for any questions or help you might need. So yes, you are adopt'd, lest you feel someone else would be better suited to handle your concerns. --THINKER 17:22, 24 January 2008 (UTC) Mr. Kearsy Sorry about my last edit. MadMax 19:11, 24 January 2008 (UTC) - Not at all man! Thanks for showing me that the article still exists period. It allowed me to contact the author, and now we might get that page fixed up enough to be remade in mainspace. 
Besides, it was an easy fix either way. You're the man Max! <3 --THINKER 19:16, 24 January 2008 (UTC) That's a relief, I thought I was slipping for a minute! I generally have a soft spot for "2005-cruft" (although admittedly this particular article came a bit later) and Dave's Leather Jacket is an old favorite. I hope you and User:Standard Enemy AI can bring it back. MadMax 08:55, 25 January 2008 (UTC) Wheeeee! Thanks man! :) --THE 18:33, 25 January 2008 (UTC) I am deeply and seriously confused Hey there. I just came out of reading your most recently VFH'd article, Stephen Colbert, and, as promised, I am deeply and seriously confused. I should say to start that I am the epitome of the reader described in the article; I love The Colbert Report and I had never heard of Stephen before it. I should also say that I don't particularly enjoy articles that insult the reader; I believe the last good one was written over a year ago. So, at first glance at this article, I went through a variety of negative emotions, through disappointment, anger, bitterness, betrayal, and confusion. (You see, I had thought that most Uncyc users would be fans of the Report.) But I remembered that it was written by Thinker, a writer who truly knows his stuff, who can write with subtlety and nuance and whose articles can be trusted as having a deeper meaning. I was pretty sure that the last sentence of the introduction was a sign of this (even if the link's target disappointed me). However, after reading through it twice, I am struggling to justify it as a satirical piece. I am, admittedly, no comedic mastermind, but I know a little about satire: that it takes elements like exaggeration, reversal, etc. I could find precious little of these in the article outside of the obvious one, exaggeration. Is it meant to be a satire so straight-faced that it deceives? Am I simply missing the subtlety? Or is it genuinely attesting to the insipidity of the Report? 
I am honestly tempted to vote Against, something I never thought I'd do to one of your articles... but I can't do that when I don't really understand it, which I clearly don't. Can you please explain it to me? — Sir Wehp! (t!) (c!) — 05:24, 26 January 2008 (UTC) - (Thinker looks to the rest of Uncyc) See people, unlike most of you, who have brains incapable of this type of logic, Wehp thinks to ask about that which he doesn't comprehend at first glance. I think the same thing happened with Bow tie, which I was grateful for also. So let me explain a bit. - Essentially, this article is actually me parodying myself. For you see, there are people in this world who are snobs about anything and everything that they like. Sports snobs, film snobs, and in this case, comedy snobs. I have the tendency to be very prickish when people like a comedian for something that (I think) is less humorous than a previous work of the performer. This is because I generally encounter people in my everyday life who don't understand humor whatsoever, but like to purport that they do (I call it "college syndrome"). So anyway, the point of this piece is that this is an overly exaggerated version of me, knowing that the person I'm talking to doesn't understand and doesn't care about the "truths" that I'm conveying, yet I still feel the need to continue justifying my opinion into the ground regardless. - In essence, its actually a piece that I like very much personally because it does contain some points that I really do believe, such as the belief that the Colbert Report, like the Daily Show, is a weak platform for political opinions rather than a comedic venue; something Colbert was not prone to before his tenure under Jon Stewart, and that Stephen was a million times funnier on Strangers with Candy than he is now. Do I think every person who doesn't know about SWC or thinks he's funny on the Colbert Report is an idiot? Of course not (especially not the people here -- in real life.. well.. 
that depends, haha). - I hope that clears it up a bit. And try to check out some episodes of Strangers with Candy if you can, Wehp. I think Comedy Central still airs it every once and a while, and like the article says, it really is one of the greatest comedic ventures ever aired on television (in my seemingly unhumble opinion). :) --THINKER 17:47, 26 January 2008 (UTC) - I've always loved Strangers With Candy. I've also always loved Jon Stewart and Colbert's Daily Show work and the spin-off. Even though I understand it's an exaggeration, I don't support the sentiment in the slightest. (Especially that Kilborn has ever been even half as funny as Stewart, but that's a conversation for another time.) I suppose that with all the goodwill both shows have accumulated, backlash is inevitable. This looks like it is a case of "I liked Colbert before he was cool, and now he's such a sellout so fuck him". I've been a fan of Colbert's work from pretty close to the beginning as well, and I think that The Daily Show and especially the Colbert Report are the realization of the recognition Colbert has always deserved. I honestly think if you had never seen Colbert before the Report, you'd love his current work. Because if you remove the jaded hipster lens, he's just as good as he's always been. - Just my opinion. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 08:56, 28 January 2008 (UTC) - Stewart's delivery is a go-to formula that is 100% predictable. Perhaps when he first took over I was into it, but after however many years of the same old song and dance, I can almost do his act for him. I didn't say Kilborn was funnier, I said the show wasn't a platform for half-jokes masking political sentiment under his command, which was a better atmosphere for Colbert (according to this, Colbert wasn't political until the show took it's turn). - But thats just one of the few reasons I don't like either show. 
And as much as I'd like to attribute my distaste to being a jaded viewer, clinging to the productions I found funny and not accepting anything further, I know that not to be the case. I just know what I find funny and what I don't find funny. I've been cultivating that instinct for many, many years. And I'm sorry, I don't find these politically-charged springboards humorous in the slightest. - I'm not alone though. Like I said in the article, listen to the audience. They never laugh. I suppose everyone's laughing at home. Anyway, no cigar on that one my friend; as a seasoned observer of the art of comedy, I do not find them funny. You don't have to agree or vote for the article though. I had absolutely 0 intentions of this ever going to VFH anyway. But, I would think that as someone who also believes that he has a grasp on comedic disproportion, that you'd see past the sentiment and recognize the humor of the work itself. Book 'em Danno. --THINKER 12:57, 28 January 2008 (UTC) - I thought it was too much angry, not enough funny (and by "not enough" I mean "hardly any"). That's why I voted against. But I disagree with the sentiment. Agree to disagree. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 18:34, 28 January 2008 (UTC) - LOL, thats so ironic, considering that every article you've written around here I personally find so exceedingly hilarious. - I was considering starting a convo with you about the Colbert page, but then there already was one, so I decided to join in the pre-existing one. And I believe I sensed a little sarcasm in your first sentence. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 18:51, 28 January 2008 (UTC) - So then, this convo would be: Your opinion, my rebuttle, dead stop? Thats not much of a convo. - I was by no means ending the conversation. 
I mean, odds are we won't convince the other one to budge from his position on this particular topic, but any healthy and stimulating debate you'd want to participate in, I'd be game for. Such as: I love Steve Martin, and most of his earlier movies make my favorites list, but he'd be a better fit for this type of article than Colbert. I mean, the excess vitriol didn't appeal to me from a comedic standpoint to begin with, but I'd be more inclined to agree that a guy who made The Jerk and The Man With Two Brains turning around to make Cheaper By The Dozen 1 and 2 might qualify as the biggest traitor in comedy. Yeah, Steve Martin is making those movies for the paycheck so he can focus on his writing (which is still pretty good), but I'd like to see another genuine Steve Martin movie. It's not as controversial a target as a critically acclaimed hour of television, but in my opinion it's a more accurate one. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 20:15, 28 January 2008 (UTC) - Steve Martin certainly fits the mold too, but the jump is not as dramatic. His progression downward was a bit more paced. Almost a snowballing-type of dive into the land of poor comedy. And, I could see the article going that same way if I felt as strongly about Steve Martin, and had been following his career from the moment I'd seen him (which in a sense I had, since I too am a fan of classic Steve Martin, but I just didn't connect as strongly early on because of the time gap between his career's launch and my existence -- I've been following Colbert since Exit 57). - Excessive vitrol is a great comedic device when used properly: obviously we see that you don't feel it was used properly here, but in the case of a Lewis Black, Andrew Dice Clay, Sam Kinnison or Lenny Bruce, angry comedy can be a perfectly viable form of comedic expression. Its not as trumped up in this piece, but thats part of the point, as I described previously. 
--THINKER 20:42, 28 January 2008 (UTC) - Oh, sure. But in the case of your article, I just couldn't find any jokes, you know? It was a rant, and a well-written rant, but it just lacked any actual comedy. And frankly, in this case the anger was directed squarely at any readers who like Colbert or Stewart, whereas with Black and Bruce the anger is with the audience. There's a difference. - Exit 57 was around before I had cable, and now it's just about impossible to find any trace of it beyond a few pretty great sketches on the YouTubes. I have a version of Dinello's "Guy Named Jesus" song from when it was originally performed at Second City, and it's one of the best things on the disc. But I've been a Colbert fan since SWC and own the whole series, and I could watch a classic SWC episode and a Colbert Report episode back-to-back and call it a damn fine hour of comedy. I see elements of Noblet in "Colbert" and vice versa, and at its core CR isn't a political show at all. It's a character-driven comedy, featuring a classic Colbert creation, and that creation just happens to have a politically-oriented job. Since CR is a show produced in the span of one day four times a week, it's pretty unfair to expect it to hit as often as a 12-episode sketch show or a 3-season sitcom. But mostly, CR produces character-based comedy and commentary that isn't rivaled in the medium of television. I think the criticism in your article is mostly pretty flimsy, and there are enough remaining elements of the "Old Colbert" visible to me in the "New Colbert" that I can't help but love them both. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 22:58, 28 January 2008 (UTC) - You analysis of CR and the character-driven comedy is fundamentally flawed. As a comedian, one inherently plays a character. How close or far from the actuality of it's presenter depends on that presenter. Hell, "THINKER" is a character. 
It draws from many elements of the me I actually am, but distorts many others, and changes to fit the needs of the comedy I present via this website. - Stephen Colbert had perfected his presentation to a tee in SWC. The character, the contour of his comedic style. It was presented through Chuck Noblet. On CR, yes, he is a character. However the character has no contour. It is a springboard for opinions. The character is not an actual character; it is little more shell. An imitation. The man behind the anchorman, like a puppet-master, is presenting his opinions via this created vessel. To say that there isn't a political motivation is not only ludicrous (given how amazingly overt the sentiment is), but its missing the mark which the presenter is hoping to achieve. And if you want to deny this fact in Colbert, as much of a stretch as it may be, so be it. But there is no way you can deny it in Jon Stewart. And as his protege (refer to the wikipedia reference mentioned previously), its an ipso facto situation. - Do I expect CR to hit as often as a 12-episode sketch show or 3-season sitcom? Of course not. Not because it isn't possible to achieve: because of all the reasons I've stated prior. The concept is a platform, and that cannot display humor anywhere near as cleverly, nor as functionally, as a serial, political slant or otherwise. The slant just deeply worsens the chances for those so called hits. - The step-up calls for punchlines. SWC did not. Consider this point before responding. --THINKER 05:21, 29 January 2008 (UTC) - I never said there isn't a political motivation. The character is obviously politically oriented. But it's also a pretty well fleshed out one. He's not merely a right-wing parody; he has myriad personal likes and dislikes, fears, behaviors and a well-established ego. The reason so many people love the show is because of how endearingly smart-yet-oblivious the character is. 
People don't "hate" the character (the way the audience "hates" the villain in a melodrama) even though most of the political positions he takes are usually counter to the positions of the fanbase. There's obviously the inherent irony, but he still has an appeal beyond that. An appeal only a great character can have. People don't fall in love with springboards. - As for Jon Stewart and the Daily Show, I believe their golden years were from Indecision 2000 to Indecision 2004. In that era there was still plenty of political humor, but it was much more based in cleverness and silliness than divisiveness. After Bush was re-elected, though, there was a definite increase in that divisive attitude from Jon Stewart, which admittedly made the show a tad less pleasant. But I still think that overall The Daily Show under Stewart's reign has produced some of the most memorable and funny moments in recent television history. Because the show is more than Stewart winking into the camera, even if you won't admit it. And if the audience doesn't laugh (in both shows), what's that sound they make after jokes that the hosts have to hold for? Hmm. - But your qualm is evidently not so much with Colbert and Stewart as it is with the templates of the shows themselves. I think that no matter how good the material really is, your excessively negative perception of the shows as "platforms" is clouding whatever enjoyment you might otherwise get out of them. Frankly, I just don't get it. You don't like the political slant? They're "reporting" on politics; what kind of slant are they supposed to have? You don't like the "constant stream of Bush jokes"? Well, Bush has been the most powerful man in the world for the past eight years. Why pretend like he doesn't exist in a faux news show? Both shows have produced brilliant satire (yes) but because they are the way that they are, you don't like them? I just don't buy how SWC being so great means TDS and TCR suck. 
TDS is just a different animal, and TCR is another Colbert character just with a political orientation. So they both require punchlines; so what? Honestly, if it's all about the presentation, your article's excessive anger is much cheaper than TDS and TCR's supposed springboardiness. Colbert never called me a fuckhead. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 06:14, 31 January 2008 (UTC) I am deeply confused ...and start my first sentence with a little sarcasm. And a tad of hubris. --] 23:27, 27 January 2008 (UTC) I am deeply fried Not that I have any qualms with your vote on Polish Inquisition. Just wondered why? Thanks. ~ Mordillo where is my GIVING HEAD? 16:55, 30 January 2008 (UTC) - Eh, being a Jewy McJewinstein (and a partially Polish one at that) the joke was a little thin for me personally. Not for, not against, just in the middles. :) --THINKER 17:01, 30 January 2008 (UTC) - First, it's good that I don't have to explain it to you, like those other gentiles. Any suggestions? ~ Mordillo where is my GIVING HEAD? 17:03, 30 January 2008 (UTC) Congrats! Keep up the good, um, wordmaking. You are, uh, right real good. Sir Modusoperandi Boinc! 00:44, 1 February 2008 (UTC) - And I second that congrats. I propose a toast....to something. -RAHB 02:27, 1 February 2008 (UTC) - /me fills a goblet.... --) Congratulations, compadre On your slaughter of me and victory over everybody else nominated for Writer of the Year. Of course, you are quite deserving of the honor. Know that, if I hadn't voted for Mhaille (another person for whom I have great respect), then my vote would have certainly been cast in your favor. Yours in writing, Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 02:46, 1 February 2008 (UTC) - I want you to know even though I voted for someone else, I still didn't want you to win. Congrats. --:50, Feb. 
1, 2008 Let's not forget who's mainly to blame for your winning of WotY Yes, some may say that it's "you" who actually "wrote" all those "featured articles", but they'd be wrong. Clearly, they are forgetting that I was the one who voted for your first featured article, and also nominated you, and probably did other stuff too that makes me completely responsible for your victory. You're welcome. --:48, 1 February 2008 (UTC) Perfect timing; all you chucklepots shall get your personal thankerings this evening. --THINKER 14:26, 1 February 2008 (UTC) A Word from My Newber Congratulations, The Thinker! Your GRAND PRIZE for winning the 2007 Writer of the Year Award is two prepaid round-trip tickets to the innovative city of Atlanta!!! Click here to fill out the appropriate forms!--Anakin 15:39, 1 February 2008 (UTC) - LOL! Now THAT looks like it has serious potential. Actually, if he isn't doing anything overly pressing, I might recommend that you mention this project to THE. He is quite good with UnScripts and may be able to guide you in this endeavor. - Keep it up n00bie! This type of consistency will land you a FA some time soon. --THINKER 15:54, 1 February 2008 (UTC) Fixed The image is fixed, your vote now sir? Dame GUN PotY WotM 2xPotM 17xVFH VFP Poo PMS •YAP• 01:07, 2 February 2008 (UTC) - I want to, but I still don't understand the joke of it. Why isn't it just UnBooks:Gone with the Wind? --THINKER 03:45, 2 February 2008 (UTC) Congratulations ...on being named Writer of the Year. Wow. What an honour. And a pretty template indeed that adorns the top of your user page. Well, you deserved it. Just now I randomly picked one of your articles - the one on Osama bin Laden and Disney World - and it confirmed that I made the right vote. Yes, I've been away, because I'm a student once again. And because I study creative writing, it's not exactly a break to write Uncyclopedia articles. I'm busy writing "literary" fiction, children's stuff, and song lyrics. 
I did one article recently - the UnNews about Uncyclopedia reaching 23,000 articles once again (by the way, you're UnQuoted in the article). Sir Roger 19:03, 2 February 2008 (UTC) - Nice! Yes I read that piece and liked it very much. Inadvertently, I think it might've been the catalyst for a widespread flame war going on in VD. - Good luck with the creative writing thing dude! Let me know when you've got anything online, I'd love to read it. --THINKER 19:09, 2 February 2008 (UTC) Frankly my dear...
Java Program to Find Prime Factors

Without delaying any further, here is our complete Java program to find prime factors. The logic for calculating prime factors is written inside the method primeFactors(long number); it's simple brute-force logic. We start from 2, because that's the first prime number (every number is also divisible by 1), then we iterate upward, stepping one at a time, until we find a prime factor. When we find a prime factor, we store it inside a Set and also reduce the number up to which we are looping. In order to run this program, you can simply copy-paste it into a file PrimeFactors.java and then compile and run it using the javac and java commands. If you have any difficulty running this program, you can also refer to this article for a step-by-step guide on how to run a Java program from the command prompt.

import java.util.HashSet;
import java.util.Set;

/**
 * Java program to print prime factors of a number. For example if input is 15,
 * then it should print 3 and 5; similarly if input is 30, then it should
 * display 2, 3 and 5.
 *
 * @author Javin Paul
 */
public class PrimeFactors {

    public static void main(String args[]) {
        System.out.printf("Prime factors of number '%d' are : %s %n", 35, primeFactors(35));
        System.out.printf("Prime factors of integer '%d' are : %s %n", 72, primeFactors(72));
        System.out.printf("Prime factors of positive number '%d' is : %s %n", 189, primeFactors(189));
        System.out.printf("Prime factors of number '%d' are as follows : %s %n", 232321, primeFactors(232321));
        System.out.printf("Prime factors of number '%d' are as follows : %s %n", 67232321, primeFactors(67232321));
    }

    /**
     * @return prime factors of a positive integer in Java.
     * @input 40
     * @output 2, 5
     */
    public static Set<Integer> primeFactors(long number) {
        Set<Integer> primefactors = new HashSet<>();
        long copyOfInput = number;
        for (int i = 2; i <= copyOfInput; i++) {
            if (copyOfInput % i == 0) {
                primefactors.add(i); // prime factor
                copyOfInput /= i;
                i--;
            }
        }
        return primefactors;
    }
}

Output:

Prime factors of number '35' are : [5, 7]
Prime factors of integer '72' are : [2, 3]
Prime factors of positive number '189' is : [3, 7]
Prime factors of number '232321' are as follows : [4943, 47]
Prime factors of number '67232321' are as follows : [12343, 419, 13]

If you are curious about what that angle bracket <> is, it's the diamond operator, introduced in Java 7 for better type inference. Now you don't need to write type parameters on both sides of the expression, as you had to in Java 1.6, which makes the code more readable.

Now coming back to the exercise: if you look at the output, it only returns the unique prime factors, because we are using the Set interface, which doesn't allow duplicates. If your interviewer asks you to write a program to divide a number into its prime factors and print all of them, then you need to use the List interface instead of Set. For example, the unique prime factors of '72' are [2, 3], but the number expressed in terms of its prime factors is [2, 2, 2, 3, 3]. If you need that kind of output, you can rewrite our primeFactors(long number) method to return a List<Integer>, as shown below:

    public static List<Integer> primeFactors(long number) {
        List<Integer> primefactors = new ArrayList<>();
        long copyOfInput = number;
        for (int i = 2; i <= copyOfInput; i++) {
            if (copyOfInput % i == 0) {
                primefactors.add(i); // prime factor
                copyOfInput /= i;
                i--;
            }
        }
        return primefactors;
    }

and here is the output of running the same program with this version of the primeFactors(long number) method. This time you can see all the prime factors instead of just the unique ones. This also illustrates the difference between the Set and List interfaces, an important lesson for beginners.
Prime factors of number '35' are : [5, 7]
Prime factors of integer '72' are : [2, 2, 2, 3, 3]
Prime factors of positive number '189' is : [3, 3, 3, 7]
Prime factors of number '232321' are as follows : [47, 4943]
Prime factors of number '67232321' are as follows : [13, 419, 12343]

Now it's time to practice writing some JUnit tests. There are actually two ways to test your code: one is by writing a main method, calling the method, and comparing the actual output to the expected output yourself. The other, much more advanced and preferred approach is to use a unit testing framework like JUnit. If you follow test-driven development, you can even write the tests before writing the code and let the tests drive your design and coding. Let's see how our program fares with some JUnit testing.

import static org.junit.Assert.*;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class PrimeFactorTest {

    private List<Integer> list(int... factors) {
        List<Integer> listOfFactors = new ArrayList<>();
        for (int i : factors) {
            listOfFactors.add(i);
        }
        return listOfFactors;
    }

    @Test
    public void testOne() {
        assertEquals(list(), PrimeFactors.primeFactors(1));
    }

    @Test
    public void testTwo() {
        assertEquals(list(2), PrimeFactors.primeFactors(2));
    }

    @Test
    public void testThree() {
        assertEquals(list(3), PrimeFactors.primeFactors(3));
    }

    @Test
    public void testFour() {
        assertEquals(list(2, 2), PrimeFactors.primeFactors(4));
    }

    @Test
    public void testSeventyTwo() {
        assertEquals(list(2, 2, 2, 3, 3), PrimeFactors.primeFactors(72));
    }
}

In our test class, PrimeFactorTest, we have five test cases covering corner cases, single-prime-factor cases, and multiple-prime-factor cases. We have also created a utility method list(int... ints) which takes advantage of Java 5 varargs to return a List of the given numbers. You can call this method with any number of arguments, including zero, in which case it will return an empty List. If you like, you can extend our test class to add a few more tests, e.g. a performance test, or some special-case tests for our prime factorization algorithm. Here is the output of our JUnit tests; if you're new to JUnit, you can also see this tutorial to learn how to create and run a JUnit test.

That's all about how to find prime factors of an integer number in Java. If you need more practice, you can also check out the following 20 programming exercises, covering various topics, e.g. LinkedList, String, Array, Logic, and Concurrency.

- How to Swap Two Numbers without using Temp Variable in Java? (Trick)
- How to check if LinkedList contains loop in Java? (Solution)
- Write a Program to Check if a number is Power of Two or not? (Answer)
- How to find middle element of LinkedList in one pass? (See here for Solution)
- How to check if a number is Prime or not? (Solution)
- Write a Program to find Fibonacci Series of a Given Number? (Solution)
- How to check if a number is Armstrong number or not? (Solution)
- Write a Program to prevent Deadlock in Java? (Click here for solution)
- Write a Program to solve Producer Consumer Problem in Java. (Solution)
- How to reverse String in Java without using API methods? (Solution)
- Write a Program to calculate factorial using recursion in Java? (Click here for solution)
- How to check if a number is Palindrome or not? (Solution)
- How to check if Array contains duplicate number or not? (Solution)
- How to remove duplicates from ArrayList in Java? (Solution)
- Write a Java Program to See if two String are Anagram of each other? (Solution)
- How to count occurrences of a character in String? (Solution)
- How to find first non repeated characters from String in Java? (See here for solution)
- Write a Program to check if a number is binary in Java? (Solution)
- How to remove duplicates from array without using Collection API? (Solution)
- Write a Program to calculate Sum of Digits of a number in Java? (Solution)
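One follow-up worth practicing (this variant is my own addition, not part of the original article): the brute-force loop shown earlier tests divisors all the way up to the remaining number, but trial division only needs to continue while i * i is at most the remaining value — once it passes that point, whatever is left over must itself be prime. A sketch of that optimization:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (not from the original article): a faster variant of primeFactors
// that stops trial division once i * i exceeds the remaining value. At that
// point the leftover value, if greater than 1, is itself a prime factor.
public class FastPrimeFactors {

    public static List<Long> primeFactors(long number) {
        List<Long> factors = new ArrayList<>();
        long remaining = number;
        for (long i = 2; i * i <= remaining; i++) {
            while (remaining % i == 0) { // divide out each prime completely
                factors.add(i);
                remaining /= i;
            }
        }
        if (remaining > 1) {             // leftover is a prime factor itself
            factors.add(remaining);
        }
        return factors;
    }

    public static void main(String[] args) {
        System.out.println(primeFactors(72));        // [2, 2, 2, 3, 3]
        System.out.println(primeFactors(67232321));  // [13, 419, 12343]
    }
}
```

For the inputs used in the article, this produces the same factor lists as the List-returning version, while doing far fewer iterations when the input has a large prime factor.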
Introduction

Repository for Open Translators to Things, grouped by translators written for specific schemas. The schema names are uniquely namespaced. The translator name is a unique string identifying a Thing. This README will help get you started developing in this repo.

Install Tools

Get your dev environment set up (PC or Mac):

- Install Git
- Install Node
- Choose your favorite IDE, e.g. Visual Studio Code.

Get the Source

Next, clone this repo to your local machine to get started. Navigate to the directory where you want to clone the repo to locally, then run:

    git clone

Create a New Translator

Follow our getting started guide at. Note that we have some required naming rules for translator node packages:

- The npm package names must always have the "opent2t-translator-" prefix. We are not currently using npm namespacing.
- After the prefix, we will kebab-case the reverse-URI that is the translator name, so e.g. "com.wink.lightbulb" becomes "opent2t-translator-com-wink-lightbulb".

Here is some background reading for those who are curious:

- Node package name requirements/rules:.
- Issue #50 includes a discussion and some context behind this naming guidance.

Run Integration Tests

- Install gulp globally: npm install -g gulp
- Install dependencies: npm install verifiers
- Run integration tests: gulp ci-checks

Notes:

- Other gulp tasks can be run as well; see gulpfile.js for available tasks.
- By default all files under the translators repo will be tested.
- Use the --cwd option to only test files under a specified directory: gulp --cwd .\org.opent2t.sample.windowshade.superpopular\ ci-checks

Create a Pull Request

Made any changes we should consider? Send us a pull request! Check out this article on how to get started.
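The translator package-naming rule described above ("opent2t-translator-" prefix plus the kebab-cased reverse-URI) is mechanical enough to express in code. This repo is Node-based, so treat the following Java helper purely as an illustration of the transform: the class and method names are invented for this sketch, and kebab-casing is simplified here to lowercasing plus replacing dots with hyphens, which matches the example given in the naming rules.

```java
// Illustrative sketch only (not code from this repo): derive the npm
// package name from a reverse-URI translator name per the naming rules.
public class TranslatorPackageName {

    // e.g. "com.wink.lightbulb" -> "opent2t-translator-com-wink-lightbulb"
    public static String npmPackageName(String translatorName) {
        return "opent2t-translator-" + translatorName.toLowerCase().replace('.', '-');
    }

    public static void main(String[] args) {
        System.out.println(npmPackageName("com.wink.lightbulb"));
        // prints: opent2t-translator-com-wink-lightbulb
    }
}
```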
## Publish a Translator Package to NPM

A translator package includes one thing translator along with all the schemas it references. Because those are not organized in the way `npm publish` expects, the process of publishing a translator package uses a script from the CLI repo.

1. Update the `version` property in the `package.json` file in the translator directory. (Of course any other metadata may be updated also, but a version bump is required when publishing to npm, since you may not re-publish over an existing version.)

2. Clone the CLI repo (or sync it as needed), and install its dependencies:

   ```
   cd ..
   git clone
   npm install
   cd ../translators
   ```

3. Use the script to generate a `package.json` for the translator to be published. Note the last parameter is a simple name of a translator, not a directory path, which would include a schema name.

   ```
   node ../opent2t-cli/pack-translator.js com.wink.thermostat
   ```

4. Edit the `package.json` to include directories for referenced schemas in the `files` collection at the end. Lines will include at least `"oic.core"` and `"oic.baseresource"`; possibly others if the OCF schema .json file has `$ref` references to others. (Eventually the pack-translator.js script should add these lines automatically.)

5. Ensure you're logged in to NPM under the opent2t account:

   ```
   npm login
   Username: opent2t
   Password: *********
   ```

6. Publish the package to NPM:

   ```
   npm publish
   ```

## Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
On 12/3/05, Martin van den Bemt <mllist@mvdb.net> wrote:
> Hi Stephen,
>
> Just add something like this in the maven.xml (the test is pretty useless probably..):
>
>     <j:if >
>       <ant:fixcrlf >
>     </j:if>
>
> And of course you should add xmlns:ant="jelly:ant" to have the ant namespace available..
>
> Don't know from the top of my head which target to hijack when using the dist
> target (this was using a completely custom made dist target)

Which is what you need to do to make that work. You can't pregoal any of the subgoals of dist to make this work, unfortunately. It requires a patch to the dist plugin itself, which has been applied to svn head. What is in svn head now does not allow the filter to be configured. I will submit another patch that allows both crlf and lf filters to be configured. We can choose to ignore the lf conversion if that is the consensus, but apply crlf to .txt and other files for zips.

Phil
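[For readers following along: the archived copy of the quoted snippet lost its attribute values. A filled-in sketch of what the maven.xml fragment could have looked like is below — the `test` expression, `srcdir` property, and `includes` pattern are all placeholder guesses, not recovered from the thread:]

```xml
<project xmlns:j="jelly:core" xmlns:ant="jelly:ant">
  <!-- Placeholder attribute values; the original message's attributes
       were stripped by the archive. -->
  <j:if test="${crlfFilterEnabled}">
    <ant:fixcrlf srcdir="${maven.build.dir}"
                 includes="**/*.txt" eol="crlf"/>
  </j:if>
</project>
```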